Anthropic accuses Chinese AI labs of mining Claude as US debates AI chip exports

Anthropic has accused three Chinese AI companies of creating more than 24,000 fake accounts to interact with its Claude AI model. The goal was to improve their own models through a technique called distillation. The companies named are DeepSeek, Moonshot AI, and MiniMax.

These labs allegedly generated over 16 million exchanges with Claude through those accounts. Anthropic stated the labs specifically targeted Claude’s most advanced capabilities, which include agentic reasoning, tool use, and coding.

The accusations emerge during ongoing debates over U.S. export controls on advanced AI chips, a policy intended to slow China’s AI progress. Distillation is a standard training method for creating smaller, cheaper versions of a model, but competitors can use it to copy the capabilities of others. Earlier this month, OpenAI sent a memo to House lawmakers also accusing DeepSeek of using distillation to mimic its products.
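The mechanics of distillation can be illustrated with a toy sketch. This is a hypothetical, minimal example (not anything from Anthropic's report): a small "student" model is trained to match the soft probability outputs of a "teacher" model, the same basic idea behind training on another lab's API responses. All names and the linear-model setup here are illustrative assumptions.

```python
import math
import random

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy "teacher": a fixed linear classifier over 2 features, 2 classes.
# In real distillation this would be the large model being queried.
TEACHER_W = [[2.0, -1.0], [-1.5, 1.0]]

def teacher_probs(x):
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in TEACHER_W]
    return softmax(logits)

def train_student(data, lr=0.5, epochs=200):
    """Train a student by minimizing cross-entropy against the
    teacher's soft labels (the core of knowledge distillation)."""
    W = [[random.uniform(-0.1, 0.1) for _ in range(2)] for _ in range(2)]
    for _ in range(epochs):
        for x in data:
            target = teacher_probs(x)  # "query" the teacher
            logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
            probs = softmax(logits)
            # Gradient of cross-entropy wrt logits is (probs - target).
            for k in range(2):
                grad = probs[k] - target[k]
                for j in range(2):
                    W[k][j] -= lr * grad * x[j]
    return W

random.seed(0)
data = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(50)]
W_student = train_student(data)
```

After training, the student's predictions closely track the teacher's, even though the student never saw ground-truth labels, only the teacher's outputs. That is why distillation is both a legitimate compression technique and, when aimed at a competitor's API, a way to copy capabilities.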

DeepSeek first gained significant attention a year ago with its open-source R1 reasoning model, which nearly matched the performance of models from leading American labs at a much lower cost. DeepSeek is expected to soon release its latest model, DeepSeek V4, which reports indicate can outperform Anthropic’s Claude and OpenAI’s ChatGPT in coding.

The scale of the alleged attacks varied. Anthropic tracked over 150,000 exchanges from DeepSeek focused on improving foundational logic and alignment, particularly around censorship-safe alternatives to policy-sensitive queries. Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning, tool use, coding, data analysis, computer-use agent development, and computer vision. Last month, Moonshot AI released a new open-source model called Kimi K2.5 and a coding agent.

MiniMax’s activity involved 13 million exchanges targeting agentic coding, tool use, and orchestration. Anthropic said that when the latest Claude model launched, it observed MiniMax redirecting nearly half of its traffic to that model in an effort to extract its capabilities.

Anthropic says it will continue to invest in defenses to make distillation attacks harder to execute and easier to identify. The company is also calling for a coordinated response across the AI industry, cloud providers, and policymakers.

These alleged attacks coincide with heated debate over American chip exports to China. Last month, the administration formally allowed U.S. companies like Nvidia to export advanced AI chips, such as the H200, to China. Critics argue this loosening of controls boosts China’s AI computing capacity at a critical point in the global AI race.

Anthropic claims the scale of extraction performed by DeepSeek, MiniMax, and Moonshot requires access to advanced chips. The company stated that distillation attacks reinforce the rationale for export controls, as restricted chip access limits both direct model training and the scale of illicit distillation.

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he is not surprised by these allegations. He said it has been clear that part of China’s rapid AI progress has come from theft via distillation of U.S. models, and that this should be a compelling reason to refuse to sell AI chips to these companies.

Anthropic also warned that distillation threatens more than American AI dominance; it could create national security risks. The company builds safeguards into its models to prevent bad actors from using AI to develop bioweapons or carry out cyberattacks. Models built through illicit distillation may not retain these safeguards, allowing dangerous capabilities to proliferate without protections.

Anthropic pointed to the risk of authoritarian governments deploying frontier AI for offensive cyber operations, disinformation, and mass surveillance, a risk multiplied if such models are open-sourced. TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.