Anthropic has warned about the risks of AI distillation attacks following allegations that three Chinese companies improperly used its Claude assistant. In a statement on its site, the firm asserted that DeepSeek, Moonshot, and MiniMax ran extensive operations aimed at secretly harnessing Claude's features to enhance their proprietary systems.

In artificial intelligence, distillation is a training technique in which a weaker "student" model learns from the outputs of a stronger "teacher" model. Although the method has legitimate uses, Anthropic highlighted its potential for abuse. The company reported that the three Chinese entities engaged in over 16 million interactions with Claude using around 24,000 fake profiles. Anthropic views this as rivals taking an unauthorized shortcut to building competitive AI, possibly evading built-in protections along the way.
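To make the mechanism concrete, here is a minimal toy sketch of the distillation idea described above. It assumes the student can obtain the teacher's output logits, which is a simplifying assumption (real pipelines often distill from sampled text instead); the function names and numbers are illustrative, not any lab's actual implementation.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's -- the quantity a student minimizes during distillation."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy next-token logits: the student queries the teacher on many prompts,
# then updates its weights to shrink this loss toward zero.
teacher = [4.0, 1.0, 0.5]
aligned = [3.9, 1.1, 0.4]   # student already close to the teacher
distant = [0.1, 3.0, 2.0]   # student far from the teacher

print(distillation_loss(teacher, aligned) < distillation_loss(teacher, distant))  # True
```

Repeated at the scale Anthropic alleges (millions of interactions), this kind of loop lets a student model absorb much of a teacher's behavior without access to its weights or training data.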

Through analysis of IP addresses, metadata patterns, and server details, along with input from other AI organizations that had observed similar activity, Anthropic said it attributed these operations to the named companies with high confidence.

Last year, OpenAI raised similar concerns about competitors distilling its technology and subsequently banned the accounts involved. Anthropic, the creator of Claude, plans to strengthen its defenses to make such exploitation harder and to improve detection. Meanwhile, the company faces a lawsuit from music rights holders, who accuse it of using unauthorized copies of songs to train Claude.