Chinese AI laboratories alleged to have taken ideas from Anthropic’s Claude chatbot

US AI Company Discovers Security Breach by Chinese Institutes

As the US government tightens export controls to protect its lead in artificial intelligence, the company Anthropic has revealed that China-based research institutes have found alternative means to leverage advanced US technology. A report obtained by Fox News Digital indicates that firms like DeepSeek, Moonshot AI, and MiniMax created around 24,000 fake accounts to generate over 16 million interactions with Anthropic's Claude chatbot in a strategic effort to extract high-value model insights.

The implications of this activity extend beyond just unauthorized access to American resources. Anthropic believes that models derived from large-scale distillation could lack the safety features inherent in US AI systems. They warned that foreign labs adapting the American model might integrate these unregulated capabilities into military and surveillance initiatives, potentially equipping authoritarian governments with advanced technology for offensive operations.

Kyrsten Sinema has voiced concerns about America’s position in the tech race, noting that falling behind could lead adversaries to instill AI with “Chinese values.”

In its findings, Anthropic pointed to evidence such as IP address analysis and server metrics that starkly differed from typical customer traffic patterns. The effort specifically targeted Claude’s most sophisticated functions, encompassing complex reasoning and coding tasks, rather than simpler user interactions.

Jacob Klein, Anthropic's head of threat intelligence, expressed confidence in the identification of these large-scale distillation attacks. Distillation is generally used to train less powerful AI models on the outputs of more capable ones. While frontier labs frequently use this process themselves to create affordable variants of their systems, Anthropic deems the discovered campaign unauthorized and a shortcut around extensive research efforts.
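The distillation described above can be sketched in a few lines: a "student" pipeline repeatedly queries a stronger "teacher" model and records its answers as supervised training pairs. The sketch below is illustrative only; `query_teacher` is a hypothetical stand-in for a real model API, not any actual endpoint.

```python
# Minimal sketch of output-based distillation: harvest a teacher
# model's answers as (prompt, response) pairs that could later be
# used to fine-tune a weaker student model.

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for a call to a stronger model's API."""
    return f"teacher-answer:{prompt}"

def harvest_pairs(prompts):
    """Collect supervised fine-tuning pairs from teacher outputs."""
    dataset = []
    for p in prompts:
        dataset.append({"prompt": p, "response": query_teacher(p)})
    return dataset

pairs = harvest_pairs(["explain recursion", "write a sort function"])
print(len(pairs))  # one training example per harvested prompt
```

At the scale alleged in the report, the same loop would simply run across millions of prompts spread over thousands of accounts, which is why the resulting traffic patterns differed so sharply from ordinary customer use.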

Throughout the three operations, the 16 million exchanges spanned several weeks to months. Anthropic acted upon detecting these unusual patterns but admitted that a more significant issue looms ahead. “There’s no quick fix for this,” Klein remarked, adding, “We view this as a challenge for humanity at large.” Although it remains unclear how much these Chinese labs have improved their models, Klein asserts that their advancement is “meaningful” and “significant.”

The findings bring to light questions regarding the effectiveness of current US export regulations, which primarily focus on limiting China’s access to sophisticated AI chips and the direct transfer of model weights. Klein highlighted that distillation targets a critical competitive advantage, more so than just the chips themselves. “Computing plays a part,” he stated, “but as we progress, reinforcement learning becomes increasingly crucial. Distillation enables access to these capabilities.”

Anthropic shared its insights with relevant government and industry stakeholders. Klein proposed that naming the institutes involved might lead to “thoughtful government action” or, at the very least, encourage engagement with the implicated firms.

While the company stated there is no direct evidence of coordination by the Chinese government, it noted that proxy services in China openly sell access to US frontier AI models. Despite US efforts to slow China's advancement in AI through restrictions on essential chips, Anthropic contended that foreign researchers could still mimic much of a model's intelligence by continually submitting queries and training their own systems on the results.

Meanwhile, on February 12, OpenAI informed the House Select Committee on the Chinese Communist Party that Chinese startup DeepSeek had systematically “stolen” intellectual property via distillation methods. OpenAI alleged that DeepSeek employed third-party routers and evasion techniques to bypass geographic access barriers and gather output from ChatGPT.

That same day, Google’s Threat Intelligence Group reported a “distillation attack” aimed at its Gemini models, observing a campaign with over 100,000 user prompts trying to replicate the model’s reasoning skills. Google attributed this activity to both private enterprises and entities linked to state interests.

This situation suggests that distillation is becoming a critical issue in the US-China AI rivalry, raising concerns about how American technology can be safeguarded even without the direct transfer of model weights or next-gen chips. Anthropic has recently come under scrutiny regarding its AI models' applications in military operations. Defense Secretary Pete Hegseth planned to meet with CEO Dario Amodei to establish conditions for military use of Claude. While some administration officials indicated that Anthropic had reservations about the model's involvement in a US operation against Venezuelan leader Nicolas Maduro, the company clarified that its disagreements concerned mass surveillance and fully autonomous weapons.
