Army’s Relationship with Anthropic Under Scrutiny
The Department of the Army is contemplating ending its relationship with AI firm Anthropic by designating it a supply chain risk, a move that would compel other companies working with the U.S. military to sever ties with the AI startup.
According to a report, a senior Pentagon official suggested that Defense Secretary Pete Hegseth is close to a decision regarding the Army's partnership with Anthropic, which developed the Claude AI system. Designating a company as a supply chain risk is an uncommon step, typically reserved for foreign adversaries rather than domestic tech companies.
“It will be very painful to disentangle them. We will make sure they pay the price for forcing our hand in this way,” the official said.
Sean Parnell, the Pentagon's chief spokesperson, confirmed that the relationship is under review: “The relationship between the Department of the Army and Anthropic is being reviewed. Our nation expects our partners to be willing to help our warfighters win any conflict. Ultimately, this is about the safety of our military and the American people.”
This matter is significant because Anthropic’s Claude AI is currently the only AI model integrated into classified military systems. Its technology is already deeply embedded in military operations and was notably used in a raid against Venezuela’s Nicolas Maduro in January. Pentagon officials have praised Claude’s capabilities, complicating any potential split.
The dispute stems from prolonged negotiations between Anthropic and the Department of Defense over the terms of Claude’s military use. At its core is a fundamental disagreement about acceptable applications of AI technology. Anthropic’s CEO Dario Amodei has expressed willingness to compromise by easing certain restrictions, but the company maintains firm boundaries on how its technology may be applied.
Anthropic seeks assurances that its tools won’t support mass surveillance of U.S. citizens or lead to the creation of autonomous weapons systems without human oversight. However, the Pentagon argues that these limits impose unworkable barriers filled with ambiguous scenarios. Military officials are urging Anthropic, along with three other major AI firms (OpenAI, Google, and Elon Musk’s xAI), to permit military applications of their technology for “all lawful purposes.”
Sources familiar with the situation say senior defense officials have long harbored grievances against Anthropic and are now seizing the opportunity to confront these issues publicly.
Privacy advocates have raised alarms about how existing surveillance laws intersect with AI capabilities. Current mass surveillance regulations preceded AI technology, and the Pentagon already has broad authority to gather various types of information, from social media to security clearance data. Critics are concerned that AI could significantly enhance governmental targeting of civilians.
An Anthropic spokesperson noted that discussions are ongoing, saying, “We are having good faith and productive conversations with the DoD about how to continue our work and appropriately resolve these new and complex issues.” The spokesperson highlighted the company’s commitment to supporting national security applications of advanced AI technology, mentioning Claude’s pioneering role as the first AI system used in a classified network.
If Anthropic is officially designated a supply chain risk, the ripple effects could disrupt the broader defense contracting landscape. Numerous companies that work with the Department of Defense would have to verify whether they use Claude in their operations. That could prove a considerable challenge given Anthropic’s market penetration: reportedly, eight of the ten largest U.S. companies use Claude in their workflows.