Potential Split Between Pentagon and AI Firm Anthropic
Reports indicate that Defense Secretary Pete Hegseth may soon end the Pentagon’s partnership with AI company Anthropic and could go further by placing the company on a blacklist, which would force other defense contractors to stop using its Claude chatbot.
Tensions have escalated after months of fraught negotiations, with Anthropic seeking assurances that its technology will not be used for mass surveillance of citizens or in autonomous weapons. According to sources, those talks have now reached a breaking point.
Hegseth may designate Anthropic a “supply chain risk,” meaning companies that want to do business with the military could be required to cut ties with the fast-growing AI firm, according to a senior Pentagon official.
“Disentangling from them will be quite a challenge, and we’ll ensure there’s a consequence for putting us in this position,” the official said.
The “supply chain risk” designation, typically reserved for foreign entities deemed national security threats, reflects long-standing frustration with Anthropic among defense officials, who appear eager for a confrontation with the company.
In September, there were reports that Anthropic was preparing for a potential clash with the White House over allegations of left-leaning bias within the organization.
CEO Dario Amodei is noted for his Democratic Party contributions, and in March 2024, the Ford Foundation, known for its progressive stance, invested $5 million in Anthropic.
“We are evaluating the relationship with Anthropic. Our military partners should be committed to supporting our objectives,” Pentagon spokesperson Sean Parnell remarked.
“Ultimately, this is about ensuring the safety of our military personnel and citizens.”
An Anthropic spokesperson said Claude is the only AI model used in classified military systems, and reports indicate it was recently used in a U.S. operation aimed at capturing Venezuelan leader Nicolas Maduro.
“We are engaged in constructive discussions with the Department of War to address these intricate matters,” the spokesperson added.
Concerns have been raised about the risks of unrestricted AI access potentially enhancing the Pentagon’s capacity to target civilians.
The Pentagon argues that the limitations sought by Anthropic are overly restrictive, insisting that the AI tool be usable for “all lawful purposes.”
If Hegseth designates Anthropic a risk, other companies may choose to sever their relationships with it in order to stay in the Defense Department’s good graces.
While the Pentagon’s contract with Anthropic is worth roughly $200 million, that is only a small fraction of the company’s annual revenue of about $14 billion.
Even as OpenAI’s ChatGPT gains traction among consumers, Anthropic is making headway in business partnerships, recently collaborating with eight of the ten largest U.S. corporations.
Switching to rival AI tools from OpenAI, Google, or xAI may not be straightforward, though a senior official said those alternatives are “not far behind” for government use.
Though OpenAI, Google, and xAI have agreed to loosen safeguards on their chatbots for unclassified military use, their models are not yet deployed in classified systems.
A senior administration source expressed confidence that these AI firms will agree to the adjusted standards, though discussions are still ongoing.