Court Decision on Anthropic’s AI Blacklisting Challenge
A federal appeals court in Washington, D.C., has rejected Anthropic’s request for a temporary order to prevent the Pentagon from blacklisting its AI technology. This decision comes as the company is in the midst of a legal dispute with the Department of the Army.
The San Francisco-based AI firm initiated the lawsuit last month, contesting its designation as a supply chain risk. Anthropic argues that this label is a response to its efforts aimed at curbing governmental use of its technology for mass surveillance and military weaponry development.
The appeals court’s ruling noted that “the balance of fairness here favors the government,” explaining that the economic harm to one company is relatively limited compared with the Army’s need for secure AI technology amid military operations.
An earlier ruling by a federal court in San Francisco had declined to grant a preliminary injunction while Anthropic pursues its legal claims.
As a result of the appeals court’s decision, Anthropic has lost its ability to collaborate with the Department of the Army. Other defense contractors are also barred from utilizing the company’s Claude chatbot in their dealings with the Department of Defense, although Anthropic can still work with different government entities.
While the appeals court acknowledged the potential for “irreparable harm” to Anthropic, it emphasized that the company’s concerns appeared largely financial. Anthropic contends that the Pentagon’s actions infringe on its free speech, but the court determined that the company had not sufficiently shown its speech was suppressed during the legal process.
Despite this, the court indicated that the situation warrants further investigation due to the economic impact on Anthropic during the ongoing legal battle. An Anthropic spokesperson remarked on the court’s decision, expressing hope for a swift resolution and confidence that the supply chain designations would ultimately be deemed unlawful.
The company signed a $200 million agreement with the Department of the Army back in July, making it a key provider of AI models for the government’s classified networks. However, tensions spiked when Army Secretary Pete Hegseth criticized Anthropic during contract talks, particularly for seeking exemptions from mass surveillance and military applications.
Hegseth designated the company as a supply chain risk, a classification that has previously been applied to foreign firms considered national security threats, like Huawei.
If this designation stands, defense contractors would have to certify that they are not using Anthropic’s AI models in military-related projects.
The issue escalated last month with the leak of a 1,600-word letter from Anthropic CEO Dario Amodei, in which he criticized the Trump administration and claimed the Pentagon was retaliating for the company’s reluctance to support the president. Following these events, Amodei apologized for the tone of his leaked message.
After Anthropic’s blacklisting, Sam Altman’s OpenAI quickly secured a contract to provide AI services to the government.
In the leaked memo, Amodei pointed out that while Anthropic faced repercussions for not supporting Trump, OpenAI had a more favorable relationship with the administration, noting significant donations made by its president, Greg Brockman. He also expressed skepticism about OpenAI’s security protocols, describing them as “probably 20% real and 80% safe.”
During a technology conference earlier this year, Altman criticized Amodei’s stance, suggesting that abandoning democratic commitments due to political preferences would be detrimental to society.