A U.S. judge has temporarily halted the Pentagon’s decision to blacklist Anthropic, marking a crucial moment in the ongoing conflict between the company and the military over AI safety in combat situations.
In a lawsuit filed in California federal court, Anthropic contends that Secretary of Defense Pete Hegseth exceeded his authority by labeling the company a national security supply chain risk, a designation the government applies to companies whose products could expose military systems to compromise.
Hegseth’s unusual move came after Anthropic declined to permit the military to use its AI chatbot, Claude, for surveillance or autonomous weapons, a refusal that led to the company’s exclusion from various military contracts.
Anthropic’s leadership has expressed that this could result in massive financial losses and reputational harm for the company.
The company maintains that its AI models are not yet reliable enough for autonomous weaponry, and it opposes domestic surveillance as a violation of its principles and rights. The Pentagon counters that private companies should not be able to impose limitations on military operations.
U.S. District Judge Rita Lin, appointed by former President Biden, delivered her ruling during a hearing in San Francisco. She granted Anthropic’s request for a temporary order to block the Pentagon’s designation while litigation is ongoing.
The ruling is not final, and the case remains in progress.
This situation with Anthropic is notable, as it represents the first instance of a U.S. company being designated as a supply chain risk under somewhat ambiguous government acquisition laws designed to protect military systems from foreign interference.
In its lawsuit, filed on March 9, Anthropic argued that the government’s actions were retaliation for its stance on AI safety, infringing its First Amendment rights. The company also claimed it was never given an opportunity to contest the designation, in violation of its Fifth Amendment due process rights.
The complaint asserts that the decision lacks legal grounding, is unsupported by evidence, and contradicts previous military endorsements of Claude.
The Justice Department responded that Anthropic’s refusal to lift its restrictions introduces uncertainty into the Pentagon’s operations and could disrupt military functions, according to court documents.
The government emphasized that the designation stems from Anthropic’s unwillingness to comply with contract terms, not from its safety concerns about AI.
Apart from this lawsuit, Anthropic is also pursuing another legal challenge in Washington, D.C., focused on a separate supply chain risk designation by the Department of Defense, which could result in its exclusion from civilian government contracts.