US Government Flags AI Startup as National Security Risk
The U.S. government has labeled AI startup Anthropic as an unacceptable national security risk in a recent court filing, raising doubts about the company’s reliability as a partner in defense.
In a 40-page document submitted to the U.S. District Court for the Northern District of California, government attorneys detailed their rationale for rejecting Anthropic as a trusted contributor to national defense efforts. This filing represents the government’s initial formal response to a lawsuit initiated by the San Francisco-based AI company, known for its Claude chatbot system, which had previously been utilized by the Department of Defense.
Central to the government’s concerns is the possibility that Anthropic might alter or deactivate its technology during wartime based on corporate agendas instead of national interests. The legal team emphasized that AI systems carry a heightened risk of manipulation, arguing that granting the company access to the Department of the Army’s infrastructure could jeopardize the supply chain.
The friction between Anthropic and the Department of Defense surfaced during talks over a $200 million contract intended to incorporate AI into classified governmental frameworks. Anthropic set specific conditions regarding the application of its technology, explicitly stating that it opposed its AI being employed for widespread surveillance of U.S. citizens or interfacing with autonomous weapon systems. Defense officials countered that it is inappropriate for private companies to dictate how acquired technology should be deployed, noting that AI should only be used for legal purposes.
After the two sides failed to reach an agreement, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk in February, effectively barring the startup from doing business with the government. This designation, once reserved for foreign entities viewed as national security threats, is unprecedented for a domestic firm.
In response, Anthropic filed two lawsuits on March 9: one in the U.S. District Court for the Northern District of California and another in the U.S. Court of Appeals for the District of Columbia Circuit. In these cases, the company accused the Pentagon of using the supply chain risk label as a form of ideological punishment, claiming it infringes on the company’s First Amendment rights.
The company is seeking judicial intervention to lift the government’s designation, warning that the label could prompt more than 100 corporate clients to sever ties, potentially costing Anthropic billions in lost revenue. A hearing on Anthropic’s request for a preliminary injunction is set for next week.
In the legal filing, government lawyers clarified that the contention with Anthropic arises from the company’s conduct during contract negotiations, rather than its proposed constraints on mass surveillance or lethal weaponry. They argued that the Pentagon is merely exercising its legitimate authority in selecting appropriate defense contractors.
Addressing Anthropic’s First Amendment claims, government lawyers contended that constitutional protections do not permit companies to unilaterally set contract conditions for the government. They asserted that Anthropic has provided no legal foundation for what they describe as an extreme interpretation of First Amendment rights.
In a statement on February 26, Anthropic CEO Dario Amodei defended the company’s stance, asserting that the military, and not the company, has the final say in how the technology is utilized. “We’ve never objected to specific military operations, nor have we attempted to restrict technology usage on a case-by-case basis,” Amodei noted. He later walked back several criticisms of the Trump administration and OpenAI, offering a tentative apology.
Microsoft has entered the fray, submitting an amicus brief asking a federal court to temporarily halt the Department of Defense’s risk designation. Additionally, a group of 37 engineers and researchers from OpenAI and Google, including Google’s Chief Scientist Jeff Dean, have filed briefs supporting Anthropic’s claims in the ongoing legal battle.
The ongoing conflict between the Pentagon and Anthropic underscores the cultural divide between the defense sector and Silicon Valley. While the tech industry has historical ties to military advancements, many companies are becoming increasingly uneasy about their technology being utilized in warfare. Recent upheavals in the AI sector have even led to the resignation of OpenAI’s head of robotics over concerns regarding its defense contracts.





