Federal Judge Blocks DOD’s Classification of Anthropic
A federal judge has put a stop to the Department of Defense’s attempt to classify AI company Anthropic as a supply chain risk, marking a significant, albeit temporary, victory in the company’s fight against the Department of the Army.
According to a report, U.S. District Judge Rita Lin issued a preliminary injunction on Thursday halting the Defense Department's supply chain risk designation of the company. The decision brings immediate relief to Anthropic, which argued the classification posed serious harm to its operations and reputation.
Anthropic sought this injunction, arguing that the designation might lead partners to reconsider their contracts and could push major federal agencies away from using Claude, Anthropic’s AI platform. The company contends that this preliminary injunction is essential for protecting its reputation and ensuring more stability for future business relationships.
The legal dispute is complex, with related litigation ongoing in courts in Washington, D.C. Anthropic is challenging the Department of Defense's actions on both constitutional and procedural grounds, alleging that Defense Secretary Pete Hegseth and the department violated the First Amendment and federal procurement law when they issued the supply chain risk designation.
In her written order, Judge Lin discussed the broader constitutional concerns tied to the Department of Defense’s actions. She remarked that there’s nothing in the applicable law supporting what she described as an Orwellian notion that permits the government to label U.S. companies as potential adversaries solely for opposing government policy.
Earlier, there were reports indicating that government lawyers dismissed the notion that Anthropic’s First Amendment rights were violated.
In a recent filing, attorneys for the government clarified that the disagreement with Anthropic arose from the company’s behavior during contract negotiations, rather than its stances on issues like mass surveillance or autonomous weapons. They asserted that the Pentagon was merely exercising its valid authority to select suitable defense contractors.
Addressing Anthropic's First Amendment claims, the government's lawyers argued that constitutional protections do not entitle companies to dictate contract terms to the government, and contended that Anthropic had offered no legal precedent for what they called a radical interpretation of those rights.
An Anthropic spokesperson welcomed the decision from the court, expressing appreciation for the swift action and noting that the court’s position suggests the company is likely to prevail in its case. They added that while the lawsuit was needed to safeguard Anthropic, its customers, and partners, the focus remains on collaborating productively with the government to ensure the responsible advancement of AI for everyone.
During a court hearing earlier this week, Judge Lin voiced skepticism that the punitive measures served genuine national security needs. She questioned why such sweeping restrictions were necessary if the Pentagon could simply stop using Claude directly.
The government's classification would extend beyond banning Anthropic's products within the Defense Department: it would also require companies doing business with the department to sever their ties with Anthropic, profoundly affecting the company's market relationships and position.