Federal judge pauses Pentagon’s designation of AI company Anthropic as a supply chain threat

SAN FRANCISCO — A federal judge has ruled in favor of Anthropic, an artificial intelligence company, stopping the Pentagon from labeling it as a supply chain risk.

U.S. District Judge Rita Lin issued the ruling, stating she would also block enforcement of an order from former President Donald Trump directing federal agencies to cease using Anthropic and its chatbot, Claude.

Judge Lin criticized the “broad punitive measures” taken by the Trump administration and Defense Secretary Pete Hegseth, suggesting they seemed arbitrary and could severely harm Anthropic. She noted that the rare military authority Hegseth invoked is usually employed against foreign adversaries.

Lin observed, “Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government.”

This ruling came after a 90-minute hearing where Lin questioned the rationale behind the Trump administration’s notable punitive move against Anthropic following troubled negotiations on a defense contract. The company’s insistence on preventing its technology from being used in fully autonomous weapons or for surveilling Americans seemed to be a sticking point.

Anthropic sought an emergency order, claiming that the negative label was a result of an “unlawful campaign of retaliation,” leading the company to take legal action against the Trump administration earlier this month. The Pentagon contended it should retain the right to use Claude as it sees fit.

Lin clarified that her decision focused on the government’s actions rather than the public policy issues themselves.

“If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude. Instead, these measures appear designed to punish Anthropic,” she wrote.

Anthropic also has a separate case still pending in the federal appeals court in Washington, D.C., related to another Pentagon rule aimed at designating it a supply chain risk.

The judge’s order will not take effect for a week, giving the Pentagon time to decide whether to continue using Anthropic’s products or transition to other AI providers.

In a statement, Anthropic expressed gratitude for the court’s swift action and noted that the ruling indicates the company is likely to prevail on the merits. The company emphasized that the case is crucial for protecting its business and customers but reiterated its commitment to working with the government to ensure safe AI for all Americans.

The Pentagon has not yet commented on this ruling.

Several third parties, including Microsoft, industry groups, tech employees, retired military leaders, and a group of Catholic theologians, have offered legal support for Anthropic’s position.
