Trump Orders Federal Agencies to Halt Use of Anthropic AI Technology
On Friday, President Donald Trump issued a major directive instructing all federal agencies to immediately cease using technology developed by Anthropic, a prominent artificial intelligence (AI) company based in San Francisco.
This decision escalates an ongoing conflict between the administration and Anthropic over military use of AI and what officials have labeled the firm’s “woke” corporate agenda.
The announcement was made on Truth Social just before a deadline imposed by the Pentagon, effectively prohibiting one of the nation’s leading AI developers from engaging with the federal government.
“America will never let a company of radical leftists dictate how our military operates! Such decisions belong to your Commander-in-Chief and the capable leaders I’ve appointed. Anthropic’s actions are a grave mistake that jeopardizes American lives and national security. Thus, I am directing all federal agencies to stop using their technology immediately. We won’t be working with them again,” Trump stated in the post.
Trump further indicated that agencies such as the Department of Defense, which utilize Anthropic’s products, will have a six-month grace period to phase them out. He urged the company to cooperate during this time, warning of potential consequences if it does not.
Following this announcement, Defense Secretary Pete Hegseth designated Anthropic, known for its chatbot Claude, as a “supply chain risk.”
This directive comes after weeks of tense discussions between the Department of Defense and Anthropic’s CEO, Dario Amodei. Central to the disagreement is the Pentagon’s demand that Anthropic remove certain “red lines” built into the Claude AI model.
Hegseth argued that the military must be able to use AI for all lawful purposes, but Anthropic’s leadership maintained that complying with such demands would compromise the company’s ethical standards. On Thursday, Amodei said the company “cannot in good conscience comply” with the government’s requests.
The fallout has reverberated across Silicon Valley, with OpenAI’s CEO, Sam Altman, suggesting that his organization holds similar ethical lines when it comes to military applications. This raises questions about whether other AI firms will face similar pressures.
On the political front, Senator Mark Warner (D-Va.) criticized the move, labeling it as “bullying” and voicing concerns that national security is being influenced by political motivations rather than thorough analysis. Conversely, many of Trump’s supporters welcomed the directive, viewing it as a necessary step to prevent the military from being subject to the whims of Silicon Valley executives.
The designation as a “supply chain risk” could cut off Anthropic’s access to a significant segment of the government-contracting enterprise market, possibly hindering its plans for an IPO later this year.
In an effort to facilitate a smooth transition, Trump is invoking executive powers to ensure the phase-out proceeds unhindered. He has signaled that there could be “civil and criminal consequences” for any attempts to obstruct the process.
The administration also expects full transparency and cooperation from Anthropic during the wind-down, with Trump pressing the company’s leaders to provide support so that services continue without disruption. The mention of “civil and criminal consequences” underscores that the government is prepared to pursue fines or other legal action to guarantee a secure transition, pairing incentives with warnings to ensure the company cooperates rather than resists.
