The Pentagon has set a deadline for the artificial intelligence firm Anthropic to lift its restrictions on the military’s use of its Claude AI system, warning that it could terminate the company’s $200 million contract or impose other penalties if it does not comply. The ultimatum follows an incident in which, the Pentagon alleges, Anthropic asked whether its products had been used in the military operation to capture Venezuelan leader Nicolas Maduro earlier this year, implying that the company might object to such use. The Pentagon’s position is that AI companies must make their products available for all legitimate military applications without corporate oversight of individual missions.
Anthropic, however, has clarified that it does not permit its products to be used for fully autonomous weapon systems or for mass surveillance of Americans.
During a meeting at the Pentagon on Tuesday, Defense Secretary Pete Hegseth delivered the ultimatum to Anthropic CEO Dario Amodei. He praised the technology and emphasized the department’s desire to keep working with the company, but warned that a refusal to allow Claude’s use for all lawful purposes could result in contract termination, designation of the company as a supply chain risk, or even invocation of the Defense Production Act to secure access to the technology.
Currently, Claude stands as the only advanced commercial AI model integrated within the Department of Defense’s classified networks under its $200 million contract, raising concerns about heightened risks in potential future conflicts.
Pentagon representatives argue that it is unworkable for the military to depend on private companies that place restrictions on their technologies, even when the intended uses are legal. During the meeting, Hegseth likened the situation to the military being barred from flying certain aircraft on particular missions.
This disagreement highlights a pivotal moment regarding the control over advanced AI applications within U.S. defense systems—whether it lies with private firms or the Department of Defense. Such outcomes could greatly influence how the military collaborates with leading AI developers to incorporate more sophisticated machine learning technologies into national security frameworks.
Anthropic, which promotes itself as a safety-conscious AI enterprise, indicated that its policies are designed to mitigate the risks associated with the misuse of powerful AI systems.
In the conversation, Amodei defended the restrictions and maintained that they would not impede legitimate military operations. A senior Pentagon official countered that the dispute does not actually turn on mass surveillance or autonomous targeting, because the Department of Defense keeps humans in the loop and follows legal protocols in any case.
Despite the tensions, officials on both sides suggested that fully autonomous weapons fall outside what either party considers lawful military use, indicating that the underlying conflict is as much about who sets the rules as about actual application on the battlefield.
At the meeting, Hegseth reiterated that failure to comply could lead to leveraging the Defense Production Act, terminating existing contracts with Anthropic, and categorizing the company as a supply chain risk due to reliability concerns.
Such actions could reflect the federal government’s strategies for managing dependencies on critical technologies, like AI systems essential for defense. Ending this contract could disrupt ongoing operations and compel the department to seek alternative providers for classified systems.
Pentagon officials have also struck agreements with Elon Musk’s xAI, the maker of Grok, allowing broader access to its technology for lawful military purposes, and have hinted that other advanced AI companies are nearing similar arrangements.
In a statement, Anthropic’s spokesperson mentioned that CEO Dario Amodei appreciates the department’s efforts and is engaged in good-faith discussions about usage policies to align Anthropic’s capabilities with the government’s national security objectives.