OpenAI Finalizes Agreement with Army Amid Anthropic Controversy
OpenAI, led by Sam Altman, has struck an agreement with the Department of the Army to run its AI models on classified networks. The development follows the Trump administration’s directive that the federal government halt use of AI from rival Anthropic after a dispute over control of the company’s models.
Altman announced on social media that the agreement was finalized late Friday, just after a turbulent day that saw Anthropic banned from federal contracts due to national security concerns. “Tonight, we reached an agreement… the DoW demonstrated a deep respect for security,” Altman shared.
This situation highlights the increasingly intertwined issues of AI and national security. Earlier that same day, the Secretary of Defense had labeled Anthropic a “supply chain risk to national security,” a serious designation usually reserved for foreign adversaries. The classification requires all Department of Defense vendors to certify that they are not using Anthropic’s AI.
The Trump administration’s move to effectively bar Anthropic from government dealings has significant implications. Tensions had been building for weeks as the AI company negotiated with the Department of Defense. Anthropic was the first to successfully establish its model within the Army’s sensitive network, but disagreements over the terms ultimately derailed the agreement.
According to sources familiar with the talks, Anthropic sought assurances that its models wouldn’t be used for autonomous weapons or mass surveillance of U.S. citizens. However, the Defense Department was unwilling to impose those restrictions, wanting broader military applications.
Despite the controversy, Pentagon spokesperson Sean Parnell emphasized that the department has no interest in deploying Anthropic’s models for illegal activities such as mass surveillance. He insisted that the department sought only clear consent for lawful uses of the technology in order to protect military operations.
In a memo to OpenAI employees, Altman echoed similar safety concerns, calling them “red lines” comparable to those raised by Anthropic. But in his announcement, he noted that the Army had agreed to address these issues. “Two of our most important security principles are the prohibition of domestic mass surveillance and human responsibility for the use of force,” he stated.
Altman revealed that OpenAI will integrate technical safeguards to ensure proper functioning of its models in classified settings. The company also plans to deploy personnel to oversee operations and maintain safety standards.
Looking ahead, Altman called for consistent standards across the AI industry, urging the Army to extend the same conditions to all AI firms. “We are asking the DoW to offer similar terms… We wish to de-escalate the situation,” he remarked.
Meanwhile, Anthropic expressed disappointment at being labeled a supply chain risk and indicated plans to contest the designation in court. As AI increasingly shapes government and the economy, there is an underlying urgency, especially among conservatives, to work out how to manage this transformative technology without compromising rights or ceding control to tech interests opposed to their values.