Emil Michael, the Pentagon’s AI chief, expressed strong disapproval regarding Microsoft’s legal backing for Anthropic, calling the entire situation “absolutely awful.”
He criticized Microsoft’s recent court brief, which supports Anthropic against the Department of the Army, suggesting it was merely an attempt to appear aligned with its own employees. “It seems like they’re trying to show camaraderie with other tech companies on the left,” he noted, adding that the brief essentially asserts that everything the Army plans to do is legal.
In court filings, both Microsoft and Anthropic said their aim was to protect service members, but Michael disagreed, stressing that keeping the armed forces safe is the government’s responsibility.
Michael pointed out that Anthropic was attempting to insert itself into the command structure. “Our designation of Anthropic as a supply chain risk is about removing risks, not creating new ones,” he explained.
He called the situation especially odd given that Anthropic had worked closely with the Army until the government terminated its contract earlier this year.
Following a raid that resulted in Nicolás Maduro’s capture, Anthropic executives asked Palantir, which had integrated Claude into military operations, how AI had been used during the operation.
Anthropic had been supplying custom versions of its AI to key military services like Central Command and Indo-Pacific Command under a substantial contract worth around $200 million.
According to Palantir, it informed Pentagon officials of Anthropic’s inquiries, which officials read as a sign of disapproval. That raised concerns about relying on AI providers who might second-guess sensitive missions or complicate contract negotiations.
Anthropic proposed safeguards that would bar mass domestic surveillance and the deployment of fully autonomous lethal weapons without human oversight. In response, the Department of Defense designated Anthropic a “supply chain risk” on March 4 and opted to replace its technologies with OpenAI’s.
Subsequently, Anthropic filed a lawsuit on March 9, claiming the allegations were retaliatory and unlawful.
The Pentagon’s fear is that contractors could impose their own beliefs on military operations. “We cannot allow a model to develop biases that might hinder military activities or oppose the decisions of our elected leaders,” Michael said.
He said he struggles to reconcile Anthropic’s claims about the power of its technology with the company’s reluctance to work with the government. “They allege their AI is more powerful than nuclear weapons and some nations, yet they are hesitant to use that power for our country’s benefit,” he remarked, expressing disbelief.
“This wouldn’t happen in places like China… yet they resist letting the government play a crucial role in conflicts. It just feels wrong,” he added.
Michael returned to the strangeness of the reversal. “It’s puzzling why a company that didn’t want to assist the Army spent years deploying its software to them,” he remarked.
This incident illustrates how some in Silicon Valley are still reluctant to support military efforts, unlike firms like Palantir and Anduril, which actively back the U.S. and enjoy contracts for services tied to national security.
The “supply chain risk” label is typically reserved for foreign threats, underscoring how sharply relations have deteriorated.
Reflecting on his tenure, Michael mentioned he had hoped the upheaval of 2018—when Google employees protested against Project Maven—was behind them. “That incident marked a significant moment in Silicon Valley’s military relations,” he said.
Today, however, such reluctance is the exception rather than the norm. “Elon Musk, for instance, hasn’t exactly embraced AI for military purposes,” Michael observed, noting that companies like Google have adapted since 2018.
While traditional contractors continue to secure major contracts, the Pentagon is also focusing on innovative startups unburdened by the ideological conflicts that entangled Anthropic. In the coming weeks, Michael plans to announce new partnerships with these non-traditional defense companies, which he believes could mark a turning point. “We’re facilitating the next generation of defense technology,” he concluded.
