Emile Michael Named Chief Technology Officer at Defense Department
Emile Michael’s recent appointment as the Defense Department’s chief technology officer has shed light on the decline of Anthropic, a company once closely aligned with the Department. Michael said Anthropic was viewed as the Biden administration’s “chosen winner” and had sought to position itself as an intermediary between the command structure and military personnel.
On a recent episode of the Alex Marlowe Show, the Breitbart News editor-in-chief noted how the Pentagon had grown increasingly dependent on Anthropic before President Trump blacklisted the company, describing it as a “woke organization on the radical left.” Marlowe pressed Michael on the risks of relying on a single provider.
Michael elaborated on an executive order from the Biden administration intended to create a limited number of AI providers under tighter control. “Anthropic was among those selected due to its political philosophy,” he stated.
The U.S. Under Secretary of Defense for Research and Engineering shared insights on his experiences, recalling a “sacred moment” upon securing a pivotal contract. “This deal effectively stopped the War Department from engaging in critical scientific work,” he explained, adding that it prompted him to seek solutions to build a more resilient structure. He conveyed a message to various companies: reliance on just one provider was a vulnerability.
“When I secured that contract, it became clear that we needed to explore alternatives,” he said. “The goals we pursued had to be attainable, and I aimed to foster a competitive environment that served our needs.”
Michael reflected on his lengthy negotiations with Anthropic, contrasting them with the quicker responses from companies such as Google and OpenAI. “They were determined to control interactions between command and field personnel,” he remarked, adding that their ultimate aim seemed to be dominance over military decision-making.
An incredulous Marlowe asked, “Are they essentially dictating military decisions?” He called the scenario disturbing, likening it to an “Orwellian” narrative in which corporations exert undue influence over government actions.
Michael reinforced this view, highlighting how exhaustive terms of service could grant AI companies the authority to disable their software during critical operations if government actions conflicted with specified restrictions.
“For instance, they could decide our use of the tool for sensitive operations in Venezuela was unacceptable,” he said. “Such abilities to intervene automatically pose substantial risks, especially when the software behaves in unpredictable ways.”
Asked about the supply chain implications of the Anthropic relationship, Michael stressed the importance of diversifying providers. Under the Trump administration, the Pentagon’s focus on dependency risks has shifted toward eliminating potential insider threats among crucial suppliers.
“Model addiction,” he warned, becomes a hazard if a model is manipulated, allowing potentially damaging miscalculations to reach military personnel.
“Ultimately, I can’t endorse that risk,” he said. “We must ensure our operations remain secure and robust.”
Michael acknowledged that while it might be acceptable for Boeing to use Anthropic for commercial aircraft, it was not an option for military contracts, where any disruption in the supply chain could have serious consequences.
