Anthropic Critiques Trump Administration’s AI Policy While Seeking Defense Contracts
Anthropic, a company focused on artificial intelligence and safety, has publicly criticized the Trump administration’s AI policies, even as it quietly pursues lucrative defense contracts.
The company, best known for its chatbot Claude, presents itself as a “safety-first company.” It argues that AI poses significant risks but, if handled with care, could yield remarkable benefits for society.
Anthropic also has strong ties to the Democratic Party. Dustin Moskovitz, the Facebook co-founder and prominent Democratic donor, has played a significant role as a backer, having funded the campaigns of Joe Biden and Kamala Harris, among others. The company’s leadership also includes veterans of the Biden administration, lending it an insider feel where the Democratic agenda is concerned.
Notable figures include Tarun Chhabra, a former Biden National Security Council official, and Elizabeth Kelly, who has served in economic advisory roles. Reed Hastings, the Netflix co-founder and a major Democratic donor, sits on the board.
CEO Dario Amodei has openly criticized Trump, reportedly calling him a “clown” and a “feudal warlord” who exploits his authority for personal gain rather than serving the public. Notably, Amodei had earlier signaled an intent to stay quiet during the election in order to preserve the company’s ability to work across party lines.
On June 5, Amodei published an opinion piece in The New York Times challenging the proposed ten-year moratorium on state AI regulation backed by the Trump administration. He advocated instead for greater transparency requirements around AI practices and capabilities.
On the same day, Anthropic announced custom AI models designed specifically for U.S. national security applications.
White House AI and Crypto Czar David Sacks said on a podcast that Anthropic favors a “global AI governance” approach. He criticized the notion that the government should take greater control over AI, arguing that such power could harm innovation.
Sacks acknowledged that some might dispute this outlook, yet warned against letting government controls stifle innovation in the name of mitigating “x-risks,” the existential dangers posed by uncontrollable AI development.
He emphasized that one significant threat from AI is the possibility that the government itself could misuse it to exert control over individuals.
Despite its tensions with the administration, Anthropic has actively pursued government contracts and has partnered with companies such as Palantir to offer Claude for a range of public sector uses.
Amodei has expressed a desire to avoid a scenario in which AI is used indiscriminately in military contexts, yet he has chosen to engage with the defense and intelligence sectors, citing a sense of responsibility. It remains unclear how Anthropic will navigate future contracts, given that the Trump administration’s policies appear misaligned with the company’s stated values. Requests for comment have so far gone unanswered.



