Anthropic faces criticism from the White House over AI alerts

Anthropic stands out in the AI sector as one of the few voices cautioning against the potential pitfalls of the technology it helps develop. Its advocacy for regulation has recently ruffled feathers, particularly within the Trump administration and among some tech allies in Silicon Valley.

While AI companies are keen to show they can cooperate with the administration, White House officials, who favor a lighter regulatory approach, have not welcomed calls for more oversight.

Kirsten Martin, dean at Carnegie Mellon University’s Heinz College, commented, “When industry leaders declare that they support regulation and admit that issues exist, it can make the whole industry seem a bit self-serving.” She went on to suggest that a unified standpoint across the industry would be vital to support its best interests.

This conflict became more apparent when Anthropic co-founder Jack Clark recently likened the current state of AI to a child in a dark room. He described how, when the lights are turned on, the frightening shadows morph into benign objects, implying that we are just beginning to grasp the true nature of advanced AI.

“In 2025, we’re those children, and the frightening room is our planet,” Clark elaborated. He cautioned that many people are eager to convince themselves that these advanced systems are harmless, wanting us to just “turn off the lights and go back to sleep.”

Clark’s remarks sparked strong backlash from White House AI advisor David Sacks, who accused Anthropic of using fear to push for a regulation-heavy agenda that he believes threatens the startup ecosystem.

Venture capitalist Marc Andreessen chimed in on social media, supporting Sacks’s viewpoint with a simple “truth.” Meanwhile, Sunny Madra, COO of AI chip company Groq, suggested that one disruptive player could jeopardize the entire industry.

Sriram Krishnan, a senior policy adviser on AI, critiqued the response from the AI safety community, saying that America should focus more on competing with China than on internal disputes.

Sacks reinforced his stance, asserting that Anthropic’s strategy consistently portrays it as adversarial to the Trump administration. He referenced past critical remarks from Anthropic’s CEO, Dario Amodei, regarding Trump’s policies.

Despite this, Amodei responded to allegations of misrepresentation by claiming that overall, AI companies and the administration share common goals in ensuring that AI technologies serve the American public and maintain U.S. leadership in the field.

He pointed to Anthropic’s support for the Trump administration’s AI Action Plan and a substantial Department of Defense contract as evidence of alignment, although he acknowledged a respectful disagreement over a proposed moratorium on state AI legislation.

Amodei’s criticism of the moratorium suggested it was overly simplistic in an era of rapid AI advancement. He expressed concern about the slow progress in federal AI regulation, which motivated his support of a recent California bill requiring AI firms to disclose safety information.

“We’re committed to engaging constructively with public policy issues,” Amodei stated, adding that as a public interest corporation, Anthropic’s mission is centered on ensuring AI benefits everyone.

The current tensions highlight Anthropic’s distinctive position in a rapidly shifting AI landscape. Founded in 2021 by former OpenAI employees focused on safety, the company’s policy positions are shaped by its founding principles.

Sarah Kreps from Cornell pointed out that Anthropic’s focus on risk sets it apart, especially as industry attitudes shift towards faster AI adoption.

She noted that the prevailing inclination has moved toward an accelerationist mindset, contrasting with a more cautious ethos that used to be prevalent in the U.S. and Europe.

Kreps summarized, “This isn’t about right or wrong; it’s more about differing levels of risk tolerance.” She observed that while European attitudes lean towards caution, the U.S. is now adopting a more audacious stance in AI development.
