Anthropic Disapproves of Illinois AI Liability Legislation Supported by OpenAI

Debate Over Illinois AI Liability Bill Between OpenAI and Anthropic

Anthropic has publicly expressed its opposition to an Illinois bill that has received support from OpenAI. The legislation is designed to shield AI companies from legal liability when their systems are implicated in catastrophic harm, such as mass-casualty events or damages exceeding $1 billion.

The proposed legislation, referred to as SB 3444, has ignited a notable dispute between two leading AI firms regarding the regulation of this technology. Analysts suggest that while the bill may face challenges in passing, it underscores the escalating political friction between Anthropic and OpenAI. As both companies ramp up lobbying efforts nationwide, this conflict is becoming increasingly evident.

Reports indicated that OpenAI backed the bill earlier this week.

Jamie Raddis, a spokesperson for OpenAI, remarked in a statement, “We support such an approach because it focuses on what matters most: reducing the risk of serious harm from cutting-edge AI systems while allowing this technology to get into the hands of people and businesses large and small in Illinois. It also helps us avoid a patchwork of state-by-state regulations and move toward clearer and more consistent national standards.”

Caitlin Niedermayer of OpenAI’s Global Affairs team reiterated this stance in testimony, advocating for federal AI regulation. Her views align with the previous administration’s resistance to inconsistent state-level AI safety laws. She emphasized avoiding “a patchwork of contradictory state requirements that can create friction without meaningfully improving safety,” while suggesting that state laws could be beneficial if they promote alignment with federal regulations.

Anthropic is now lobbying Illinois Senator Bill Cunningham, the sponsor of SB 3444, urging significant revisions or outright rejection of the bill in its current form. An Anthropic spokesperson said the company opposes SB 3444 but remains open to discussing how the measure could inform future AI legislation.

“We oppose this bill,” stated Cesar Fernandez, Anthropic’s director of U.S. state and local government relations. “We know that Senator Cunningham cares deeply about AI safety, and we look forward to working with him on changes that combine transparency with real accountability to mitigate the most serious consequences from AI systems.”

This disagreement between OpenAI and Anthropic raises the critical issue of legal accountability in the event an AI system leads to a disaster—an aspect that U.S. lawmakers are just beginning to examine. SB 3444 proposes that AI labs could evade liability if a malicious entity exploits their models to inflict harm, provided they establish a safety framework and publish it online.

Some specialists caution that the bill could weaken existing legal protections aimed at curbing corporate wrongdoing. Thomas Woodside, co-founder of the Secure AI Project, pointed out, “Liability already exists under common law and provides a strong incentive for AI companies to take reasonable steps to prevent foreseeable risks.” He criticized SB 3444 for potentially erasing liability for significant harm, arguing it would not be wise to diminish this crucial legal accountability.

Last week, Anthropic supported another Illinois measure, SB 3261, which, if passed, would establish one of the strictest AI safety laws in the U.S. The bill would require developers of advanced AI, including OpenAI and Anthropic, to create public safety and child-protection plans subject to evaluation by independent auditors.

In his book, “Code Red: Left, Right, China, and the Race to Control AI,” author Winton Hall argues that AI is more than a mere tool and represents a significant political force.

Hall critiques the dismissive views held by some regarding AI, stating, “Some people reduce it to just a tool… I respectfully disagree.” He argues that AI’s designers are capable of imposing control over various societal aspects. “We are building a system that can disrupt, destroy jobs, promote leftist ideology, unleash new national security threats, distort human relations, intensify indoctrination, maximize surveillance capitalism, and control media and information on an unprecedented scale,” Hall explained.

