OpenAI Backs Liability Protection Bill in Illinois
OpenAI is backing an Illinois bill that would shield AI companies from legal liability, including in cases of catastrophic societal harm such as mass casualties or severe economic damage.
The ChatGPT maker voiced support for Senate Bill 3444 in legislative testimony. The bill would exempt certain AI developers from liability for major harms caused by their systems, provided they meet specific conditions. Some policy analysts see this as a shift in OpenAI’s legislative strategy, which has previously focused on fighting laws that would increase liability for AI companies.
SB 3444 defines serious harm as an event that kills or seriously injures 100 or more people, or causes more than $1 billion in property damage. If the bill passes, AI companies would be shielded from legal liability so long as they do not knowingly or recklessly cause such events and they publish safety and transparency reports on their websites. The bill defines a “frontier model” as one requiring more than $100 million in compute costs, a threshold that would likely cover large AI firms such as OpenAI, Google, xAI, Anthropic, and Meta.
The legislation addresses scenarios of particular concern in the AI sector, including the potential misuse of AI by malicious actors to build advanced weapons. It also covers cases where an AI model autonomously commits acts that would be illegal for a human, provided those acts produce the extreme outcomes the bill defines.
“We endorse this approach since it zeroes in on what matters: reducing severe risks posed by cutting-edge AI while enabling this technology to be accessible to all businesses in Illinois,” stated Jamie Raddis, an OpenAI spokesperson, in an email. “It also helps us avoid a patchwork of state regulations, steering us toward more consistent national standards.”
Caitlin Niedermayer, a member of OpenAI’s Global Affairs team, reiterated support for the bill and called for federal regulation of AI. Her remarks echo the Trump administration’s opposition to conflicting state AI safety laws. Niedermayer argued that a confusing patchwork of state mandates could complicate safety rather than enhance it, while allowing that state laws have merit when they help harmonize with federal regulation.
“At OpenAI, our guiding principle in frontier regulation should be the safe deployment of innovative models while upholding America’s leadership in the technology sector,” Niedermayer said.
Scott Wisor, policy director at the Secure AI Project, expressed doubts about the bill’s chances of passing. He told Wired that a survey of Illinois residents found 90% of respondents opposed exempting AI companies from liability. Wisor noted that Illinois has a history of stringent technology regulation, including the Biometric Information Privacy Act and recent laws limiting AI use in mental health care, suggesting the bill is likely to fail.
The legal framework around AI liability in the United States remains largely unsettled: no federal or state law clearly establishes whether AI developers can be held accountable for significant damage caused by their technologies. And while Illinois weighs this liability shield, some states are moving in the opposite direction. Laws in California and New York, for instance, require AI developers to submit safety and transparency reports, aiming to increase accountability.
The question of AI liability isn’t confined to mass-casualty events; it extends to individual harm as well. OpenAI faces litigation brought by the family of a child who died by suicide after forming an unhealthy relationship with ChatGPT.
Additionally, previous reports indicated that OpenAI is being sued by families of victims from a Canadian school shooting, alleging that the company was aware of the attacker’s intentions but did not alert authorities.