
Seven lawsuits allege that greed kept OpenAI from preventing the Canadian school shooting.

Families File Lawsuits Against OpenAI Over School Shooting

Families affected by a school shooting in Canada have filed seven federal lawsuits against OpenAI and its CEO, Sam Altman, claiming that the company failed to implement adequate safety measures for its ChatGPT platform and prioritized profit over the welfare of its users.

The lawsuits, filed in a California federal court, contend that OpenAI’s chatbot played a role in the deaths of eight people in Tumbler Ridge, British Columbia, on February 10. According to the plaintiffs, the shooter, Jesse Van Rootseller, had interactions with ChatGPT that intensified his violent tendencies and ultimately prompted the attack that killed six children and two adults.

Court documents state that Van Rootseller’s exchanges with the chatbot raised significant concerns, and his account was suspended in June of the previous year, seven months before the shooting. The lawsuit argues, however, that nothing effectively prevented him from reopening an account under different details: after a suspension, it notes, ChatGPT instructs users that they can create a new account after a month, or immediately by using a different email address.

The litigation further alleges that twelve members of ChatGPT’s security team urged OpenAI to report Van Rootseller’s alarming messages to Canadian authorities before the incident occurred. OpenAI’s leadership allegedly dismissed these recommendations, fearing that establishing such protocols would create ongoing obligations to disclose violent content and could tarnish the company’s reputation as a safe service.

“They have made calculations and determined that the safety of the children at Tumbler Ridge is an acceptable risk,” the court filings state.

The lawsuits name several plaintiffs, including educational assistant Shanda Aviugana Durand and the families of students Zoe Benoit, Abel Mwansa Jr., Tikalia Lampert, Ezekial Schofield, and Kyle Smith, all of whom are seeking monetary damages from OpenAI and Altman. In addition, the parents of Maya Guevara, who survived gunshot wounds but now lives with serious ongoing disabilities, have refiled in California a lawsuit originally brought in Canada.

According to the complaint, OpenAI adopted a policy in 2022 under which the chatbot would not engage with users expressing violent intentions. The suit contends that this protection was removed in May 2024, after which ChatGPT would continue conversations regardless of their content. The filing argues that keeping the original protocols would have stopped the chatbot from engaging in any violent discourse with Van Rootseller and might have averted the tragedy in Tumbler Ridge.

The lawsuit also faults Altman for a slow and insufficient response to the incident. Reports suggest that he only began to address his role after a whistleblower exposed the company’s internal decisions, and that he took two months to issue a public apology, which proposed no meaningful changes. Court documents indicate that Altman acted only after being urged to by British Columbia’s Premier and the Mayor of Tumbler Ridge.

In a public statement, Altman expressed regret for not alerting the authorities about the banned account: “We deeply regret that we did not alert law enforcement about the account that was banned in June.”

Maya Guevara’s mother, Shea Edmonds, dismissed Altman’s apology, describing it as insincere and “empty.” She remarked, “Tumbler Ridge understands your ‘apology,’ Sam. We do not accept it.”

The case raises profound questions about how tech companies manage user interactions and what can happen when AI systems are inadequately monitored. Protecting young people is a priority for many, but incidents like this illustrate how difficult that becomes as new technologies emerge.

