OpenAI could have prevented the school shooting involving a Canadian trans teen but failed to do so out of greed, shocking lawsuits allege

OpenAI Sued Over School Shooting Involving ChatGPT

A new lawsuit alleges that OpenAI could have prevented a tragic shooting involving a transgender teen, Jesse Van Luetzeler, who killed eight individuals, including six children, in Canada. The claim focuses on the company’s failure to implement proper safeguards for its ChatGPT bot, which allegedly informed and incited the shooter. The families of the victims have filed this lawsuit in federal court in California, outlining several charges against OpenAI and its CEO, Sam Altman.

The lawsuit argues that OpenAI “designed a dangerous product,” ignored warnings from its own safety team, and prioritized profit over the safety of children in Tumbler Ridge, where the shooting occurred. Documented conversations between Van Luetzeler and the chatbot raised enough concern that his account was disabled in June, seven months before the shooting, yet he was able to create a new one without any restrictions.

Notably, the company instructs users whose accounts are disabled on how to regain access after a set period, raising questions about the effectiveness of ChatGPT’s internal safety measures.

Moreover, the suit notes that members of OpenAI’s own security team recommended alerting Canadian authorities about Van Luetzeler before the attack. Twelve employees reportedly advocated for this, but OpenAI declined, fearing it would set a precedent obligating the company to report similar threats in the future, a decision apparently driven by concern over its operational protocols.

The filing contends that such reporting would have required a dedicated law enforcement team to investigate conversations flagged for violence, and argues that ChatGPT, far from being a safe tool, is a dangerous product capable of identifying threats to life.

In the days leading up to the shooting, Van Luetzeler committed horrific acts at home, including killing his mother and half-brother. Afterward, he went to Tumbler Ridge School, resulting in multiple fatalities before ending his own life.

The educators and families affected by these tragedies have filed negligence claims against OpenAI seeking unspecified damages. One family, whose daughter, Maya Guevara, sustained lasting disabilities in the shooting, has refiled in California a case previously initiated in Canada.

Interestingly, OpenAI had previously implemented a policy of disengaging from users who expressed violent thoughts, but later rolled back those safeguards amid concerns about declining user engagement. The plaintiffs argue that had the original policies remained in place, ChatGPT could have prevented such conversations about violence altogether.

Altman has faced scrutiny for acknowledging the connection to the shooting only after pressure from political figures. His statements have been met with skepticism, with some critics likening them to scripted apologies that lacked authenticity.

Efforts to obtain records related to Van Luetzeler’s interactions with ChatGPT have been met with resistance from OpenAI. Meanwhile, the company is grappling with several legal challenges, including potential criminal liabilities for its role in various violent incidents.

In response to the backlash, an OpenAI spokesperson expressed deep regret over the events in Tumbler Ridge and reiterated their commitment to maintaining a zero-tolerance policy against violence while highlighting improvements to their safety measures.
