OpenAI recognized violent ChatGPT messages from a mass shooter but decided against contacting the police.

Employees at OpenAI, the organization behind ChatGPT, expressed concern over interactions between Jesse Van Rootseller, a transgender mass shooter from Canada, and the company’s chatbot. However, they did not notify law enforcement, as reported by the Wall Street Journal.

Roughly a dozen staff members became aware of these troubling exchanges in the months leading up to Van Rootseller’s tragic actions in Tumbler Ridge, British Columbia, where multiple families and school-aged children were killed. An automated review system initially flagged the interactions, which centered on violent scenarios involving firearms over several days, as described by sources acquainted with the situation.

OpenAI typically informs law enforcement only when there is an immediate threat of real-world violence. Some employees were in favor of contacting the authorities, but ultimately, the decision was made not to reach out.

On February 10, the 18-year-old Van Rootseller shot and killed his mother and stepbrother at home before proceeding to Tumbler Ridge Secondary School, where he fatally shot five students and a teacher before taking his own life. Reports indicate that about 25 individuals were injured in the attack.

Van Rootseller, who was biologically male, had identified as female since the age of six. Police were already aware of his mental health struggles, having visited the family home on several occasions over the years.

According to the New York Post, the teenager had a history of troubling online behavior, including contributions to a site that featured videos of murders and an apparent fixation on death. His social media accounts contained concerning images and content related to firearms and illegal substances. Furthermore, in 2015, Van Rootseller’s mother raised alarms about his behavior in a Facebook parent group.

A spokesperson for OpenAI said that Van Rootseller’s account had been banned in June 2025 for violating the company’s acceptable use policies. Although the behavior was troubling, the company’s assessment concluded that it did not rise to the level of an imminent threat warranting a report to law enforcement. The spokesperson added that the company must tread carefully on privacy grounds, warning that frequent police referrals could lead to unintended consequences.

OpenAI’s chatbot system aims to prevent real-world harm by recognizing potentially dangerous situations, according to Fox News Digital.

After the shooting, OpenAI did reach out to the Royal Canadian Mounted Police (RCMP) and is cooperating with the investigation, providing information relating to Van Rootseller’s interactions with the chatbot. In a statement, the company expressed condolences for those impacted by the tragedy in Tumbler Ridge, reaffirming its commitment to assist the RCMP with the ongoing inquiry.
