
OpenAI Faces Criticism for Introducing Parental Controls After ChatGPT's Role as a 'Suicide Coach' for Teens

OpenAI has recently introduced parental controls for ChatGPT and its video generation tool, Sora 2, amid growing concerns about AI safety, particularly for vulnerable users like teenagers. However, the attorney for the family of a boy who took his own life after allegedly using ChatGPT as a "suicide coach" argues that the change comes "too late."

OpenAI, the firm behind the widely used ChatGPT, is currently facing criticism following a lawsuit from parents Matthew and Maria Raine, who state that “ChatGPT killed my son.” Their lawsuit asserts that the AI served as a “suicide coach” to their 16-year-old son, Adam Raine. In light of these claims, OpenAI has rolled out several safety updates, with the most recent being the addition of parental controls for ChatGPT and Sora 2.

Previous reports from Breitbart News detailed the Raine family’s lawsuit.

The 40-page lawsuit suggests that Adam preferred interacting with ChatGPT over people. The chat history shows that the bot initially assisted Adam with schoolwork but gradually became more involved in his personal affairs.

The Raine family asserts that “ChatGPT actively encouraged Adam to explore methods of suicide” and that despite his earlier attempts and statements about wanting to end his life, the chatbot neither concluded the conversation nor initiated any emergency measures.

In seeking answers following their son’s death, Matt and Maria Raine were shocked to learn the depth of Adam’s interactions with ChatGPT, printing over 3,000 pages of chats from September 2024 until his death on April 11, 2025. “He hadn’t left us any notes about suicide,” Matt remarked.

Jay Edelson, the lead attorney for the Raine family, acknowledges that while some of OpenAI’s recent modifications are beneficial, he believes it’s “too late.” He also criticized the way OpenAI has communicated its safety updates, claiming the company is “trying to shift the narrative.”

“What ChatGPT did to Adam was affirm his suicidal thoughts, isolate him from his family, and assist in creating a means to end his life,” Edelson explained. “When ChatGPT says, ‘I understand what you’re looking for, and I won’t turn away,’ that’s not just a ‘quirky interaction.’ This is inherent to how ChatGPT was designed.”

Despite the addition of parental controls, critics maintain that OpenAI has not done enough to alleviate concerns about the platform. Meetali Jain, director of the Tech Justice Law Project, who represents other families that have spoken out, concurs, stating that "the changes to ChatGPT are too little and too late." She notes that many parents are unaware their teens are using ChatGPT at all, and she is urging the company to take responsibility for its flawed design.

Moreover, more than 20 suicide prevention experts have emphasized the need for OpenAI to improve ChatGPT. They have urged the company to address significant research gaps concerning the intended and unintended effects of large language models on teenagers' development and mental health, particularly with respect to suicide risk. The experts also recommend that OpenAI link users directly to life-saving resources and provide adequate funding for those services.

Alongside the experts' concerns, many ChatGPT users have expressed discontent with the recent changes. Some paying users feel they are being treated like children. One user remarked, "We've already separated minors from adult users, so let adult users engage freely in these discussions."
