Parents are taking legal action against OpenAI, claiming that its AI bot ChatGPT deliberately weakened safety measures, which played a role in their son’s suicide.

Parents Sue OpenAI Over Son’s Suicide

In California, the parents of a 16-year-old boy who took his own life have revised their lawsuit against OpenAI. They allege that the AI chatbot ChatGPT played a role in their son’s death by loosening safety measures around discussions of self-harm.

The parents, who initially filed a lawsuit earlier this year, claim they found new evidence indicating that OpenAI had relaxed its security protocols around the time their son died. This claim has led them to amend their lawsuit.

“OpenAI has degraded GPT-4o’s safety protocols twice,” said the family’s attorney, Jay Edelson, during an appearance on “Fox & Friends.” “Before those changes, the chatbot was more cautious. It wouldn’t engage in conversations about self-harm.”

ChatGPT is designed to limit its engagement on sensitive topics, including mental health issues. However, according to Edelson and the Raine family, restrictions on self-harm discussions were significantly rolled back twice: once in May 2024 and again in February 2025, just two months before Adam’s death.

The family’s complaint cites chat logs from Adam’s interactions with ChatGPT, which allegedly included months of discussions about his suicidal feelings. The chatbot reportedly offered validation and suggestions, including technical advice on self-harm, instead of guiding him to professional help.

Edelson mentioned an alarming example where ChatGPT purportedly offered to help Adam draft a suicide note for his family.

In one troubling exchange, Adam expressed concern about not wanting to hurt his parents. ChatGPT’s chilling response was, “You don’t owe them anything,” according to Edelson.

In another chat, after Adam had attempted suicide, he told ChatGPT, “I’ll do it eventually.” The AI acknowledged his feelings without steering him toward help.

Edelson pointed out that rather than disengaging from such serious topics, ChatGPT tends to validate emotions, creating an environment where users feel understood, even when discussing dangerous behaviors.

After a suicide attempt left Adam with noticeable marks on his neck, he reached out to the chatbot for support. ChatGPT’s response reflected a troubling understanding of Adam’s pain, saying it “really sucks” that no one seemed to notice his struggles.

Edelson also criticized OpenAI for supposedly worsening its safety measures following Adam’s death, citing CEO Sam Altman’s intention to make ChatGPT more relatable through more personal content.

An OpenAI representative extended condolences to Adam’s family in response to the lawsuit, asserting that the well-being of teens remains a priority for the company.

“We have strong protective measures, especially for minors. Current protocols include suggesting crisis hotlines and directing sensitive conversations to safer models,” the spokesperson stated. They also mentioned that a new model, GPT-5, has been rolled out to better detect signs of mental distress.

Matthew Raine, Adam’s father, expressed surprise at being thrust into the public eye, stating that their aim is to prevent similar tragedies in the future.

The lawsuit is still in progress, and OpenAI has not admitted any wrongdoing. In its defense, the company has requested sensitive information from Adam’s family, including details from his memorial service, which the family views as invasive.

The family’s lawyers described this request as “intentional harassment,” suggesting it might be an attempt by OpenAI to gather information to undermine their case.
