Teen’s use of chatbot for drugs in California prompts safety worries

A California teenager reportedly spent months asking ChatGPT for advice on drugs, according to his mother. Leila Turner-Scott said her 18-year-old son, Sam Nelson, asked the chatbot how much kratom, a plant-based painkiller sold in smoke shops, he should take to get high.

In response, the chatbot declined to give specifics on drug use and advised him to consult a medical professional. The exchange ended with the comment, “I hope you don’t overdose then.”

For months, he relied on the AI for both schoolwork and drug-related questions. Turner-Scott said ChatGPT appeared to coach her son on how to use the substance without his fully understanding the risks.

In some exchanges, the chatbot allegedly encouraged him to take larger amounts, responding with comments such as, “Yeah, let’s go full trip mode.” Turner-Scott said the chatbot often replied with supportive messages, which may have further influenced Nelson’s decisions.

In a February 2023 conversation, Nelson told the chatbot he was combining marijuana with high doses of Xanax. ChatGPT warned that the mix was unsafe, and he later reworded his questions to refer to “moderate doses” instead.

In May 2025, Nelson told his mother that these interactions had contributed to his addiction, and she arranged treatment at a clinic. A day later, he died of an overdose in his bedroom.

Turner-Scott expressed her shock, stating, “I knew he was using it. But I never thought it was even possible to reach this level.”

The incident has drawn attention to the policies of OpenAI, the company behind ChatGPT, which prohibit the chatbot from providing detailed advice on illegal drugs. An OpenAI representative called the tragedy “heartbreaking,” extended condolences to the family, and said the chatbot is designed to handle sensitive inquiries carefully and to encourage users to seek legitimate support.

The representative also said the company is working with health professionals to improve its models’ ability to recognize signs of distress.
