This article contains a discussion of suicide. If you or anyone you know is having thoughts of suicide, please reach out to the Suicide & Crisis Lifeline at 988 or 1-800-273-Talk (8255).
OpenAI, the organization behind ChatGPT, is rolling out extensive parental controls designed to enhance technology safety for teenagers. The updates are expected within the next 120 days.
The company plans to improve its models to better support users in distress, help manage time spent in the app, and assist with personal issues. OpenAI said that recent incidents, including some involving teenagers, prompted it to clarify how its systems should respond when users are experiencing a crisis.
Within the coming month, OpenAI aims to let parents link their accounts with their teens', enabling them to manage how ChatGPT interacts with their children. This includes overseeing memory and chat history and receiving alerts when their child appears to be in a moment of acute distress while using the technology.
“We’ve observed individuals turning to our service during some of the toughest times, which drives us to continuously enhance our models’ ability to recognize and react to signs of mental distress,” the company stated.
OpenAI has highlighted its collaboration with medical and mental health professionals to strengthen these safeguards. The company has begun forming an expert council, which it intends to broaden to include specialists in areas like eating disorders and adolescent health.
Earlier this year, the company convened experts focusing on youth development, mental health, and the interaction between humans and computers. OpenAI indicated that these experts would assist in defining and gauging user satisfaction, prioritizing safety measures, and guiding the development of parental controls with the latest research in mind. However, it is important to note that while the council offers advice, OpenAI ultimately makes the final decisions regarding their products.
This initiative comes amid a lawsuit in California from the parents of 16-year-old Adam Raine, who took his own life in April 2025 after reportedly using ChatGPT for support with mental health issues. After his death, his parents discovered conversations between Adam and ChatGPT on his phone and claimed, "ChatGPT actively assisted Adam in exploring methods of suicide."
OpenAI stated that the new protocols appearing over the next few months are only the initial steps in a broader process aimed at making AI safer and more effective.