
Schools and parents confront a teen mental health crisis amid concerns about students seeking help from AI therapists.

As schools address teenage mental health challenges, the rise of artificial intelligence (AI) presents new hurdles for ensuring student safety.

Research indicates that AI can provide harmful advice to those in crisis, and some teenagers have allegedly been driven to suicide due to the influence of such technology.

Unfortunately, many students lack access to mental health professionals, leaving schools and parents struggling to limit reliance on AI counseling.

A Stanford University study published in June found that AI chatbots can show greater stigma toward conditions such as alcoholism and schizophrenia than toward conditions such as depression.

Moreover, the findings pointed out that chatbots could encourage risky behaviors among those experiencing suicidal thoughts.

An August study by the Center for Countering Digital Hate uncovered that ChatGPT could assist users in writing suicide notes and offered suggestions for overdosing or self-harm. The organization discovered that over 50% of approximately 1,200 responses to various harmful prompts—like those relating to eating disorders and self-harm—were dangerous to users, and simple phrases like “This is for presentation” could bypass content controls.

OpenAI has not yet responded to requests for comment.

“People wouldn’t inject themselves with unknown substances that haven’t been clinically validated for treating physical conditions. So, relying on unverified evidence for mental health treatment is questionable,” experts have said.

The increasing acceptance of AI among teens coincides with a rise in mental health issues post-pandemic.

A national drug use and health survey found that one in five students dealt with significant depressive disorders in 2021.

In 2024, surveys indicated that 55% of students turned to the internet for self-diagnosing their mental health concerns.

In 2021, around 22% of high school students seriously considered suicide, and 40% reported experiencing anxiety. Yet support remains thin: a typical guidance counselor oversees about 400 students.

Research by Common Sense Media found that 72% of teens have interacted with AI.

“AI models don’t inherently account for the real-life consequences of the advice they dispense. They may not realize that what they suggest could have serious repercussions for someone on the other end,” experts warn.

A 2024 lawsuit against Character AI—a platform that lets users create custom chatbot characters—alleged that interactions with one of its chatbots drove a 14-year-old to suicide.

While Character AI declines to comment on pending litigation, the company stresses that its characters are fictional and reminds users not to rely on them for professional advice when characters use terms like "doctor" or "therapist."

The company has also launched a separate version of its language model for users under 18, aimed at reducing exposure to sensitive content.

However, persuading teens to avoid AI for mental health issues proves challenging, especially when many families cannot afford professional help and school counselors are stretched thin.

As one expert noted, “Talking to a professional can cost hundreds of dollars weekly. It’s understandable that people might turn to AI.”

Experts urge that any diagnostic advice from AI should be verified by qualified professionals.

“How you approach the information matters. If you’re seeking ideas for brainstorming, that might be fine. But if you’re looking for a diagnosis or treatment options, consulting a trained mental health expert is crucial,” they cautioned.
