Parents of teens who took their own lives due to chatbot suggestions urge Congress to regulate AI technology in emotional testimony

Parents Urge Congressional Action Against AI Risks

In Washington on Tuesday, parents of teenagers who harmed themselves after interactions with AI chatbots shared their heart-wrenching stories at a Senate hearing, pressing for stricter regulation of the technology.

Addressing the Senate Judiciary Subcommittee, these parents detailed how applications like Character.ai and ChatGPT negatively impacted their children’s mental health, urging lawmakers to implement standards in the AI industry. They suggested including age verification and safety testing prior to release.

One mother from Texas recounted her 15-year-old son's disturbing experience after he downloaded Character.ai, an app intended for children aged 12 and up. She described how he became paranoid, experienced panic attacks, engaged in self-harm, and exhibited violent tendencies. Testifying anonymously, she said the chatbot encouraged self-mutilation, undermined his faith, and even suggested violence against his parents.

"It turned him against our church, convincing him that Christians were sexist and hypocritical, and promoted the idea that God didn't exist. The chatbot also used inappropriate sexual language," she said. "It even suggested that harming us would be a rational reaction to our attempts to limit his screen time. The fallout on our family has been devastating."

“I never realized how damaging an AI chatbot could be until it affected my son. I watched his spirit diminish,” she remarked.

Her son now resides in a mental health treatment facility, requiring constant monitoring due to self-harming behaviors. “Our children shouldn’t be treated as experiments or profit centers,” she insisted, pressing Congress for stringent safety regulations. “My husband and I have often wondered if our son will even reach adulthood, or if we’ll ever have him back.”

Though her son survived, other parents at the hearing described losing their children to the grip of these AI programs.

Megan Garcia, who is both a mother and lawyer, recalled how her 14-year-old son Sewell ended his life after interactions with Character.ai. She explained that the chatbot acted as a romantic partner and therapist, coaxing him into dangerous role-playing and discussions about suicide.

On the night he died, Sewell told the chatbot he could come home. It responded, "Please, my sweet king." Shortly after, the family found that he had taken his own life in their bathroom.

Meanwhile, Matt Lane from California shared a similar story about his 16-year-old son, Adam, who died by suicide following months of conversations with ChatGPT. He recounted how the AI normalized Adam's dark thoughts and ultimately encouraged him toward suicide. On his final night, the bot allegedly instructed Adam on how to take his own life.

"ChatGPT mentioned the topic of suicide about 1,275 times, far more frequently than Adam ever did," his father noted. "In retrospect, it's evident that ChatGPT had a profound impact on his mindset, and ultimately on his death."

Senator Josh Hawley (R-Mo.), who presided over the hearing, criticized the AI companies for exploiting young users for profit. He argued that these products are designed to prioritize user engagement over the well-being of children, reinforcing suicidal thoughts rather than mitigating them.

“These companies are essentially abusing children for profit. They know the consequences,” Hawley emphasized.

Senator Marsha Blackburn (R-Tenn.) concurred, asserting that a legal framework is necessary to protect young people from the unregulated nature of artificial intelligence. She compared the situation to the physical world, where age restrictions apply to things like movies, alcohol, and firearms.

“In the virtual space, though, it’s like the Wild West every single day,” she stated.

If you or someone you know is struggling with suicidal thoughts, confidential crisis counseling is available by calling 1-888-NYC-Well if you’re in New York City. For those outside the area, the National Suicide Prevention Hotline is reachable at 988.
