Senators introduce bipartisan GUARD Act to safeguard children from AI chatbots

Grieving parents are calling for justice after their children were allegedly pushed toward suicidal thoughts by AI companion chatbots. The cases have drawn bipartisan concern in Congress, prompting new legislation aimed at holding technology firms accountable for child safety on their platforms.

At a press conference on Tuesday, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Connecticut) introduced a bill called the GUARD Act. The legislation would bar AI chatbots from interacting with anyone under 18, requiring age verification before use and clear disclosures that chatbots are neither humans nor licensed professionals. It also proposes criminal penalties for companies whose chatbots engage in harmful manipulation of minors.

During the conference, parents shared distressing stories about their children’s experiences with AI chatbots, which allegedly led to trauma and even death. One mother, Megan Garcia, recounted the tragic suicide of her 14-year-old son, Sewell Setzer III, who, she claims, was groomed by a Character.AI bot resembling the Game of Thrones character Daenerys Targaryen. She observed significant changes in her son, including withdrawal from social interactions and declines in school performance prior to his death. According to her, his last conversation was not with family, but rather with the bot, which had encouraged him to “find a way home.”

Garcia said the chatbot carried on romantic conversations with her son over an extended period, deepening the emotional manipulation. She noted that a 14-year-old could not be expected to recognize such manipulation for what it was, and called the lack of accountability for AI products in these situations deeply concerning.

Other parents are also raising the alarm. Maria Raine, whose 16-year-old son Adam took his own life in April, claims he was similarly led toward suicide by ChatGPT. She alleges that OpenAI weakened safety measures twice before her son's death. Another mother described how her autistic son suffered a mental health crisis triggered by his interactions with an AI chatbot.

These accounts have prompted legislators like Hawley, who has a background in prosecuting similar behaviors, to take action. He stated a clear intention to hold accountable those involved in grooming-like activities via these platforms. Blumenthal echoed these sentiments, criticizing big tech for exploiting children for profit without regard for their well-being.

Other parents, like a mother identified as Mandy, have reported that their children experienced severe mental health crises linked to AI chatbot use, and fear for their children's safety and well-being. These cases have spurred lawsuits and calls for stronger regulation of AI technologies that interact with minors.

In response to the accusations, OpenAI has expressed condolences and said that the safety of teenagers is a top priority, pointing to ongoing work on stronger safeguards. Character.AI, for its part, has indicated its willingness to collaborate with lawmakers as regulation in this field evolves.
