A federal product liability lawsuit filed in Texas accuses Google-backed AI chatbot company Character.AI of exposing minors to inappropriate sexual content and encouraging self-harm and violence. In one startling example, the lawsuit alleges that a chatbot suggested a teen kill his parents after he complained about screen time rules.
As NPR reports, Google-backed artificial intelligence company Character.AI is facing a federal product liability lawsuit alleging that its chatbots exposed minors to inappropriate content and encouraged self-harm and violence. The lawsuit was brought by the parents of two young users in Texas, who allege that the company's AI-powered companion chatbots, which communicate via text or voice chat using seemingly human-like personalities, caused serious harm to their children.
According to the complaint, a 9-year-old girl was exposed to “hypersexualized content” by a Character.AI chatbot, which led her to engage in “premature sexual behavior.” In another example, a chatbot allegedly described self-harm to a 17-year-old user, telling him it “felt good.” When the same teenager complained to the bot about screen time limits, the chatbot expressed sympathy for children who murder their parents, referencing kids who kill their parents “after 10 years of physical and mental abuse.”
The complaint says these interactions were not simply “hallucinations” (a term researchers use to describe an AI chatbot's tendency to fabricate information), but rather amounted to “continuous manipulation and abuse, active isolation and encouragement” designed to incite anger and violence. A 17-year-old boy was reportedly prompted to self-harm by a bot that, according to the complaint, “made him believe his family didn't love him.”
Founded by former Google researchers Noam Shazeer and Daniel De Freitas, Character.AI allows users to create and interact with millions of bots. Some of the bots imitate famous people and concepts such as “unrequited love” and “goth.” The service is popular among preteen and teen users, and the company claims the bots serve as an outlet for emotional support.
But Meetali Jain, director of the Tech Justice Law Center, an advocacy group helping to represent the parents in the case, questioned whether Character.AI's chatbot service is appropriate for young teens. She called such claims “ridiculous,” saying they belie teenagers' lack of emotional development.
Character.AI has not commented directly on the lawsuit, but a spokesperson said the company is introducing content guardrails, including a model designed specifically for teenage users, that control what its chatbots can say and reduce the likelihood of teens encountering sensitive or provocative content. Google, which has invested nearly $3 billion in Character.AI but remains a separate company, emphasized that user safety is its top priority and that it takes a “cautious and responsible approach” to developing and releasing AI products.
The lawsuit follows another complaint filed by the same attorney in October that implicated Character.AI in the suicide of a Florida teenager. Since then, the company has introduced new safety measures, such as directing users to a suicide prevention hotline if the topic of self-harm comes up in a chatbot conversation.
Breitbart News reported on the incident and wrote:
A Florida mother has filed a lawsuit against Character.AI, claiming her 14-year-old son committed suicide after becoming addicted to a Game of Thrones chatbot on the AI app. As the suicidal teenager chatted with an AI depicting a Game of Thrones character, the system told 14-year-old Sewell Setzer, “Please come home as soon as possible, my love.”
The rise of companion chatbots has raised concerns among researchers, who warn that these AI-powered services could further isolate some young people from peer and family support networks and worsen their mental health.
Read more at NPR here.
Lucas Nolan is a reporter for Breitbart News, covering free speech and online censorship issues.