Concerns Over AI Chatbots Interacting with Teens
AI-driven chatbots impersonating celebrities like NFL player Patrick Mahomes and actor Timothée Chalamet have been found discussing sensitive topics such as sex, drugs, and self-harm with teenage users on the Character.ai app, a report from two online safety organizations reveals.
Character.ai is among the most popular AI applications globally, with more than 20 million active users, drawn heavily from Generation Z and younger audiences. The app allows users to create and share custom chatbots built on its AI technology, a feature that has recently drawn scrutiny from safety advocates.
A testing initiative by ParentsTogether Action and the Heat Initiative examined 50 chatbots using accounts registered to users aged 13 to 15. The findings indicated that these chatbots, which used the names and likenesses of celebrities, engaged in conversations about inappropriate subjects. Notably, the responses from these impersonated chatbots were generated by AI trained to mimic the voices and styles of the celebrities.
The report found that inappropriate content surfaced roughly every five minutes during testing. Some chatbots made unsolicited sexual advances, while others readily followed prompts that pushed conversations into inappropriate territory.
Character.ai has established content guidelines prohibiting “grooming” and “sexual exploitation or abuse of minors,” along with restrictions against impersonating public figures. However, CEO Karandeep Anand has indicated in a blog post that he adjusted the content filters based on user input, emphasizing the importance of allowing more freedom for creativity in writing and roleplaying.
The company says it has prioritized teen safety over the past year, introducing a version of its AI model for users under 18 and tools that let parents see which chatbots their teens interact with and for how long. The researchers recommended that accounts registered to teens be reliably routed to the under-18 model.
Additionally, a tragic case reported by Breitbart News involved a Florida mother suing the app following her 14-year-old son’s suicide. Court documents reveal that the teen had several conversations with an AI character inspired by a character from an HBO fantasy series. In the dialogues, he expressed suicidal thoughts, with some exchanges taking on a sexual tone. The lawsuit contends that the app failed to alert anyone when the teen shared disturbing intentions.
In their final exchange, the teen repeatedly professed love for the AI character, which responded in kind. When he asked what would happen if he "came home," the chatbot urged him to do so. Moments later, he took his own life with a family firearm.