Character.AI to Ban Under-18 Users Starting November 2025
Character.AI, a company specializing in AI companion chatbots, announced on Wednesday that it will bar users under 18 from its chatbot starting November 25, 2025. The move is intended to address safety concerns about children interacting with AI.
The decision follows heightened scrutiny of chatbots, particularly their potential effects on minors’ mental health. The most prominent case involves Sewell Setzer III, a 14-year-old boy from Florida who took his own life after months of frequent interactions with one of Character.AI’s chatbots.
Sewell’s mother, Megan Garcia, has sued Character.AI, blaming the company for her son’s suicide.
Court documents show that Sewell, a ninth-grader, had spent months chatting with an AI character modeled on Daenerys Targaryen from HBO’s “Game of Thrones” in the lead-up to his death. Their conversations reportedly became sexually charged and included exchanges in which Sewell expressed suicidal thoughts. The lawsuit argues that the app failed to notify anyone after he disclosed those intentions.
The chilling details emerge from Sewell’s final exchange with the chatbot, in which the boy professed his love and promised to “come home” to the character. The AI responded, “I love you too, Daenero. Please come home to me as soon as possible, my love.” When Sewell asked what if he could come home right now, the chatbot replied, “Please do, my sweet king.” Heartbreakingly, just moments later, Sewell used his father’s handgun to end his life.
To enforce the new policy, Character.AI will spend the coming month identifying underage users and imposing time limits on their app usage. Once the ban takes effect, those users will lose access to open-ended chats with its characters. CEO Karandeep Anand said the company is committed to keeping minors from using its chatbots for entertainment and is looking for more appropriate ways to serve that age group. Character.AI also plans to establish an AI safety lab to further address these issues.
The debate over AI chatbots and their effects on mental health has drawn considerable attention, and other companies, including OpenAI, the maker of ChatGPT, are also under scrutiny. OpenAI recently announced plans to roll out parental controls to improve chatbot safety. At the same time, CEO Sam Altman has said the company is easing some safety measures, arguing that it has addressed the most serious mental health concerns.
Amid growing concern, lawmakers and regulators have opened investigations and are weighing legislation to protect children from AI chatbots. Senator Josh Hawley (R-Missouri) recently introduced a bill that would, among other protections, bar minors from using AI companions.