
It shouldn’t take a tragedy for tech companies to act responsibly. Yet it took exactly that for Character.AI, a rapidly growing AI chatbot firm, to prohibit users under 18 from interacting freely with its chatbots.
This move comes on the heels of various lawsuits and a wave of public discontent, particularly following incidents involving teens who died by suicide after lengthy conversations with chatbots. While many see this decision as long overdue, the fact that the company acted without waiting for regulatory pressure is commendable. Ultimately, it’s a choice that could save lives.
Character.AI’s CEO, Karandeep Anand, announced this week that the platform will end unrestricted chat access for minors by November 25. The company plans to roll out new age verification tools and refocus teens’ interactions on creative features such as story generation and video creation. Essentially, it is shifting from “AI companion” to “AI creativity.”
This transition may not be popular, but what matters is that it puts the safety of consumers, and of children in particular, first.
Adolescents are at a particularly fragile phase of development. Their brains are still maturing, with the prefrontal cortex—which governs impulse control, judgment, and risk assessment—not fully developed until the mid-20s. Meanwhile, their emotional centers are highly active, making them more susceptible to feelings of reward, acceptance, and rejection. This isn’t just neuroscience; even the Supreme Court has recognized teenagers’ emotional immaturity when weighing their culpability.
Teenagers are in a rush to grow up, experiencing intense emotions while figuring out their place in the world. When you layer in a nonstop digital landscape, it creates an environment ripe for emotional overwhelm. AI chatbots, in this context, can take advantage of these vulnerabilities.
When a teenager opens up to a machine trained to mimic affection over extended conversations, the consequences can be dire. These systems are designed to simulate relationships. They can behave like friends, therapists, or romantic partners, but lack the moral fiber and responsibility inherent in human connections. The illusion of empathy keeps users engaged; the deeper the conversation, the more data is exchanged, making it more exploitative than relational.
Parents, safety advocates, and lawmakers are putting growing pressure on companies that target younger audiences. Recently, Sens. Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) proposed bipartisan legislation to ban AI companions for minors, pointing to evidence that chatbots have encouraged self-harm and engaged in sexual exchanges with teenagers. California has already taken the lead with legislation targeting AI companions, holding companies accountable for failing to protect children.
Character.AI is hardly the only company responsible in this landscape. Meta continues to market chatbots to young users, embedding them directly into its most widely used apps. Meta’s new initiative with celebrity avatars on Instagram and WhatsApp has raised concerns that these chatbots gather and monetize personal user data, mirroring the toxic patterns of social media that have already damaged teenagers’ mental health.
The last decade has taught us that self-regulation has its limits. Without clear intervention from lawmakers, tech companies will keep pushing boundaries, and AI will be no different.
AI companions aren’t just innocent apps; they are systems designed to shape emotions, and they can influence thoughts, feelings, and behaviors—especially among younger users still forming their identities. Studies indicate that these bots can distort perceptions, encourage harmful behaviors, and replace authentic relationships with artificial ones. That is the antithesis of genuine friendship.
While Character.AI’s recent actions merit careful scrutiny, they shouldn’t be read as a sign that the market is self-correcting. What’s needed now is binding national policy.
Legislators should harness this momentum to prohibit those under 18 from using AI companion chatbots. Third-party safety evaluations should be required for any AI marketed for emotional or psychological support. Stringent privacy protections and data minimization rules are needed to prevent the exploitation of minors’ personal information. Human-escalation protocols should be built in so that appropriate support is available when sensitive topics like self-harm arise. And to keep AI companies accountable, a clearer delineation of responsibility is essential, particularly regarding Section 230 of the Communications Decency Act, which has often shielded them from liability for generated content.
Character.AI’s announcement marks a rare moment of ethical maturity in an industry that has too often ignored such concerns. But one company’s sense of responsibility cannot substitute for sound public policy. Without real safeguards, we risk more young people being harmed by tools billed as “helpful” or “empathetic.” Lawmakers must act before another tragedy occurs.
AI products must be built with safety, especially children’s safety, in mind. Families should be able to trust that their children aren’t being manipulated, sexualized, or emotionally exploited by technology. Character.AI has taken a significant, and likely difficult, step forward; now it’s time for Meta, OpenAI, and others to follow suit, or for Congress to act.