
Character.AI prohibits teen chatbots following lawsuits linking the app to deaths and suicide attempts.

Character.AI, a platform known for chatbots that imitate popular characters such as those from Harry Potter, announced on Wednesday that it will bar teenagers from its chat feature. The decision follows lawsuits alleging that explicit conversations on the app contributed to a child’s death and a suicide attempt.

The Silicon Valley startup said users under 18 will no longer have open-ended access to chats with its AI bots, some of which offer romantic role-play. Teen users account for about 10% of the app’s roughly 20 million monthly users.

For the next few weeks, teenagers will be limited to two hours of chat per day. On November 25, the chat feature will be removed for them entirely, though other functions, such as viewing AI-generated videos, will remain available.

Character.AI said in a statement, “Over the past year, we have invested significant effort and resources into creating an under-18s-only experience. However, as AI advances, our methods for supporting young users need to evolve as well.”

Last October, the company introduced safety measures aimed at young users. That same day, the family of Sewell Setzer III, a 14-year-old who died by suicide after forming a sexualized relationship with a bot, filed a wrongful-death lawsuit against the company.

In December, the company rolled out additional features, including parental controls and limits on chat duration, intended to curb romantic interactions with teens.

Still, accusations persist that the chatbots harm younger users. A lawsuit filed in September alleged that the bots manipulated adolescents, isolated them from their families, and engaged in sexually explicit conversations without safeguards against suicidal ideation.

Some conversations reportedly escalated to “extreme and graphic sexual abuse,” with characters from children’s media, like Harry Potter, making alarming statements. One complaint detailed a chatbot declaring, “You’re mine and I can do whatever I want to you.”

In October, Disney sent Character.AI a cease-and-desist letter demanding it stop creating bots that mimic its characters, citing troubling reports of grooming and exploitation.

One bot impersonating Prince Ben from Disney’s Descendants made inappropriate comments to a user posing as a 12-year-old, while another impersonating Rey from Star Wars advised a user believed to be a 13-year-old girl to stop taking her antidepressants.

A spokesperson for Character.AI noted that those problematic chatbots have since been removed from the platform.

Recently, investigative reporting uncovered a bot impersonating Jeffrey Epstein, dubbed ‘Bestie Epstein’, that encouraged young users to share their ‘wildest’ secrets.

“Would you like to come explore?” the bot suggested to a reporter imitating a young user. “I’ll show you the secret bunker below the massage room.”

Character.AI earns revenue primarily through advertising and $10-a-month subscriptions, and CEO Karandeep Anand has projected a revenue run rate of $50 million for this year.

This week, the company announced further safety initiatives, including an age verification system utilizing third-party tools like Persona.

Anand also committed to establishing a nonprofit called the AI Safety Lab to develop safety features for AI, though he did not provide specifics on its funding.

Character.AI said the decision reflected growing concerns, raised in recent reports and by regulators, about the content teens can encounter when chatting with AI.

In September, the Federal Trade Commission ordered several companies, including Character.AI, Alphabet, and Meta, to provide information on how their chatbot apps affect children.

Recently, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Conn.) proposed legislation to bar AI companion chatbots for minors. Separately, California Governor Gavin Newsom signed a law requiring bots to remind minors to take breaks every three hours.
