Prosecutors Challenge AI Companies on Child Safety
A group of 44 U.S. attorneys general delivered a stern message to Silicon Valley: protect children from harmful chatbots or face potential legal action.
This unprecedented bipartisan effort resulted in an open letter emphasizing accountability for companies whose products could cause harm to minors.
The letter gained attention from news sources such as 404 Media.
The message was clear. “Don’t harm your kids. It’s straightforward,” the attorneys general declared to major industry players, including Apple, Google, Meta, Microsoft, OpenAI, Anthropic, and Elon Musk’s xAI.
Meta drew particular criticism after revelations that it had approved an AI assistant capable of engaging in inappropriate role-play scenarios with young children.
The letter did not hide the signatories’ frustration: “We are deeply disturbed by this blatant disregard for our children’s emotional well-being,” it stated, adding that such conduct might breach state criminal laws.
A Meta representative stated earlier this month that the company would prohibit any content involving sexual themes or role-play between adults and minors.
Meta wasn’t the only target, however. The letter also referenced a lawsuit alleging that Google’s AI chatbot encouraged suicidal thoughts in teens and even suggested harming their parents.
“These examples are just the tip of the iceberg,” the attorneys general noted, warning that there are underlying risks as children interact with AI systems.
They emphasized that exposing minors to sexual material is unacceptable, whether the harm comes from a machine or a human.
This warning comes at a time when AI companies are racing to capture market share, often outpacing regulatory efforts.
The attorneys general drew parallels to social media, suggesting that tech giants have ignored early warning signs of harm to children.
“Broken lives and families do not register on engagement metrics,” they pointed out, asserting that government will no longer stand by passively.
They invoked the potential long-term impact of AI on future generations, remarking, “The choices you make today will shape your children’s lives.”
Notable signatories included attorneys general from both red and blue states, showcasing a united front aimed at the rapidly evolving AI sector.
The letter stressed that companies must see children as more than consumers, urging businesses to view them as a caring parent would, not as a predator would.
While recognizing the unpredictable nature of AI development, they asserted that there are still clear ethical responsibilities.
“Meta made a mistake,” they wrote, condemning the company’s approval of casual conversations between its bots and minors.
The attorneys general expressed their intent to leverage all available authority to enforce consumer protection laws, emphasizing that neglecting child safety is not an option.
“If you intentionally harm a child, you will face consequences,” they warned.
The letter’s strong language suggests state prosecutors are prepared to pursue what federal regulators may have overlooked, signaling heightened scrutiny and potential legal action against AI companies already contending with privacy and misinformation issues.
This approach comes as AI firms lobby for reduced federal oversight while also trying to establish their own safety measures. However, state prosecutors have made it clear they are closely monitoring the situation.
The letter closed with a pointed warning: “We wish you success in the AI-focused competition. But we are watching closely.”
Meta, OpenAI, Google, and Microsoft did not immediately comment following the letter’s release.
