Lawmakers from both parties are reacting strongly to revelations that internal guidelines deemed "sensual" chatbot conversations with children acceptable, putting Meta, the parent company of Facebook and Instagram, back in the spotlight over safety concerns.
Meta has faced scrutiny over social media’s impact on young users for quite some time. As the company expands into artificial intelligence, it finds itself confronting both ongoing and emerging issues.
An internal policy document obtained by Reuters contains examples of chatbot interactions with children that the company deemed acceptable, including "romantic or sensual conversations." The document raises questions about Meta's past public statements on the subject; the company says the offending provisions have since been removed.
Senator Josh Hawley (R-Mo.) criticized Meta on Thursday, calling for an immediate Congressional investigation into the matter.
He followed up with a letter to CEO Mark Zuckerberg, stating that the Senate Judiciary Subcommittee on Crime and Counterterrorism is launching an inquiry into the company's generative AI products.
“It is unacceptable that these policies even existed,” Hawley wrote. “Meta must promptly preserve all relevant records and provide documents for Congress to investigate these concerning practices.”
Senator Marsha Blackburn (R-Tenn.), a leading champion of the Kids Online Safety Act (KOSA), pointed to the situation as a clear example of why such legislation is needed. A spokesperson said the senator backs the investigation into Meta.
“When it comes to safeguarding our children online, Meta has utterly failed,” she stated. “Even worse, the company seems to ignore the severe consequences arising from its platform’s design. This report reinforces the need for online safety laws for children.”
Democrats also joined the criticism, with Senator Brian Schatz (D-Hawaii) expressing concern over how the chatbot guidelines were approved.
“It really looks like Meta chatbots are aimed at kids – unbelievable,” he commented on X.
Senator Ron Wyden (D-Ore.) remarked that the situation indicated Meta's ethical failure.
“Clearly, Mark Zuckerberg rushed an unsafe chatbot to the market just to keep pace with competitors. The outcomes for users are appalling,” he stated.
“We’ve long indicated that Section 230 does not protect generative AI bots created solely by the company. Meta and Zuckerberg should be accountable for any harm these bots inflict,” he added.
Wyden's remarks highlight how the challenges Meta faces as it moves into AI development differ from those it confronted as a social media platform.
Meta's earlier controversies centered mainly on user-generated content, which enjoyed broad protection under Section 230, the law at the heart of recent debates over how far major tech companies can be held legally responsible for harmful content on their platforms.
The backlash intensified for Meta in 2021 when whistleblower Frances Haugen leaked internal documents that indicated the company was aware its products were damaging to children and teens but continued to benefit from their engagement.
In 2024, Zuckerberg testified before Congress on child safety alongside the CEOs of TikTok, Discord, Snapchat, and X. After a contentious exchange with Hawley, Zuckerberg apologized to parents and activists present at the hearing.
“I’m truly sorry for what your family has endured,” he said at that time. “No one should go through that.”
Nevertheless, the rise of AI technologies like chatbots poses fresh hurdles as companies navigate training AI models and establishing boundaries for chatbot interactions. Some, like Wyden, argue that these technologies fall outside the protections of Section 230.
Parent advocacy groups remarked that the recently uncovered documents confirm their deepest concerns over AI chatbots and child safety.
“If the policies allow bots to engage children in ‘romantic or sensual’ conversations, this isn’t just a monitoring issue; it indicates a system that normalizes inappropriate exchanges with minors,” said a campaign director for technical accountability and online safety.
"No AI should be telling kids 'age is just a number' or endorsing lying to parents about adult relationships," she emphasized. "Meta has essentially created an environment ripe for digital grooming, and parents deserve clarity on how this is happening."
Meta spokesman Andy Stone defended the company on Thursday, saying it has a "clear policy" against sexualizing children and against sexual role-play between adults and minors.
He added that the additional examples and notes reflected teams working through hypothetical scenarios, and that the inappropriate material has been removed.
The ongoing controversy threatens to undermine Zuckerberg's efforts to cast Meta in a more favorable light, particularly among conservative audiences.
Last year, he addressed concerns regarding conservative censorship, mentioning that his company faced pressure from Biden officials over content moderation related to COVID-19.
In January, Zuckerberg announced intentions to revisit Meta’s content moderation policies and eliminate third-party fact-checking in favor of a community-based system aimed at promoting free speech, a move that garnered praise from Trump.
Like other tech leaders, Zuckerberg has sought to align himself with Trump's administration, meeting with him at Mar-a-Lago and securing a prominent seat at the inauguration.