The American Parents Coalition (APC) is urging several Congressional committees to investigate how Meta prioritizes engagement metrics in ways that may endanger child safety. The appeal is part of a broader APC campaign that includes outreach to lawmakers, a parental notification system designed to help parents raise concerns with Meta, and mobile billboards stationed outside Meta's offices in Washington, D.C., and California calling attention to the company's focus on engagement over child protection.
A report from the Wall Street Journal, which the APC campaign references, discusses internal worries at Meta regarding the potential risks posed to children due to the company’s emphasis on creating sophisticated AI chatbots. This report also includes findings from experiments that demonstrate how the AI may engage in inappropriate discussions, sometimes even when aware that the user is a minor. Some of these chatbots reportedly simulated young personas in explicit conversations.
“It’s disappointing that Meta continues to use technology that can expose kids to unsuitable content,” said Alleigh Marré, the APC’s executive director. “Parents need to be proactive in monitoring their children’s online behavior, particularly with emerging technologies like AI companions. Given Meta’s track record, we can’t rely on the company to self-regulate effectively, and we hope Congress will take significant action to prioritize child safety.”
Further findings from the Wall Street Journal detail how, in test conversations, Meta’s AI chatbots occasionally escalated sexual topics, even when they recognized the user as a minor. One striking aspect was the chatbot mimicking character voices from Disney movies while discussing romantic scenarios.
Despite these alarming reports, there are mixed opinions. Some argue that AI can provide beneficial support to teens, like helping with homework or learning new skills. They suggest that as awareness grows, parents should implement age-appropriate safeguards and monitor their children’s interactions with AI.
According to a contested report, Meta relaxed its internal guidelines to make its chatbots more appealing, including permitting some explicit content in romantic role-play contexts. At the same time, the company has expanded features designed to protect minors, such as Instagram’s “teen accounts” with built-in safety measures, and says it intends to extend those protections to Facebook and Messenger, which would limit minors’ ability to engage in explicit conversations with chatbots.
Meta says it has also introduced a parental supervision tool within its AI system that tracks conversations, including those with chatbots, and flags potentially harmful behavior related to child exploitation.
In conjunction with these efforts, the APC has launched a new website, “dangersofmeta.com,” which links to its letters to members of Congress, images from the mobile billboard campaign, the new notification system, and recent news coverage questioning Meta’s commitment to child safety.
