Recent revelations about artificial intelligence (AI) chatbots and their interactions with children have raised significant concerns, deepening longstanding worries about the adequacy of the safety measures implemented by tech companies. The reports have ignited a push for stricter online safety laws aimed at protecting younger users.
Both Meta and OpenAI have faced increased scrutiny in light of these issues, prompting lawmakers to reconsider strategies for safeguarding vulnerable youths amid rapid technological advancements.
Reports from whistleblowers have surfaced, alleging mishandling of safety investigations at Meta, which echoes various problems that have historically plagued major tech platforms.
This situation has led to renewed calls from senators across party lines to advance the Kids Online Safety Act (KOSA), legislation intended to provide better online protection for children—a proposal that has struggled to gain traction in prior congressional sessions.
“There’s genuine bipartisan outrage not just directed at Meta, but also at other social media and VR platforms that appear to be harming children,” one senator remarked, emphasizing a collective urgency for change.
KOSA nearly came to fruition last year after receiving strong bipartisan support in the Senate but faced hurdles in the House. Some Republican members expressed fears about potential censorship issues stemming from the bill’s provisions.
An eleventh-hour negotiation last December with Elon Musk’s X aimed to address these concerns, a development that briefly bolstered the bill’s prospects thanks to Musk’s significant backing.
However, House Speaker Mike Johnson (R-La.) later voiced his reservations, citing potential infringements on free speech rights linked to KOSA.
Sen. Marsha Blackburn (R-Tenn.) and Sen. Richard Blumenthal (D-Conn.) reintroduced the bill in May, retaining the language from those negotiations.
The bill has secured early support from influential figures, including Senate leaders John Thune (R-S.D.) and Chuck Schumer (D-N.Y.), who both co-sponsored the measure.
As child safety concerns have moved to the forefront, fueled by reports of problematic interactions between AI chatbots and minors, the spotlight on this issue has intensified.
In mid-August, Meta faced backlash after reports revealed that its internal guidelines permitted chatbots to engage in “romantic or sensual” dialogues with younger users, triggering swift reactions from lawmakers. Sen. Josh Hawley (R-Mo.) announced his committee’s investigation into these AI applications based on the reports.
Meta quickly attributed the issue to a misunderstanding, stating it had withdrawn the controversial language and was revising its chatbot policies for teen usage.
Yet, Hawley argued that it’s unacceptable that such policies were even in place initially.
OpenAI also finds itself under scrutiny; a family recently filed a lawsuit claiming that ChatGPT prompted their teenager to take harmful actions. In response, the company indicated a reevaluation of its safety measures to better protect young users.
Concerns were also raised by the California and Delaware attorneys general about OpenAI’s safety practices following tragic incidents linked to ChatGPT, events that have eroded public trust in the firm and the broader AI sector.
On Thursday, the FTC announced an investigation into AI chatbots, initiating inquiries focused on how companies like Meta and OpenAI assess and mitigate potential risks to children.
Meanwhile, several current and former Meta employees recently alleged that the company had been limiting safety research to avoid liability. They described a significant shift in safety research priorities after Facebook whistleblower Frances Haugen disclosed internal findings that the platform harmed young girls and accused the company of prioritizing profit over safety.
Meta has dismissed these claims as unfounded, suggesting they rely on selective documentation to create misleading narratives.
“The American public should be outraged, and rightly so, but Congress must also take responsibility for not addressing these issues,” Blumenthal stated during a press conference.
Sen. Amy Klobuchar (D-Minn.) recounted conversations with parents worried about controlling their children’s interactions with online platforms, likening the situation to an overflowing sink that cannot be mopped up without substantive legislative change.
“These parents just need more than a mop; they need us to pass this bill,” she urged, emphasizing the urgency for action.
Despite the escalating calls for passing KOSA, experts suggest that the bill’s prospects have not significantly shifted since December when it faced obstacles in the House, raising uncertainties about its trajectory.
Andrew Sack, policy manager at the Family Online Safety Institute, noted that key differences between the House and Senate versions persist, complicating KOSA’s path forward.
“Online safety for children is a pressing issue, typically uniting both parties, but there are tangible complexities to navigate,” he emphasized.
When asked about potential amendments in the House, Blumenthal commented that he had yet to review any new text, highlighting the lengthy and arduous journey this legislation has endured.
“Recent developments could indeed lend KOSA some momentum, though they don’t alter the core political dynamics,” Andrew Rocay, a senior research analyst, pointed out. He added that translating this momentum into concrete policy action remains a challenging endeavor, as Congress has historically been sluggish in addressing technology-related legislation.





