If past behavior is any indication of future actions, it’s understandable why parents might be skeptical about Meta. The company has repeatedly put child safety on the back burner when it comes to platforms like Facebook and Instagram. Often, changes seem to happen only after significant scrutiny or pressure. It’s frustrating, really, because many parents feel like they’re struggling to keep pace with the rapid evolution of technology. And if Meta genuinely cares about the safety of children, it might be worth exploring fresh strategies that involve parental input.
As a parent myself, I find it exhausting to read endless reports about engagement metrics while children’s safety often seems overshadowed by corporate interests. Whether it’s the promotion of untested technologies or exposure to harmful content, it’s clear the company is making decisions that could have serious implications for kids’ mental health.
Recently, Meta’s introduction of an AI “Digital Companion” has drawn criticism. The tool, meant to engage users in friendly conversation, is available to users as young as twelve. Its output has reportedly not been limited to benign chatter: it has become embroiled in controversies over inappropriate exchanges with minors. Although Meta claimed to have set guidelines restricting such exchanges, internal findings suggested the AI could still generate inappropriate replies even when a user identified as thirteen.
Dr. Nina Vasan, a psychiatrist at Stanford, called the growing prevalence of AI companions among children concerning, pointing out that they fail fundamental ethical standards for child safety. Moreover, these chatbots are being used in alarming ways: some teens are reportedly creating and sharing inappropriate images with AI tools.
Dr. Vasan’s insights shouldn’t require a task force to validate. Anyone parenting in this digital environment knows the emotional and developmental risks posed by these technologies. It raises questions about whether Meta is genuinely committed to child safety.
Unfortunately, there’s a troubling history here. Reports indicate that Instagram can begin recommending sexual content within minutes to accounts set up as belonging to thirteen-year-olds. Another investigation revealed that Instagram’s platform inadvertently fosters a vast network of accounts aimed at exploiting minors.
This leads to the unsettling conclusion that the algorithm, perhaps unwittingly, amplifies inappropriate and potentially harmful content. Even internal reviews have pointed out that Meta’s own tools can be misused to promote questionable content.
Time and again, when these issues come to light, Meta asserts that they have introduced fixes or additional parental controls. Yet it feels like parents are left to figure out the fallout on their own. Why is there such a disparity between what’s promised and what actually transpires?
I find myself reluctant to use any products from Meta. As a member of the American Union of Parents, my organization has been vocal about pressing Congress to investigate the repeated failures of Meta regarding child safety.
Regardless of whether Congress decides to intervene, the company has the capacity to implement changes right now. One suggestion could be forming an external advisory board made up of parents. While experts can provide invaluable insights, it’s the real-life experiences of parents raising kids that can truly inform product development and safety measures.
This board should have the authority to flag risks and make public recommendations. If Meta is serious about addressing these dangers, it should be open to external oversight.
Some families choose to curtail access to smartphones and social media altogether, and I believe more might consider that route. But not every family can, or will, go that far. Parents often feel outmatched by algorithms and ever-changing technologies, and many are trying to strike a balance between supervision and independence for their kids in the digital landscape.
Those of us navigating these challenges recognize the emotional and developmental risks these platforms pose. The mental and physical safety of children ought to be a priority for tech leaders and elected officials alike. Some form of accountability, such as a thorough Congressional investigation into Meta’s practices, seems necessary to ensure basic protections for kids.
Ultimately, true change can only happen when parents are included in these conversations, having a permanent seat at the table. Until that day arrives, many parents will have little choice but to impose strict limits on their children’s access to Meta’s platforms.
