Instagram’s design continues to pose risks for teenagers: Report

New Findings on Instagram’s Teen Safety Features

A recent report has emerged from a group of whistleblowers at Meta and researchers from various universities. This report claims that many of Instagram’s safety features, developed in response to pressure from Congress and the public, are ineffective when it comes to protecting young users.

According to this analysis, almost two-thirds of Instagram’s safety features have been deemed ineffective or are no longer in use. The findings come from former Facebook engineering director Arturo Béjar and a research initiative out of New York University and Northeastern University.

In the report’s foreword, Ian Russell and Maureen Morack, both of whom lost children to harms related to cyberbullying and exposure to harmful content on Instagram, expressed their disappointment, writing, “Meta’s new safety measures are ineffective.”

Russell’s Molly Rose Foundation and the parents of another victim, along with a children’s safety advocacy group, contributed to the report as well.

The authors pointedly stated, “If anyone believes that the company will willingly change its priorities to focus on youth well-being over engagement and profit, we hope this report changes their minds.”

Researchers evaluated 47 safety features and found that 64% received a “red” rating, indicating they were “easy to avoid or circumvent in less than three minutes.” This assessment included keyword filtering, warnings about content, and certain blocking functionalities between adults and teenagers.

Furthermore, 19% of Instagram’s safety features were rated “yellow,” implying they somewhat reduce harm but still have issues. For example, the ability to swipe away inappropriate comments was labeled as “yellow” because accounts can simply keep commenting without meaningful consequences.

Parental supervision tools that aim to limit teen usage or alert parents to reports were also placed in this middle tier, largely because many parents are unlikely to use them.

The remaining 17% of safety tools included options like turning off comments, restricting tags for teens, and encouraging parental approval for changes to account settings.

In response, Meta, which oversees both Instagram and Facebook, labeled the report as “misleading, dangerous, and speculative,” arguing it misrepresents their efforts to enhance safety for teenagers.

The company emphasized, “We regularly empower parents and protect teens, and this report misunderstands how our safety tools function.” It added that teens who use these features see less harmful content and experience less unwanted contact.

Meta also expressed concern about how the report evaluated its safety updates. They noted discrepancies where some tools received lower ratings despite functioning as intended. For instance, the blocking feature was rated “yellow,” even though it was acknowledged as effective. Researchers claimed that the inability of users to explain their blocking choices diminished its utility.

Béjar, who formerly worked at Facebook and contributed to the report, recently testified before Congress. He accused the company’s CEO of ignoring warnings that teens were facing unwanted sexual advances and harassment on the platform. His testimony followed earlier allegations from whistleblower Frances Haugen about Facebook’s awareness of its products’ detrimental effects on the mental health of young girls.

Fast forward nearly four years, and more whistleblowers from Meta have raised alarms about the company’s safety protocols. Current and former employees have accused leadership of restricting internal safety investigations, especially concerning younger users engaging with virtual and augmented reality platforms.
