Instagram Introduces New Safety Measures for Minors
Mark Zuckerberg’s Instagram has launched updated safety features aimed at protecting teens and children, after the company revealed that it had blocked close to 135,000 accounts for predatory behavior.
Meta has drawn criticism from federal and state authorities for failing to safeguard children. On Wednesday, however, the company announced that its safety team had “blocked nearly 135,000 accounts that left sexual comments or sought sexual images from adult accounts featuring children under 13.”
“We have also removed an additional 500,000 Facebook and Instagram accounts tied to these original accounts,” the company detailed in a blog post.
The company added, “We have informed users that we are deleting accounts that have interacted inappropriately with content, urging them to remain cautious and report any issues.”
Teenagers now see added context when they are messaged by unknown accounts, including details about the account’s age along with safety tips, Instagram said in a blog post on Wednesday.
Instagram has also revamped its “block and report” feature for teen accounts, allowing users to report and block a suspicious account in a single step rather than handling each action separately.
In June, teenage users reportedly blocked one million accounts and made over a million reports after receiving safety notifications.
The company is under pressure to reassure the public that Facebook and Instagram are safe spaces for users.
The safety push comes amid an FTC antitrust trial over whether Meta should be forced to spin off Instagram and WhatsApp. Internal documents presented in the case show that Meta officials are increasingly concerned about “groomers” targeting children.
Last year, Instagram automatically moved users under 18 into “teen accounts” and began restricting their interactions with accounts that do not follow them.
Additionally, the platform has introduced features designed to shield younger users from messages containing nude images.
Earlier this year, a bipartisan group of US senators revived the Kids Online Safety Act, which would establish legal “duties of care” requiring Meta and other social media firms to protect young users from harm.
The bill previously passed the Senate by a lopsided 91-3 vote but has since stalled in the House.