Instagram will alert parents if teenagers frequently search for topics related to suicide.

Instagram to Notify Parents About Kids’ Searches on Self-Harm

Instagram announced it will start notifying parents when their children frequently search for terms associated with suicide or self-harm. However, this notification system will only apply to parents who are part of Instagram’s parental supervision program.

The platform already blocks a lot of this harmful content from appearing in searches by teen accounts and instead directs users to available helplines.

This announcement comes as Meta faces two ongoing trials concerning its impact on children. One trial, in Los Angeles, examines whether Meta’s platforms purposely create addictive environments that can harm minors. Another, in New Mexico, investigates whether Meta has done enough to protect young users from sexual exploitation. Numerous families, along with schools and government agencies, have filed lawsuits against Meta and other social media companies, claiming they intentionally design their services to be addictive while failing to shield children from content that can contribute to depression, eating disorders, and even suicidal thoughts.

Meta’s executives, including CEO Mark Zuckerberg, have disputed the idea that their platforms foster addiction. Questioned in court by the plaintiffs’ attorney in Los Angeles, Zuckerberg maintained that he still believes there is insufficient scientific evidence that social media causes mental health problems.

The notifications will be sent to parents via email, text, or WhatsApp, based on the contact information they provided, along with alerts through the parent’s Instagram account.

Meta stated, “Our goal is to empower parents to step in if their teen’s searches indicate they may need support. We also want to avoid sending these notifications too frequently, as excessive alerts could diminish their overall utility.”

Additionally, Meta mentioned they are developing similar notifications for parents regarding their children’s interactions with artificial intelligence.

“These notifications will alert parents if a teen engages in conversations related to suicide or self-harm with our AI,” Meta indicated. “This is crucial work, and we’ll have more updates in the coming months.”
