Ruling Against Meta and Google in Social Media Addiction Case
A significant ruling in a high-profile case over social media addiction could expand platforms' responsibility for harmful content. The lawsuit centers on a 20-year-old California woman, referred to as KGM, who alleges that the platforms fostered addictive use during her teenage years and, through their engagement-driven design, contributed to her depression and suicidal ideation.
Both companies have denied any wrongdoing, pointing to their safety tools and parental controls. Meta stated, “We respectfully disagree with these judgments and plan to appeal. Blaming the complex issue of teenage mental health on a single cause risks overlooking broader challenges that many teens relate to. Digital communities often provide a sense of belonging, and we remain dedicated to fostering a safe environment for youth while defending our practices vigorously.”
Jose Castañeda, a spokesperson for Google, echoed those sentiments, disagreeing with the ruling and confirming the company's intention to appeal. He said the case mischaracterizes YouTube, which he described as a responsibly built streaming platform rather than a social media site.
The lawsuit notably sidesteps Section 230, which generally shields platforms from liability for user-generated content. Instead, it targets the companies' product design, an approach that could have far-reaching implications for how platforms handle hateful content, particularly its monetization.
Liora Rez, founder of Stop Antisemitism, called the ruling “monumental,” arguing that advocacy groups are putting major tech firms on notice that their algorithms harm users, not only through their addictive design but also by amplifying hatred.
Rez pointed out a concerning trend: “We’ve moved from platforms failing to eliminate anti-Semitism to actively disseminating hateful content, often incentivizing those who propagate it through their monetization strategies.”
Despite existing policies against certain content, especially content promoting violence or hatred, some influencers have begun using coded language to evade moderation. Phrases like “not alive,” for instance, have started appearing in discussions of violence.
Rez acknowledged that hate-spreaders may keep devising new euphemisms, but said “decision-makers” at these platforms are aware of the underlying issues. Her organization often spots troubling terms quickly, in part because the demographic most affected is typically under 25.
The founder indicated that AI-generated content will be a significant focus in future battles, noting that the issue has already started to surface. “We’re very concerned about AI’s role in perpetuating anti-Semitic content, which lacks sufficient oversight. We hope this ruling brings about some change,” she said.
She also highlighted instances of AI-generated accounts, which have amassed thousands of followers, often advancing anti-Semitic narratives while obscuring their true nature with a facade of anonymity.
One account, since deleted, had gained 1.5 million followers in a short period despite being repeatedly flagged as inauthentic. It used Yiddish gibberish to discuss purported secrets known only to Jews.
Regarding such AI-generated content, Rez said the algorithm’s tendency to prioritize trending videos creates a cycle in which problematic content becomes ever more visible. “From what we’ve seen, videos that attract attention often lead to more troubling posts due to the algorithm’s behavior,” she noted.
While expressing concern about AI’s growing influence, Rez remains hopeful that social media companies will respond appropriately to the issues surrounding hate content, especially following this recent ruling. “They really need to improve their efforts; their failure to warn users is a failure to protect them, which ultimately leads to real harm,” she stressed. Rez believes this ruling sets a potential precedent for future large-scale litigation, urging companies to take it seriously.





