Meta Reduces Censorship Mistakes by 90% Following Zuckerberg’s Focus on Free Speech

Meta Reports Significant Reduction in Content Moderation Errors

Meta has announced a more than 90% reduction in what it terms content moderation “errors” on Facebook and Instagram since CEO Mark Zuckerberg shifted the company’s approach to free speech back in January.

In the Q3 2025 Integrity Report, Meta introduced a new global metric that evaluates the “accuracy of enforcement,” revealing a notable decline in accidental deletions, even as some content rules have been relaxed.

According to Meta representative Frances Brennan, “Since we began working to reduce over-policing, we have seen a global reduction of more than 90% in weekly mis-policing across Facebook and Instagram.” She noted that of the hundreds of billions of posts created during the quarter, fewer than 0.1% were deleted by mistake.

The report adds that of the immense volume of content posted globally during the quarter, less than 1% was removed for policy violations. Meta claims enforcement accuracy rates above 90% for Facebook and 87% for Instagram. In other words, even though roughly 10% of removals were made in error, fewer than 1 in 1,000 posts overall were taken down by mistake.

The results follow Meta’s January announcement, in which the company pledged to permit more free expression after facing criticism over opaque and overly strict rules. Since then, Meta has concentrated enforcement on serious violations and applied its automated moderation systems more selectively.

The prevalence of content violations remained largely consistent across categories, with a few exceptions: upticks in adult nudity, graphic violence, and bullying on Facebook. According to the report, these spikes do not necessarily reflect an increase in actual violations; they are more likely due to changes in how reviewers are trained and how content samples are evaluated.

The report also highlighted new AI tools for enforcement and other integrity work, including a large language model that reportedly outperforms Meta’s current automation and, in some cases, human reviewers. Meta additionally noted that it forwarded more than 2 million “cybertips” related to child exploitation to the National Center for Missing and Exploited Children during the quarter.
