Meta’s Content Moderation Report Reveals Significant Error Reduction
On Thursday, Meta published its third-quarter content moderation report, highlighting a substantial reduction in global enforcement errors since the company shifted away from third-party fact-checking and other heavy-handed moderation practices.
According to the report, weekly enforcement errors on Facebook and Instagram have dropped by more than 90 percent, meaning that of the immense volume of content posted, less than 0.1 percent was mistakenly removed.
In January, Zuckerberg announced a significant overhaul of the company's content moderation strategy, moving away from strict fact-checking toward a model that promotes free speech and reduces censorship. In May, Meta reported a 50 percent decline in enforcement errors since Donald Trump took office, as stated in its first-quarter 2025 report.
Meta has assessed its accuracy in content removals, claiming that over 90 percent of deletions on Facebook and more than 87 percent on Instagram were justified.
According to Meta, this means roughly one in ten removed pieces of content, and fewer than one in a thousand overall, was deleted by mistake.
In the report, Meta acknowledged ongoing challenges, particularly around adult nudity, sexual content, and violence on both platforms. It also reported a rise in actioned bullying and harassment content on Facebook, attributing the increase primarily to improvements in reviewer training and workflow.
Meta also noted a 16.3 percent global increase in government requests for user data. India led with a 31.9 percent increase, followed by the U.S. at 8.6 percent, then Brazil, Germany, and France.
In the U.S. alone, there were 81,064 requests during the first half of 2025, an 8.6 percent increase. Notably, 77.3 percent of those requests came with confidentiality orders barring Meta from informing the users involved, and emergency requests made up about 6 percent of the U.S. total.
Meta also reported that it has been experimenting with artificial intelligence for content moderation, claiming that AI has outperformed human reviewers in areas such as celebrity impersonation and common fraud cases. The company plans to integrate AI models further into its content review processes.