Meta Oversight Board calls on company to investigate how content moderation changes could impact human rights

The Meta Oversight Board is asking the company to assess how recent changes to its content moderation policies affect the human rights of some users, including those from the LGBTQ community.

The Oversight Board issued 11 case decisions on Wednesday, the first to take into account the policy and enforcement changes announced by Facebook and Instagram’s parent company at the beginning of the year.

“Meta’s January 7, 2025, policy and enforcement changes were rushed out in a departure from regular procedure, and no public information has been shared about any prior human rights due diligence the company performed,” the board wrote in a release.

The board specifically points to Meta’s decision to remove protections for LGBTQ users from its hate speech rules amid a broader overhaul of its content moderation practices. Under the changes, users are now allowed to accuse LGBTQ individuals of being mentally ill, content that was previously banned.

“Given political and religious discourse about transgenderism and homosexuality, we allow allegations of mental illness or abnormality based on gender and sexual orientation,” Meta’s policy states.

“As these changes are rolled out globally, the board emphasizes that it is essential for Meta to identify and address any adverse impacts on human rights that may result from them,” the board wrote.

This includes examining the potential negative impacts on countries around the world, as well as on LGBTQ users, minors and immigrants, according to the release. The board recommended that Meta update the board on its progress every six months and publicly report its findings “soon.”

In the 11 cases it reviewed, the board considered issues of freedom of expression and noted that there is a “high threshold” for limiting speech under the international human rights framework.

For example, in two cases involving videos discussing gender identity, the board upheld Meta’s decision to leave up two posts about transgender people’s bathroom access and participation in athletic events in the US.

The board wrote that despite the intentionally provocative nature of the posts, which misgender identifiable trans people in a way many would find offensive, they related to issues of public concern and would not incite likely or imminent violence or discrimination.

The board also recommended that Meta improve enforcement of its bullying and harassment policies.

Meta CEO Mark Zuckerberg described the January change as “an effort to return to our roots, reduce mistakes, simplify our policies, and restore free expression.”

He also announced the elimination of Meta’s fact-checking program, replacing it with a community-based process in which users append notes or corrections to posts that may be misleading or missing context.

The fact-checking program officially ended in the US earlier this month, and Meta began testing its Community Notes feature last month. The feature uses X’s open-source algorithm as the rating system that determines whether a note is published.

The board said Meta should continuously assess the effectiveness of Community Notes compared to third-party fact-checking, particularly in situations where the rapid spread of false information poses a risk to public safety.

Meta has also for years used artificial intelligence technology to proactively detect and remove violating content before it is reported. The board said Meta should assess whether reducing its reliance on automated detection could have impacts worldwide, especially in countries facing crises.

The board operates independently of Meta and is funded by grants from the company. Its policy recommendations are non-binding, but if adopted, they could have a widespread impact on the company’s social media platforms.
