The Meta Oversight Board is asking the technology company behind Facebook and Instagram to update how it tracks enforcement of its bans on Holocaust denial and false content, according to a decision released Tuesday.
The Oversight Board, which operates independently from Meta and is funded by grants from the company, overturned the company's decision to leave up an Instagram post that spread false and distorted information about the Holocaust, and issued policy recommendations alongside the ruling.
The board said the post included a meme of the SpongeBob SquarePants character Squidward, with false claims listed under a speech bubble titled "Fun facts about the Holocaust." Meta removed the post in August, after the board announced it had selected the case for review.
When the board took the case up for review, Instagram had allowed the post, which questioned the number of Holocaust victims and the existence of crematoriums at Auschwitz, to remain on the platform.
This content was originally posted to Instagram in September 2020, a month before Meta updated its hate speech guidelines to explicitly ban Holocaust denial.
In addition to reversing the content decision, the oversight board recommended that Meta build a system for labeling enforcement data, to help the company track Holocaust denial content and its enforcement of the ban.
According to the decision posted by the board, when the board asked Meta how effective its moderation systems were at removing Holocaust denial content, Meta said it "was unable to provide that" information.
As part of its review of the case, the board found that a COVID-19 automation policy, created at the start of the pandemic when Meta sent human reviewers home and its review capacity was reduced, was still in effect as of May 2023. The policy led to some reports of Holocaust denial content being automatically closed, the board said.
The oversight board asked the company to "publicly confirm" whether it has fully ended the COVID-19 automation policies introduced during the pandemic. The board questioned why a user's appeal of Instagram's decision to leave the post up was automatically closed in May 2023, shortly after the World Health Organization and the United States declared that COVID-19 was no longer a public health emergency.
The board's decision said the post at the center of the case had been reported as hate speech six times, four of them before the policy update banning Holocaust denial.
Two of the six reports led to human review; the rest were assessed by automation and either rated as non-violating or automatically closed under Meta's COVID-19 automation policy.
The more recent of the two human reviews took place in May 2023. The board said the user who reported the content appealed Meta's decision to leave the post up, but the appeal was automatically closed.
"We welcome the board's decision on this case," Meta said in a statement.
"The board overturned Meta's original decision to leave this content up. Meta previously removed this content, so no further action will be taken on it."
"In accordance with the bylaws, we will also initiate a review of identical content with parallel context. If we determine that we have the technical and operational capacity to take action on that content as well, we will do so promptly," the company added.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.