Meta issued an apology Wednesday night after an "error" in Instagram's recommendation algorithm surfaced violent and graphic videos in users' feeds.
The issue affected a wide range of users, including minors.
The disturbing content was recommended to users who had not sought it out, and included graphic depictions of people being shot, struck by vehicles and suffering horrific injuries.
Some videos carried "sensitive content" warnings, while others appeared without any restrictions.
An Instagram account belonging to a Wall Street Journal reporter was flooded with back-to-back clips of people being shot, crushed by machinery and violently thrown from amusement park rides.
These videos came from pages with names such as "BlackPeopleBeingHurt," "ShockThreat" and "PeopleDyingHub."
Several metrics on these posts suggest that Instagram's algorithm dramatically amplified their reach.
View counts on some of the videos ran into the millions, far exceeding those of other posts from the same accounts.
"We have fixed an error that caused some users to see content in their Instagram Reels feed that should not have been recommended," a Meta spokesperson said. "We apologize for the mistake."
Despite the apology, the company declined to specify the scale of the issue.
Even after Meta said the problem was resolved, Wall Street Journal reporters continued to see videos depicting shootings and fatal accidents late Wednesday night.
The grisly clips appeared alongside paid ads from law firms, massage studios and the e-commerce platform Temu.
The incident is especially notable as Meta continues to adjust its content moderation policies, particularly around the automated detection of offensive material.
In a statement issued January 7, Meta announced that it would change how certain content rules are enforced, citing concerns that past moderation practices had led to unnecessary censorship.
As part of the shift, the company said it would tune its automated systems to focus on "illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams."
For less serious violations, Meta indicated it would rely on users to report problematic content before taking action.
The company also acknowledged that its systems had been overly aggressive in demoting posts that might violate its standards, and said it was in the process of eliminating most of those demotions.
Meta has also adjusted AI-driven content suppression in some categories, but the company has not confirmed whether its violence and gore policies changed as part of these adjustments.
According to the company's transparency reports, Meta removed more than 10 million pieces of violent and graphic content from Instagram between July and September of last year.
Nearly 99% of that material was proactively flagged and removed by the company's systems before users reported it.
Still, Wednesday's incident left some users shaken.
Grant Robinson, 25, who works in the supply-chain industry, was among those affected.
“It's hard to understand that this is what I'm being served,” Robinson told the Journal.
“I saw 10 people die today.”
Robinson said similar videos appeared in the feeds of all of his male friends, ages 22 to 27, none of whom typically engage with violent content on the platform.
Many have interpreted these changes as an effort by Meta CEO Mark Zuckerberg to repair relations with President Donald Trump, who has been a vocal critic of Meta's moderation policies.
A company spokesperson confirmed on X that Zuckerberg had visited the White House earlier this month to discuss how Meta could help defend and advance American tech leadership abroad.
The changes to Meta's moderation strategy come after significant staffing reductions.
During a wave of tech-industry layoffs in 2022 and 2023, the company cut approximately 21,000 jobs, nearly a quarter of its workforce, including positions on its civic integrity and trust and safety teams.

