Meta, Mark Zuckerberg's company and the parent of Instagram, has issued an apology after users reported a flood of violent and sexual content appearing in their Instagram Reels feeds. Some of the disturbing content served up by the company's algorithms should never have been permitted on the platform, let alone recommended to unsuspecting users.
CNBC reports that Meta, the Mark Zuckerberg-led parent company of Facebook and Instagram, has confirmed and corrected an "error" that pushed waves of intrusive content into users' Instagram Reels feeds. The issue came to light when many Instagram users took to social media to express concern about the sudden influx of violent, sexual, and otherwise inappropriate content.
In a statement shared with CNBC, a Meta spokesman apologized for the mistake. The company emphasized that it works to protect users from encountering such images and removes particularly violent or graphic content in accordance with its policies.
Despite having Instagram's "Sensitive Content Control" feature set to its strictest level, some users claimed they were still exposed to offensive content. Meta's policies prohibit content such as dismemberment, visible internal organs, burned bodies, and videos featuring sadistic remarks directed at images of human or animal suffering. However, the company allows some graphic content if it helps users raise awareness of important issues such as human rights abuses, armed conflicts, and terrorist acts.
CNBC was able to view several posts on Instagram Reels that appeared to show dead bodies, graphic injuries, and violent assaults. Meta claims that, using internal technology, including artificial intelligence and machine learning tools, alongside a team of more than 15,000 reviewers, it detects and removes disturbing images, taking down the majority of violating content before users report it.
Read more at CNBC here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.