Meta’s Oversight Board is reportedly investigating the company’s policies and past actions regarding explicit AI-generated images of women and deepfakes, according to PC Mag.
Oversight Board investigation

The board cited two specific examples in which explicit AI-generated images of real women were posted to Facebook and Instagram. One reportedly occurred in India and the other in the United States.
In one case, an AI-generated photo of a naked woman being molested by a man was posted to a Facebook group dedicated to AI-generated content. That post was eventually deleted. The board chose not to reveal the woman’s identity to protect her privacy and prevent further harassment. However, the woman in question is known to be a prominent American.
According to PC Mag, the image of the nude woman was submitted to a media-matching database so it could be detected if it appeared on other Meta-owned platforms.
The Oversight Board noted that the second incident involved “an AI-generated image of a naked woman” posted on Instagram. The image was created using artificial intelligence to resemble an Indian celebrity. The account that posted the content shares only AI-generated images of Indian women, and most of the users who responded had accounts in India, where deepfakes have become a growing problem.
The woman reported the content to Meta as pornographic, but because the company did not act on the report within 48 hours, it was automatically closed and the image was left up. She appealed Meta’s initial decision to leave the content alone, but that appeal was also automatically closed.
“The users then appealed to the board,” according to the oversight board’s own report. “As a result of the Board’s selection of this matter, Meta determined that its decision to leave the content up was a mistake and removed the post for violating our Community Standards for Bullying and Harassment.”
BBC report

Deepfakes are becoming a serious problem in India, with most targeting celebrities.
AI expert Aarti Samani told the BBC that “Hollywood has so far borne the brunt,” with actresses such as Natalie Portman and Emma Watson among the high-profile victims.
Samani added that AI has made it easy to create fake videos and audio of people.
“The tools have become much more sophisticated over the past six months to a year, which explains why we’re seeing more of this content in other countries,” Samani said.
“There are now many tools available that allow you to create realistic composite images at almost no cost, making them very accessible.”
Samani went on to say that India faces unique issues with deepfakes: a large youth population has driven increased social media use, along with “a fascination with Bollywood and an obsession with celebrity culture.”
“Bollywood celebrity content makes for attractive clickbait and generates significant advertising revenue. Data on the people who engaged with the content can also be sold without their knowledge.”