Elon Musk’s X said it would hire 100 full-time staff to crack down on child sexual exploitation, after explicit AI-generated images of Taylor Swift went viral on social networks last week.
The San Francisco-based company, formerly known as Twitter, announced on Friday that it would build a “trust and safety center” in Austin, Texas, and hire “in-house agents” tasked with enforcing the site’s content and safety rules.
“While X does not have a child-focused line of business, it is important that we make these investments to continue to prevent criminals from using our platform to distribute and engage with CSE content,” said Joe Benarroch, X’s head of business operations.
Musk, who acquired the site formerly known as Twitter for $44 billion in late 2022, has been criticized for cutting jobs from the company’s trust and safety operations in the name of allowing free speech.
X has come under fire in recent weeks over the spread of antisemitic and neo-Nazi content on the platform, which has prompted some advertisers to flee the site.
X was forced to block some searches for Swift in recent days after pornographic deepfake images of the singer circulated online.
Attempting to search for her name without quotation marks on the site on Monday produced an error message asking users to retry the search, adding, “Don’t worry, it’s not your fault.”
However, putting quotation marks around her name allowed posts mentioning her to appear.
Last week, explicit and abusive fake images of Swift began circulating widely on X, making her the most high-profile victim of a scourge that tech platforms and anti-abuse organizations are struggling to solve.
“This is a temporary action and is being taken out of an abundance of caution as we prioritize safety on this matter,” Benarroch said in a statement.
Unlike the crudely doctored images that have plagued celebrities in the past, the Swift images appear to have been created using an artificial intelligence image generator that can instantly produce new images from a written prompt.
After the images began circulating online, the singer’s devoted “Swifties” fanbase quickly mobilized, launching a counteroffensive on X and flooding the #ProtectTaylorSwift hashtag with more positive images of the pop star.
Some said they were reporting accounts that shared the deepfakes.
The deepfake-detection group Reality Defender said it tracked a deluge of non-consensual pornographic material depicting Swift, particularly on X.
Some images also made their way to Meta-owned Facebook and other social media platforms.
Researchers discovered at least a dozen unique AI-generated images.
The most widely shared images were football-related, showing a painted or bloodied Swift that objectified her and, in some cases, depicted violent harm to her deepfake persona.
The Swift images appear to have come from an ongoing campaign that began on fringe platforms last year, according to the digital investigations company Memetica.
One of the Swift images that went viral last week had appeared online as early as Jan. 6, the company said.
While most commercial AI image generators have safeguards to prevent abuse, commenters on anonymous message boards discussed tactics for evading moderation, particularly in Microsoft Designer’s text-to-image tool, Decker said.
“This is part of a long-standing adversarial relationship between trolls and platforms,” Decker said.
“As long as platforms exist, trolls will try to disrupt them. And as long as there are trolls, platforms will be disrupted. So the question is, how many more times will this happen before serious changes are made?”
X’s move to restrict searches for Swift is probably a stopgap measure, he said.
“When you don’t know where all of it is and can’t guarantee everything has been taken down, the easiest thing you can do is limit people’s ability to search for it,” he said.
Researchers say the number of explicit deepfakes has grown in recent years as the technology used to create such images has become more accessible and easier to use.
A 2019 report by the AI company DeepTrace Labs found that such images are overwhelmingly weaponized against women.
Most of the victims were Hollywood actors and South Korean K-pop singers.
Separately, new legislation in the European Union includes provisions addressing deepfakes.
The Digital Services Act, which took effect last year, requires online platforms to take measures to limit the risk of spreading content that violates “fundamental rights” such as privacy, including “non-consensual” images and deepfake pornography.
The 27-nation bloc’s artificial intelligence act, which is still awaiting final approval, would require companies that create deepfakes with AI systems to disclose that the content is artificially generated or manipulated.
With Post wires