
Microsoft scrambles to update its free AI software after Taylor Swift deepfakes scandal

Microsoft has cracked down on the use of its free AI software after the tool was linked to sexually explicit deepfake images of Taylor Swift that took social media by storm, raising fears of a lawsuit by the outraged singer.

The tech giant is pushing an update to its popular tool called Designer, a text-to-image program powered by OpenAI’s DALL-E 3, adding “guardrails” to prevent non-consensual use of photos, the company said.

A fake photo showing a naked Swift surrounded by Kansas City Chiefs players, a nod to her highly publicized romance with Travis Kelce, was traced back to Microsoft’s Designer AI before circulating on X, Reddit and other websites, technology-focused site 404 Media reported on Monday.

“We are investigating these reports and are taking appropriate steps to address them,” a Microsoft spokesperson told 404 Media, which first reported on the update.

“We have large teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users,” the spokesperson said. The company also said that, in accordance with its code of conduct, Designer users who create deepfakes will lose access to the service.

Microsoft has removed the ability to create AI-generated nude images of Taylor Swift with its Designer tool after a deepfake photo of the singer at a Kansas City Chiefs game, apparently referencing her relationship with Travis Kelce, circulated on social media. Getty Images

A Microsoft representative did not immediately respond to The Post’s request for comment.

The update comes as Microsoft CEO Satya Nadella said technology companies need to “act quickly” to crack down on misuse of artificial intelligence tools.

Nadella, whose company is a major investor in ChatGPT creator OpenAI, called the spread of fake pornographic images of the “Cruel Summer” singer “alarming and terrible.”

“We have to act. And frankly, all of us on technology platforms, regardless of what your position is on a particular issue,” Nadella said in an interview with NBC Nightly News that airs on Tuesday.

“I don’t think anyone wants an online world that isn’t completely safe for both content creators and content consumers.”

Swift’s deepfake was viewed more than 45 million times on X and was eventually removed after about 17 hours.

According to a report in the Daily Mail, sources close to Swift were appalled that the images spread even though X’s help center outlines a policy prohibiting the posting of “synthetic and manipulated media” and “non-consensual nudity,” saying the social media platform “let them get away with it” in the first place.

Over the weekend, Elon Musk’s social media platform took the unusual step of blocking search results that include Swift’s name, even harmless ones.

Microsoft is adding more guardrails to its artificial intelligence image generators after Chief Executive Satya Nadella warned that tech companies need to “act quickly” to crack down on AI abuses. Getty Images

X executive Joe Benarroch described the move as “a temporary measure, made out of an abundance of caution as safety is our priority in this matter.”

The ban remained in effect on Monday.

The controversy could create new headaches for Microsoft and other AI leaders, which already face intense legal, legislative and regulatory scrutiny over the fast-growing technology.

White House press secretary Karine Jean-Pierre called the deepfake trend “very alarming” and said the Biden administration “will do everything we can to address this issue.”

The rise of AI deepfakes could emerge as a key theme when Meta CEO Mark Zuckerberg, TikTok CEO Shou Chew and other prominent technology executives testify before a Senate committee later this week.

Lawmakers from New York and New Jersey are working to make the nonconsensual sharing of AI-generated pornographic images a federal crime, punishable by jail time, fines, or both. AFP via Getty Images

Earlier this month, Representatives Joseph Morelle (D-N.Y.) and Tom Kean (R-N.J.) reintroduced a bill that would make the nonconsensual sharing of digitally altered pornographic images a federal crime, punishable by prison terms, fines, or both.

The bill, aimed at preventing deepfakes of intimate images, has been referred to the House Judiciary Committee, which has not yet taken it up.
