What lawmakers are doing to protect deepfake victims like Taylor Swift

Even before pornographic and violent deepfake images of Taylor Swift began circulating widely in recent days, state legislatures across the country were exploring ways to quash such nonconsensual images of both adults and children.

But in this Taylor-centric era, the issue has gained more attention since she became the target of a deepfake, a computer-generated image that uses artificial intelligence to appear real.

Here’s what you need to know about what each state has done so far and what they’re considering.

Where deepfakes appear

Last year, artificial intelligence became more mainstream than ever, allowing people to create increasingly realistic deepfakes. They now appear online in various forms.

There’s porn that uses celebrities like Swift to create fake, compromising images.

Taylor Swift cheers during an NFL football game in Baltimore on January 28, 2024. The pop star has recently been the victim of deepfakes: realistic AI-generated images that placed her face into pornography. (AP Photo/Julio Cortez, File)

There’s music: a song that sounded like Drake and The Weeknd performing together was streamed millions of times, but it wasn’t those artists. The song has since been removed from streaming platforms.

And in this election year, there’s a dirty political trick: Just before January’s presidential primary, some New Hampshire voters reported receiving robocalls in a voice impersonating President Joe Biden telling them not to bother voting. The state attorney general’s office is investigating.

However, a more common situation is pornography using the likeness of non-celebrities, including minors.

What states have done so far

Deepfakes are just one area of a complex AI field that lawmakers are weighing whether, and how, to regulate.

At least 10 states have already enacted deepfake laws, and more measures are being considered in legislatures across the country this year.

Georgia, Hawaii, Texas, and Virginia have laws on the books that criminalize nonconsensual deepfake pornography.

California and Illinois have given victims the right to sue those who create images using their likeness.

Minnesota and New York do both. Minnesota’s law also covers the use of deepfakes in politics.

Is there a technical solution?

Siwei Lyu, a computer science professor at the University at Buffalo, said he is working on several approaches, but none are perfect.

One is a deepfake detection algorithm, which can be used to flag deepfakes on social media platforms and more.

Another, which Lyu said is in development but not yet widely used, would embed signals in the content people upload that would reveal whether it has been reused in AI creation.

And a third mechanism is to require companies that provide AI tools to include digital watermarks to identify the content generated by their applications.
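The watermarking idea can be illustrated with a deliberately simplified sketch: hide a short provenance tag in the least significant bits of an image's pixel values, changing each pixel by at most one unit. Real provenance systems (such as cryptographically signed metadata or robust invisible watermarks) are far more sophisticated; the function names and the "AI-GEN" tag below are purely illustrative assumptions.

```python
def embed_watermark(pixels, tag):
    """Hide an ASCII tag in the least significant bits of pixel values."""
    bits = [(byte >> i) & 1 for byte in tag.encode("ascii") for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def extract_watermark(pixels, length):
    """Recover a `length`-character ASCII tag from the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    data = bytes(
        sum(bit << (7 - j) for j, bit in enumerate(bits[i : i + 8]))
        for i in range(0, len(bits), 8)
    )
    return data.decode("ascii")

image = [128] * 64                      # stand-in for 64 grayscale pixel values
marked = embed_watermark(image, "AI-GEN")
print(extract_watermark(marked, 6))     # -> AI-GEN
```

A scheme this naive is trivially destroyed by recompression or cropping, which is one reason Lyu and others note that no single technical fix is perfect.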

He said it makes sense to hold these companies accountable for how people use their tools, and that they could enforce user agreements against creating problematic deepfakes.

What should the law say?

A model bill proposed by the American Legislative Exchange Council deals with pornography, not politics. The conservative, pro-business policy group encourages states to do two things: criminalize the possession and distribution of deepfakes depicting minors in sexual acts, and give victims the right to sue those who distribute nonconsensual sexual deepfakes of them.

“We encourage lawmakers to start with small, prescriptive amendments that address specific problems,” said Jake Morabito, who heads ALEC’s Communications and Technology Task Force. He warned lawmakers against targeting the underlying technology used to create deepfakes, because doing so could block innovation in other important applications.

Todd Helmus, a behavioral scientist at the nonpartisan Rand Corporation think tank, said it’s not enough to leave enforcement in the hands of individuals bringing lawsuits. Filing a lawsuit requires resources, he said, and the results may not be worth it. “It’s not worth suing someone who doesn’t have money to give you,” he said.

Helmus calls for system-wide guardrails, which he said will likely require government involvement to work.

He said OpenAI and other companies whose platforms can be used to generate realistic-seeming content should work to prevent deepfakes from being created, social media companies should have better systems in place to prevent their spread, and those who create or spread deepfakes should face legal consequences.

Jenna Leventoff, a First Amendment attorney with the ACLU, said that while deepfakes can cause harm, free speech protections also apply. Lawmakers trying to regulate the emerging technology, she said, should not go beyond existing exceptions to free speech, such as defamation, fraud, and obscenity.

Last week, White House press secretary Karine Jean-Pierre addressed the issue, saying social media companies should create and enforce their own rules to prevent the spread of false information and images like Swift’s.

What is being proposed?

In January, a bipartisan group of lawmakers introduced federal legislation that would give people a property right to their own likeness and voice, along with the right to sue anyone who uses them in a misleading way through a deepfake, for any reason.

Most states are considering some kind of deepfake bill in this year’s legislative session. They are being introduced by Democrats, Republicans, and a bipartisan coalition of lawmakers.

High-profile bills include one in Republican-majority Indiana that would make it a crime to create or distribute sexually explicit depictions of individuals without their consent. It passed the state House unanimously in January.

A similar measure introduced this week in Missouri is dubbed the “Taylor Swift Law.” And in South Dakota, where a measure passed the Senate this week, Attorney General Marty Jackley said some investigations have been turned over to federal authorities because the state lacks the AI laws needed to prosecute.

“When someone goes on a person’s Facebook page, steals their child’s image and puts it into pornography, there’s no First Amendment right to do that,” Jackley said.

What can a person do?

For those with an online presence, it can be difficult to avoid becoming a victim of deepfakes.

But Rand’s Helmus said people who realize they’ve been targeted can ask the social media platforms where the images are shared to take them down. They can report the images to police if they live somewhere with an applicable law, tell school or university officials if the alleged perpetrator is a student, and seek mental health help if needed.
