
Deepfakes raise alarm about AI in elections

Experts, officials and observers say that in 2024 it will become easier to use artificial intelligence (AI) and spread synthetic content that could fuel disinformation and confuse voters in a critical election year. They are similarly sounding the alarm about the dangers posed by deepfakes.

Last week, a local Arizona newsletter published an AI-generated deepfake video of Senate candidate Kari Lake to alert readers to “how good this technology is.” In Georgia, lawmakers advancing a bill to ban deepfakes in political communications played a fabricated endorsement clip of a colleague to make their case.

Nicole Schneidman, a technology policy strategist at the nonprofit watchdog group Protect Democracy, said AI could drive a “massive increase” in threats to election systems. “Disinformation, voter suppression — what generative AI is really doing is making those threats more efficient.”

This advanced technology, which can generate images, audio and video, and digitally alter portraits and voices, is evolving rapidly, and academics and lawmakers are scrambling to keep up.

And voters are trying to navigate an election landscape where it becomes harder every day to assess the credibility of photos, posts and videos.

“We’re already at the point where we don’t believe voters can rely on their senses to tell the difference between synthetic and authentic content,” Schneidman said.

In Arizona, a Substack newsletter last week sought to emphasize that “any idiot with a computer” can create and distribute relatively convincing deepfake content for free.

“Hello, I’m Kari Lake. Subscribe to the Arizona Agenda for hard-hitting real news and a preview of the scary artificial intelligence coming in the next election. This video shows just how good this technology is getting — it’s an AI deepfake created by the Arizona Agenda to show you,” the Senate candidate’s face and voice say in the video.

Newsletter author Hank Stephenson asked readers to consider that “even after our deepfake Kari Lake told you she was fake, it took a second for your brain to catch up.”

A second video shows a rendering of Lake explaining how face-swapping, voice-cloning and lip-syncing technology works.

What might have required a studio budget and a production team to create a few years ago can now be assembled by an average user in just a few clicks, said Barry Burden, professor of political science and director of the Elections Research Center at the University of Wisconsin-Madison. The proliferation of social media platforms, he added, allows fabricated content to be widely disseminated with little formal vetting.

“I think the risk increases as we get closer to Election Day, because it could affect voters and the outcome of the election and may not be detected or corrected until after the votes are counted,” Burden said.

The narrow window until November also leaves a tight timeline for enacting new legal regulations.

In Georgia, the state legislature has advanced a bill that aims to crack down on “substantially deceptive media” in political communications, meaning content that appears to depict “the words and actions of real individuals that did not occur in reality.”

Rep. Brad Thomas, a Republican, presented the bill in the Georgia Senate last week alongside an AI-generated audio clip that made opponents of the bill appear to endorse it, warning that the technology could be used to misrepresent officials’ positions or spread false election information.

“Some people learn the hard way, and a picture is worth a thousand words,” Thomas told The Hill when asked about playing the deepfake for his fellow lawmakers. State Sen. Colton Moore (R), whose voice was featured in the clip, has opposed the bill online as an attack on “memes” and free speech.

“I think the demonstrations by people who are creating deepfakes to show what the technology can do will be very helpful, because I don’t think the public, and many lawmakers, understand how rapidly this technology is advancing,” Burden said.

In January, a robocall imitating President Biden’s voice urged thousands of New Hampshire voters not to vote in the Granite State primary.

Schneidman called the New Hampshire robocall a “milestone” example of how synthetic content can be deployed for voter suppression. The Associated Press reported that the man behind the call claimed he was trying to warn people about AI, not to affect the race.

In yet another example of AI’s growing presence in the election landscape, AI-generated images of Black voters supporting former President Trump have circulated online as he courts the key demographic ahead of his November showdown with Biden.

“We are entering the first AI election in history, and the information ecosystem will be filled with fake videos, images, audio, robocalls and more, and voters will not know what to believe,” said Jonathan Mehta Stein, executive director of California Common Cause, a nonprofit watchdog group.

But while the fake Biden robocall was reported in national media and quickly debunked, Stein said he is more concerned about how AI could affect government and elections at the local level, where similar calls could go unchecked.

“I think the power of generative AI to sway local and state elections is really significant, especially in an era where local news organizations are in decline. It could be even more extreme than the threat to democracy at the national level,” Stein said.

Hany Farid, a professor at the University of California, Berkeley’s School of Information, said another concern is that AI will be used as a scapegoat.

“You can create fake content to harm a candidate or deter people from voting. But also, when you actually do get caught doing something stupid, illegal or embarrassing, you scream, ‘It’s fake!’ There is no shared reality anymore, right? Now everything is in doubt,” said Farid, who runs a project tracking deepfakes in the 2024 election cycle.

“Before, if there was a video of you saying something, that was it. There was no further discussion. But that is no longer true,” Farid said. “It’s a really dangerous world we’re entering into, and no one knows what to believe anymore.”

Activists and policymakers are approaching AI from a variety of angles, advocating for digital literacy and pushing for laws and guidelines.

Biden issued an executive order on AI last year that included plans to create guidelines for content authentication and watermarking, and the Federal Communications Commission moved last month to target AI-generated robocalls after the New Hampshire incident. A group of tech companies also pledged last month to combat deceptive uses of AI in this year’s elections.

The California Initiative for Technology and Democracy, a project of California Common Cause, has sponsored a package of state-level legislation that includes a plan to require social media platforms to label deepfakes, which the group hopes will spur similar action nationwide. In Wisconsin, a law passed just last week requires disclaimers on political ads in the state that use AI.

Experts emphasized that while AI can be used for malicious purposes, the tools themselves are not inherently harmful and could even be an asset for campaigns crafting messages and developing content.

“The idea is to address the harm, not the technology,” said Matt Perault, director of the Center on Technology Policy at the University of North Carolina at Chapel Hill.

But despite the warnings, Schneidman stressed that concerns about AI should not spiral into broader doubts about the U.S. election system.

“Voters should be aware of the emergence of generative AI and the fact that they are likely to encounter synthetic content related to elections this cycle, but that should not call into question the integrity of election administration in this country,” she said, directing voters to local election officials for reliable information before they cast their ballots.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.
