Research released Friday by the Center for Countering Digital Hate (CCDH) found that it is easy for artificial intelligence programs to imitate the voices of politicians such as President Biden and former President Trump, raising the risk of increased voter misinformation.
In CCDH tests, the AI-powered tools produced convincing false statements in imitated voices about 80% of the time.
“The lack of guardrails around these tools and the level of skill required to use them are so low that virtually anyone can easily manipulate these platforms and generate dangerous political misinformation,” CCDH CEO Imran Ahmed said in a statement.
Fake voices have already been used to influence voters in the 2024 election. Ahead of the New Hampshire Democratic primary in January, robocalls using a fake Biden voice urged voters to stay home and skip the contest.
The calls were orchestrated by Steve Kramer, who said he was inspired by the need to warn the public about the dangers of AI. Kramer was indicted last week on 13 felony counts of voter suppression and 13 misdemeanor counts of candidate impersonation, and the Federal Communications Commission proposed a $6 million fine against him.
The FCC banned the use of AI-generated voices in robocalls following the New Hampshire primary scandal, and the commission’s chair last week moved to require that television ads disclose their use of AI.
“As artificial intelligence tools become more accessible, the Commission wants to ensure that consumers are fully informed when the technology is being used,” FCC Chair Jessica Rosenworcel said in a statement last week. “Today, I shared with my colleagues a proposal to clarify that consumers have a right to know when AI tools are being used in the political ads they see, and I look forward to them acting on this issue swiftly.”
The CCDH study found that of the six AI tools it tested (ElevenLabs, Speechify, PlayHT, Descript, Invideo AI and Veed), few had built-in safeguards to prevent the generation of political disinformation.
The group tested the tools on the voices of a number of politicians, including Biden and Trump, as well as foreign leaders such as British Prime Minister Rishi Sunak and French President Emmanuel Macron.
According to the CCDH, examples of the messages generated include one in which Trump warned people not to vote due to bomb threats, another in which Biden claimed the election results had been rigged, and one in which Macron “confesses” to misusing election funds.
According to the CCDH study, only one tool, ElevenLabs, blocked attempts to generate fake statements in the voices of US and UK politicians.
“AI tools dramatically reduce the skill, funding and time required to create disinformation using the voices of some of the world’s most well-known and influential political leaders,” Ahmed said. “This could have devastating consequences for our democracy and elections.”
“This voice cloning technology can and will inevitably be weaponized by bad actors to deceive voters and subvert the democratic process,” he continued. “It is simply a matter of time before Russia, China, Iran, and domestic anti-democratic forces sow chaos in our elections.”
AI is “augmenting” threats to our electoral system, tech policy strategist Nicole Shneidman told The Hill in March. “Disinformation, voter suppression. What generative AI is really doing is making those threats more effective at being carried out.”
AI-generated political ads are already making inroads in the 2024 election: Last year, the Republican National Committee released an entirely AI-generated ad aimed at portraying a dystopian future under a second Biden administration, featuring fake but realistic photos of boarded-up storefronts, armored soldiers patrolling the streets, and waves of panic-inducing migrants.
The Indian election, where a recent AI-generated video falsely portrayed a Bollywood star criticizing the prime minister, illustrates a trend that technology experts say is occurring in democratic elections around the world. The CCDH noted there have been similar attempts at election manipulation in the UK, Slovakia and Nigeria.
The issue has also prompted some in Congress to act: Sens. Amy Klobuchar (D-Minn.) and Lisa Murkowski (R-Alaska) introduced legislation earlier this year that would require disclosures similar to the FCC’s proposal when AI is used in political advertising.