Anyone with a cell phone knows that robocalls are a common nuisance. But these days, a robocall can be generated by artificial intelligence (AI), and the voice on the other end can sound like the president of the United States.
As AI technology improves and becomes part of daily life, this type of fraud will become more common. And its scope will extend beyond a phone call from an AI “Joe Biden.” Something similar has already happened: a political action committee affiliated with Florida Gov. Ron DeSantis used AI to recreate Donald Trump’s voice in an attack ad.
In response to such incidents, many have called for government action, and the Federal Communications Commission (FCC) was the first agency to answer the call.
In early February, the FCC confirmed that using AI-generated voices in robocalls is illegal. This applies to all forms of robocall fraud, including those related to elections and campaigns. Much of the media coverage framed the decision as the agency “banning” or “outlawing” the use of AI in robocalls, but in practice the decision simply confirmed that existing federal regulations already apply to AI-generated calls.
But as anyone with a phone knows, unwanted robocalls remain a common problem despite the FCC’s regulations, and there is little evidence to suggest this ruling will reduce the number of AI-assisted robocalls voters receive in the coming year. In fact, unwanted robocalls have been illegal since 1991, when Congress passed the Telephone Consumer Protection Act (TCPA), which prohibits calls using “artificial or prerecorded voices” without the consent of the recipient.
As a result, Americans must prepare to protect themselves from bad actors who use robocalls to spread election misinformation.
The ruling is also a reminder that government regulations are ineffective at stopping bad actors from taking advantage of the public. Over the past five years, Americans have received an average of 50 billion robocalls annually that violate the TCPA. There are various explanations for why the ban has failed, ranging from a lack of enforcement authority to outdated definitions, but the broader point is that government-imposed bans rarely work as expected. Policymakers at all levels of government are unlikely to change this dynamic: they will respond to public pressure to do something about AI in elections by proposing new prohibitions or restrictions, rather than trying to make existing regulations work as intended.
If government regulations fail to stem the tide of robocalls this election season, the burden will fall on all of us to remain ever-vigilant against attempts to deceive. Extensive media coverage of AI this election cycle has increased public awareness, and technology companies have committed to helping inform voters through recently announced agreements to combat the deceptive use of AI in elections.
It is these light-touch efforts that hold the most promise for empowering individual voters and combating efforts to use AI to disrupt the 2024 election.
Thanks to AI, the next “Joe Biden” robocall or Donald Trump deepfake is just a keystroke away. For voters, the first line of defense against these attempts at deception is to assume that government efforts to protect us from them will fail.
Chris McIsaac is a Fellow in the Governance Program at the R Street Institute.
Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.