Taylor Swift's endorsement of Vice President Harris has shone a bright spotlight on artificial intelligence (AI) deepfakes, a fear that Swift said prompted her to take a public stance in the presidential race.
Swift, 34, formally endorsed Harris shortly after Tuesday night's debate, citing concerns about rapidly developing AI technology and its power to deceive. She specifically noted that former President Trump shared several fake images of her and her fans last month and claimed to have her endorsement.
“It really brought home my fears about AI and the dangers of spreading misinformation. It has led me to the conclusion that as a voter, I need to be very transparent about my actual plans for this election. The easiest way to fight misinformation is to tell the truth,” she wrote in her endorsement.
Experts say the admission by one of the world's most recognizable superstars highlights broader concerns voters and public figures have about AI and its impact on the 2024 election.
Trump's sharing of the fake image of Swift isn't the first time he's encountered AI-generated content.
Earlier this year, fake sexually explicit images of Swift circulated online, sparking renewed calls from federal lawmakers for social media companies to support policing AI-generated material. At the time, social platform X temporarily blocked searches for Swift to combat the spread of the fake images.
“She has experienced two of the most widespread [abuses of AI], deepfakes of intimate images and now election-related deepfakes, which just highlights that no one is immune,” said Lisa Gilbert, co-director of the nonprofit Public Citizen, a progressive consumer rights watchdog group.
Gilbert told The Hill that the pop star “appreciates the enormous harm” caused by AI spreading misinformation, including in elections, and called for federal regulation to address the issue.
Experts suggested the singer's massive endorsement could be a unique opportunity to raise awareness of the spread of misinformation ahead of the November election.
Swift's support for Harris, and her stance on AI, was broadcast to her roughly 284 million Instagram followers, placing her among the 15 most followed people on the social media platform.
“Taylor Swift has such a large platform that I think her visibility here will be powerful, at least as a starting point,” Julia Fiehler, a digital literacy expert at Virginia Tech, told The Hill.
“If it's not really on your radar that what you're looking at might be generated content, just knowing that it's a possibility goes a long way and makes a difference when you see something and you go, 'Hmm, I don't know about that.' I think a lot of people who saw what she said will remember that the next time they see something.”
Fiehler noted that the technology is still new to many users, and Swift's support is a strong call for people to exercise caution when viewing content that may already align with their biases.
Swift's emphasis on AI in her endorsement statement may also bring more attention to the issue: mentioning AI in the second of five paragraphs “gives weight and value to the ethics of online political advertising” and brings “well-deserved attention” to cyber abuse, Laurel Cook, an associate professor of marketing at West Virginia University and founder of the Social Technology Research Lab, told The Hill.
“As this conversation continues, the potential benefits of inappropriate use of digitally altered content fade away; in other words, stealing other people's images and likenesses will soon no longer be profitable,” Cook said.
Ashley Spillane, founder of the Civic Responsibility Project, agreed there was a purpose behind Swift's message.
“I think she's very thoughtful and skilled in how she communicates with the community. I think she's really been very careful with her communications to make sure they resonate with the community,” Spillane said, citing a recent Harvard University study on the influence of celebrity endorsements.
Swift's concerns join those of other Hollywood stars who have been public about what they feel are insufficient safeguards surrounding the rapidly developing technology. In June, actress Scarlett Johansson said she was “shocked” to learn that OpenAI's ChatGPT had deployed an AI assistant that she claimed sounded “eerily similar” to her voice.
Some experts say celebrities, unlike most of the general public, have the advantage of having platforms that can quickly expose deepfake imagery and AI-generated content.
“Taylor Swift has such a large megaphone that she is able to protect herself to a certain extent by communicating publicly with her followers and others in a way that not everyone can,” said Jennifer Rothman, a law professor at the University of Pennsylvania. By contrast, a New Jersey teen who was the victim of a sexually explicit deepfake drew widespread attention only after testifying before Congress.
Rothman, who specializes in intellectual property law, said there are already a variety of laws in place regarding the unauthorized use of a person's name, voice and likeness, as well as laws directly targeting fake intimate images, but she explained that these laws can be “very difficult” to navigate for people who don't have access to the right resources.
“Federal legislation specifically targeting these digital replicas has been considered and proposed, and various approaches are being considered,” Rothman said. “I believe these uses are covered under current law, but for a variety of legal reasons it would be nice to have some federal legislation that addresses the issue.”
She stressed that Congress's legislation on AI-generated content is still in its “early stages” and that lawmakers need to avoid taking steps that “make things worse.”
“It would be very problematic for a federal law to complicate things or suggest that under federal law, someone other than the owner of the voice or likeness can own that voice or likeness. And those are some of the ideas that have been proposed. We don't want a law like that, but a more targeted law would make things easier and more efficient,” she said.
While concerns about AI's influence on elections appear to be growing, some experts believe the technology is not yet advanced enough to convince many voters ahead of November.
“The quality of some of the AI-generated content is not going to be compelling enough for at least the next few months,” said Clara Langevin, an AI policy expert at the Federation of American Scientists, “but we could get to a point, like the next election cycle, where it becomes hard to tell AI-generated content apart from the real thing.”