The head of the world’s largest advertising group has been targeted in an elaborate deepfake scam that used an artificial intelligence voice clone. WPP CEO Mark Read detailed the attempted fraud in a recent email to leadership and warned employees across the company to be wary of calls claiming to come from top executives.
Emails obtained by the Guardian show that the scammers created a WhatsApp account using a publicly available image of Read as the profile picture, then used it to set up a Microsoft Teams meeting that appeared to include him and another senior WPP executive. During the meeting, the impostors deployed a voice clone of the executive along with YouTube footage of him, while impersonating Read off-camera through the meeting’s chat window. The scam, which was unsuccessful, targeted “agency leaders,” asking them to set up a new business in an attempt to solicit money and personal details.
“Fortunately, the attackers were not successful,” Read wrote in the email. “We all need to be wary of techniques that go beyond email to exploit virtual meetings, AI, and deepfakes.”
A WPP spokesperson confirmed in a statement that the phishing attempt was unsuccessful: “Thanks to the vigilance of our employees, including the executives involved, the incident was thwarted.” WPP did not respond to questions about when the attack took place or which executives other than Read were involved.
While concerns about deepfakes previously centered on online harassment, pornography, and political disinformation, the number of deepfake attacks in the corporate world has skyrocketed over the past year. AI voice clones have fooled banks, defrauded financial companies of millions of dollars, and alarmed cybersecurity departments. In one high-profile case, an executive at the defunct digital media startup Ozy pleaded guilty to fraud and identity theft after reportedly using voice-altering software to impersonate a YouTube executive in an attempt to trick Goldman Sachs into investing $40 million in 2021.
The fraud attempt at WPP similarly appeared to use generative AI for voice cloning, but it also relied on simpler techniques, such as taking a publicly available image and setting it as a contact’s display picture. The attack is representative of the growing range of tools fraudsters now have at their disposal to mimic legitimate corporate communications and impersonate executives.
“We are seeing increasing sophistication in cyberattacks against our colleagues, particularly against senior executives,” Read said in the email.
Read’s email listed several red flags to watch for, including requests for passports, money transfers, and references to “secret acquisitions, transactions, and payments that no one knows about.”
“Just because the account has a picture of me doesn’t mean it’s me,” Read said in the email.
WPP, a publicly traded company with a market capitalization of about $11.3 billion, also says on its website that it is dealing with fake sites using its brand name and is cooperating with the relevant authorities to stop the fraudulent activity.
A pop-up message on the company’s contact page warns: “Please be aware that the names of WPP and its agencies have been used fraudulently by third parties on unofficial websites and apps, often communicating through messaging services.”
Many companies are grappling with the boom in generative AI, directing resources toward the technology while also confronting its potential harms. WPP announced last year that it was partnering with chipmaker Nvidia to create ads using generative AI, touting the technology as a game-changer for the industry.
“Generative AI is changing the world of marketing at an incredible rate. This new technology will change the way brands create content for commercial purposes,” Read said in a statement last May.
In recent years, low-cost audio deepfake technology has become widely available and increasingly convincing. Some AI models can generate a realistic imitation of a person’s voice from just a few minutes of easily obtained audio, allowing scammers to create manipulated recordings of almost anyone.
The rise of deepfake audio has targeted political candidates around the world, but it has also crept toward other, less obvious targets. The principal of a school in Baltimore was placed on leave this year after an audio recording appeared to capture him making racist and antisemitic comments; it turned out to be a deepfake made by one of his colleagues. Audio bots have also impersonated Joe Biden and former presidential candidate Dean Phillips.




