CEO of WPP falls victim to deepfake scam

WPP’s CEO was impersonated in an elaborate deepfake scam in which his voice was cloned to solicit money and personal information from employees.

Mark Read, CEO of WPP, a London-based communications and advertising company whose clients include Dell, Wendy’s, Victoria’s Secret and Coca-Cola, had his voice cloned and his likeness stolen by fraudsters who created a WhatsApp account that appeared to be his own.

The scammers used a publicly available photo of Read as the account’s profile picture to deceive other users, according to an email sent to WPP executives describing the scam, which was previously reviewed by the Guardian.

WPP CEO Mark Read’s voice and likeness were stolen as part of an elaborate deepfake scam designed to trick fellow leaders at the advertising giant into handing over personal information and funds. Reuters

The WhatsApp account was used to set up a Microsoft Teams meeting with another WPP executive.

During the meeting, the scammers deployed an AI-generated fake video of Read (also known as a “deepfake”) along with a clone of his voice.

According to the Guardian, they also used the meeting’s chat function to impersonate Read and target an “agency leader” at WPP, which has a market capitalization of around $11.3 billion, demanding money and other personal information.

“Fortunately the attackers were not successful,” Read wrote in an email obtained by the Guardian.

“We all need to be wary of techniques that go beyond email to exploit virtual meetings, AI, and deepfakes.”

A WPP spokesperson confirmed to the Post that the attempt to deceive company executives was unsuccessful.

A company representative added that the incident was prevented “thanks to the vigilance of our employees, including the executives involved.”

The scammers reportedly used Read’s photo to set up a WhatsApp account, which they then used to arrange a Microsoft Teams meeting and impersonate Read in communications with other WPP leaders. DIY13 – Stock.adobe.com

It was not immediately clear which other WPP executives were targeted in the plot or when the attempted attack took place.

A WPP spokesperson declined to provide further details about the fraud.

In the email, Read cited the myriad ways in which criminals impersonate real people, noting that “cyber attacks against colleagues, particularly senior executives, are becoming increasingly sophisticated,” according to the Guardian.

Read’s email included several bullet points advising recipients to watch out for red flags, including requests for passports, money transfers, and “secret acquisitions, transactions, and payments that no one knows about.”

WPP confirmed to the Post that the scammers were unsuccessful in defrauding its executives. AFP (via Getty Images)

“Just because there’s a picture of me on an account doesn’t mean it’s me,” Read said in the email, according to the Guardian.

When the Post reached out to WPP for comment, the company’s contact page displayed a notice that “the company’s name and the names of its agents are being used fraudulently by a third party.”

As deepfake images become a hotly debated topic among AI companies, deepfake audio is also on the rise.

Google has recently distanced itself from the dark side of AI, cracking down on the creation of deepfakes, most of which are pornographic, while ChatGPT maker OpenAI is reportedly considering allowing users to create pornographic and other explicit content with its AI tools.

However, deepfakes such as the graphic nude images of Taylor Swift would remain prohibited.

Deepfakes, which are predominantly fake pornographic images, have affected celebrities such as Taylor Swift and Bella Hadid, as well as US Rep. Alexandria Ocasio-Cortez. AFP (via Getty Images)

The company, run by Sam Altman, said it is “considering whether we can responsibly provide the ability to generate NSFW (not safe for work) content in age-appropriate contexts.”

“We look forward to gaining a deeper understanding of user and societal expectations for model behavior in this space,” OpenAI added, noting that examples may include “erotica, extreme gore, defamation, and unsolicited profanity.”

OpenAI’s foray into AI-generated adult content comes just months after it announced Sora, revolutionary new software that can generate high-quality videos from a few simple text prompts.

The technology represents a significant advance by the ChatGPT maker and could take concerns about deepfakes and the plagiarism of licensed content to a new level.
