AI voice scammers are posing as loved ones to steal your money — here’s a foolproof trick to stop attacks

Here’s how to thwart a real-life clone attack.

The light-speed advances in artificial intelligence have made people easy prey for increasingly prevalent AI voice fraud. Fortunately, tech experts have revealed a sure-fire way to tell humans from these digital phone clones: ask for a safe word.

“I like the idea of a code word because it’s simple and can’t be easily subverted, if the caller has the clarity of mind to remember to ask the question,” Hany Farid, a professor at the University of California, Berkeley, told Scientific American.

“Right now, there’s no other obvious way to know that the person you’re talking to is who they say they are,” said Hany Farid, an audio deepfake expert at the University of California, Berkeley. Chee Xiong Te – Stock.adobe.com

Given the plethora of AI phone scams in which cybercriminals use cheap AI tools to parrot the voices of family members and trick people into handing over bank account numbers and other valuable information, stopping these clones has become paramount.

The cloned audio is often reconstructed from the briefest sound bites scraped from a victim’s social media videos.

Among the most notorious of these AI spoofing schemes are virtual kidnapping scams, in which phone scammers convince recipients that a relative is being held hostage and threaten to harm them unless a specified ransom is paid.

This cybernetic catfishing software is so cutting edge that the AI’s voice is often indistinguishable from that of your loved one.

“I never doubted for a second it was her,” said Arizona mother Jennifer DeStefano, recalling the bone-chilling incident in which a cyber fraudster cloned her daughter’s voice to demand a $1 million ransom.

To unmask these family impostors, no bot-sniffing dog is required: experts advise flipping the script on the clones by prompting them for a password.

This comes amid a rise in AI cloning scams, where criminals are using the technology to imitate family members and request money over the phone. Getty Images/iStockphoto

Experts cited by Scientific American suggest coming up with a special safe word or private phrase that only your family knows, and sharing it with each other in person.

If you receive a call asking for money in an emergency, ask the person on the other end for this offline password; a clone won’t know it, allowing you to spot the con.

“Right now, there’s no other obvious way to know that the person you’re talking to is who they say they are,” Farid said, advising that families give each other regular safe-word pop quizzes so everyone remembers it.
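For readers who like to see the logic spelled out, the safe-word check boils down to a shared-secret challenge. Below is a minimal Python sketch of that idea; the phrase, prompt, and function name are purely illustrative, not from any real anti-fraud tool.

```python
import hmac

# Purely illustrative: in real life the safe word lives offline, in family
# members' heads -- never anywhere a scammer's AI could scrape it.
FAMILY_SAFE_WORD = "purple rhinoceros"  # hypothetical phrase agreed on in person

def caller_is_verified(spoken_answer: str) -> bool:
    """Challenge the caller for the safe word and compare answers.

    hmac.compare_digest does a constant-time comparison; on a phone
    call the real protection is simply that a voice clone has no way
    of knowing the phrase at all.
    """
    return hmac.compare_digest(
        spoken_answer.strip().lower(),
        FAMILY_SAFE_WORD.lower(),
    )

if __name__ == "__main__":
    answer = input("What's our safe word? ")
    print("Verified." if caller_is_verified(answer) else "Possible clone -- hang up.")
```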

Who would have thought that cutting-edge AI could be foiled just like the older brother who blocks the hallway and demands a password?

These AI clones can use sound bites from social media videos to mimic family members’ voices to a T. Andreus K – Stock.adobe.com

Experts compared this method to the safe words parents teach their children to prevent kidnappers posing as friends from picking them up from school.

They say humans can even catch AI impostors using these codes honed in childhood.

Of course, safe words are not the only way to detect clones in sheep’s clothing.

Other red flags include unexpected calls demanding financial action, artificial-sounding background noise that seems to play on a loop, and inconsistencies in the conversation.

“Voice cloning technology often struggles to create consistent and contextually accurate conversations,” warns the digital payments company Takepayments. “If ‘that person’ contradicts themselves, provides information that doesn’t match what you know, or evades direct questions, that’s cause for concern.”

According to the site, cyber fraudsters often request payment in cryptocurrency because it makes it “impossible to trace the identity of the recipient.”

“Requests for funds via popular digital currencies such as Bitcoin or Ethereum should be treated as highly suspicious,” they wrote.
