Artificial intelligence: a real boon for scammers.
AI tools are being exploited to send "hyper-personalized emails" so sophisticated that victims cannot tell they are fraudulent.
According to the Financial Times, AI bots gather information about unsuspecting email users by "analyzing their social media activity to determine what topics they are most likely to respond to."
The user is then sent a fraudulent email that appears to come from a family member or friend. Because the message is so personal, the recipient cannot tell that it is actually malicious.
"Things are getting worse and more personal, and this is why we suspect AI is behind a lot of it," said Christy Kelly, chief information security officer at insurance agency Beasley.
“We're starting to see very targeted attacks that collect huge amounts of information about individuals.”
"AI makes it easier for cybercriminals to create more personalized and convincing emails and messages that appear to come from a trusted source," security firm McAfee recently warned. "We expect these types of attacks to become increasingly sophisticated and frequent."
While many astute internet users know the tell-tale signs of traditional email scams, it is much harder to tell whether these new personalized messages are fraudulent.
Gmail, Outlook, and Apple Mail do not yet have "adequate defenses to stop this," Forbes reports.
ESET cybersecurity advisor Jake Moore told Forbes: "Social engineering has had a huge impact on people through human interaction, but now AI can apply the same tactics from a technical perspective. Unless people start thinking seriously about posting less content online, its impact is going to get harder and harder to reduce."
Bad actors can also use AI to create convincing phishing emails that mimic banks and other trusted account providers. More than 90% of successful breaches start with a phishing message, according to data from the U.S. Cybersecurity and Infrastructure Security Agency cited by the Financial Times.
Nadezhda Demidova, a cybercrime security researcher at eBay, told the Financial Times that these sophisticated scams can slip past the security filters meant to screen emails for fraud.
“The availability of generative AI tools lowers the threshold for entry into advanced cybercrime,” Demidova said.
McAfee warned in a recent blog post that 2025 will usher in a wave of advanced AI used to "create increasingly sophisticated and personalized cyber fraud."
Software company Check Point made similar predictions for the new year.
"By 2025, AI will power both offense and defense," Dr. Dorit Dor, the company's chief technology officer, said in a statement. "Security teams will rely on AI-powered tools tailored to their unique environments, while adversaries counter with increasingly sophisticated AI-driven phishing and deepfake campaigns."
To protect themselves, users should not click links in emails unless they can verify the sender's legitimacy. Experts also recommend using two-factor authentication and a strong password or passkey to harden account security.
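One common phishing trick the experts above allude to is a link whose hostname merely *contains* a trusted brand name (e.g. a bank's domain followed by an attacker-controlled suffix). As a minimal sketch of how such a check might work, the snippet below compares a link's hostname against a hypothetical allow-list; the domain names are invented for illustration, and a real check would also need certificate validation and the Public Suffix List:

```python
from urllib.parse import urlparse

# Hypothetical allow-list; a real deployment would maintain this centrally.
TRUSTED_DOMAINS = {"example-bank.com"}

def is_trusted_link(url: str) -> bool:
    """Return True only if the URL's hostname is a trusted domain or a
    subdomain of one. A simplified illustration, not a complete defense."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

# A genuine subdomain of the trusted domain passes:
print(is_trusted_link("https://login.example-bank.com/reset"))      # True
# The brand name embedded in an attacker's domain does not:
print(is_trusted_link("https://example-bank.com.evil.example/reset"))  # False
```

The second URL is the classic lure: to a hurried reader it starts with the bank's name, but the registered domain is the attacker's, which is exactly why experts advise verifying the sender rather than trusting how a link looks.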
"Ultimately," Moore told Forbes, "whether or not AI enhances the attacks, we need to keep reminding people about these increasingly sophisticated schemes, and to think twice before sending money or divulging personal information on request, no matter how believable that request seems."