
How AI chatbots are helping hackers target your bank accounts

AI chatbots are increasingly popular for online interactions. Instead of sifting through multiple links, users can pose a question and receive a direct response. However, there is a significant downside: these tools sometimes deliver incorrect information, creating security risks. Cybersecurity experts warn that hackers are exploiting these weaknesses to launch AI-driven phishing attacks.

When users query AI for login pages, especially for banks and major tech services, they might receive misleading links. Clicking on these can redirect them to counterfeit websites designed to steal personal information and login credentials.

Key insights on AI phishing attacks

Recent tests by Netcraft examined the behavior of AI models such as GPT-4.1, which powers various AI tools. Researchers asked for the login URLs of 50 businesses across the banking, retail, and tech sectors. Of the 131 unique links the models generated, only about two-thirds were accurate. Alarmingly, around 30% pointed to unregistered or inactive domains, and about 5% to unrelated sites. In other words, roughly a third of the responses directed users toward potentially dangerous or fake destinations.

If attackers gain control over these unregistered domains, they can create convincing phishing sites, capitalizing on the trust users place in AI-generated answers, which often seem legitimate. This can increase the risk of individuals unknowingly navigating to these malicious sites.

A real-world example of AI phishing

In a recent incident, a user asked Perplexity AI for Wells Fargo’s login page, and the top result pointed to a phishing site rather than the official bank website. The imitation page closely resembled the original, coaxing users into entering their sensitive details. While genuine links appeared further down, many users would not think to verify the authenticity of the first link offered.

This situation highlights a broader issue not limited to a specific AI model. It stems from the misuse of legitimate platforms and the insufficient vetting of AI-generated outputs. The consequence? A trusted AI tool may inadvertently misdirect users to harmful financial websites.

Furthermore, smaller banks and local credit unions often face heightened vulnerabilities. Since they might not be well represented in AI training data, the risk of AI providing fabricated or incorrect links increases, potentially leading users to unsafe locations.

How to shield yourself from AI phishing attacks

As phishing scams continue to evolve, adopting sensible habits can enhance your security. Here are seven tips that can help:

1) Be cautious with AI-generated links

AI chatbots can sound certain, even when wrong. If an AI suggests a login link, it’s wiser to manually enter the URL or use a saved bookmark instead of clicking right away.

2) Verify the domain name

Phishing links frequently use similar-sounding domains. Look for subtle mistakes, additional words, or uncharacteristic endings, like “.site” or “.info” instead of “.com”. If something feels off, proceed with caution.
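The check described above can be automated. Below is a minimal Python sketch of the idea, using only the standard library’s urllib.parse; the allowlist of banking domains is a hypothetical example you would replace with the sites you actually use. Note that a simple check like this is a sanity filter, not a substitute for typing the official URL yourself.

```python
from urllib.parse import urlparse

# Hypothetical allowlist: domains you actually bank or log in with.
KNOWN_GOOD = {"wellsfargo.com", "chase.com"}

def looks_legitimate(url: str) -> bool:
    """Return True only if the URL's host is a known-good domain
    or a genuine subdomain of one (e.g. connect.secure.wellsfargo.com)."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in KNOWN_GOOD)

# A real subdomain of the bank passes:
print(looks_legitimate("https://connect.secure.wellsfargo.com/login"))  # True
# A lookalike that merely *starts* with the bank's name fails:
print(looks_legitimate("https://wellsfargo.login-secure.site/"))        # False
```

The key detail is matching on the end of the hostname with a leading dot: `wellsfargo.login-secure.site` contains the bank’s name but is registered under `login-secure.site`, which is exactly the kind of subtle mismatch this tip warns about.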

3) Enable two-factor authentication (2FA)

2FA adds an extra security layer that can keep attackers out even if your username and password are compromised. If available, opt for app-based verification rather than SMS codes, which are easier to intercept.

4) Avoid logging in through search engines or AI tools

When accessing sensitive accounts, it’s best to bypass searches or AI queries. Manually entering the official URL or relying on bookmarks is safer, as AI may inadvertently display phishing pages.

5) Report suspicious AI-generated links

If a chatbot provides a questionable link, report it. Feedback mechanisms help improve AI systems and can protect others from similar dangers.

6) Keep your browser updated and use robust antivirus software

Modern web browsers typically offer phishing and malware protection features. Make sure these are enabled, and consider installing reliable antivirus software to enhance your defenses against malicious links.

7) Utilize a password manager

Password managers not only create strong passwords but also help identify fake websites by not auto-filling login information on suspicious pages.

As threats evolve, maintaining a cautious approach is critical. Always verify URLs before entering sensitive information. AI can generate inaccurate responses, sometimes leading to harmful consequences. It’s essential to remain vigilant and critical about what you encounter online.

Should AI developers do more to mitigate phishing risks? Feel free to share your thoughts.
