Researchers at security firm SentinelOne have uncovered a massive spam campaign that leveraged OpenAI's chatbot to generate unique messages, bypass spam filters, and target more than 80,000 websites over four months. Once again, con artists are at the vanguard of misusing AI for illicit profit.
As Ars Technica reports, researchers at SentinelOne's SentinelLabs revealed that spammers used OpenAI's chatbot to launch a massive spam campaign targeting over 80,000 websites. The findings, published in a blog post on Wednesday, shed light on how the same capabilities that make large language models (LLMs) valuable for legitimate purposes can just as easily be exploited for malicious activity.
The spam campaign, powered by a framework called AkiraBot, was designed to promote dubious search engine optimization (SEO) services to small and medium-sized websites. By calling OpenAI's chat API with the GPT-4o-mini model, AkiraBot generated a unique message tailored to each target website, effectively evading spam-detection filters that block identical content sent in bulk.
To achieve this, AkiraBot assigned OpenAI's chat API the role of a "helpful assistant that generates marketing messages" and supplied a prompt instructing the LLM to substitute the target site's name into the message at runtime. As a result, each message body contained the recipient's website name and a brief description of its services, creating the illusion of a hand-crafted message.
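The researchers' description amounts to simple prompt templating: a fixed system role plus a per-site user prompt whose variable is filled in for each request. The sketch below illustrates that pattern in Python; the function name, placeholder, and prompt wording are hypothetical illustrations, not AkiraBot's actual code, and no API call is made.

```python
from string import Template

# Fixed system role, paraphrasing the one SentinelLabs describes.
SYSTEM_ROLE = "You are a helpful assistant that generates marketing messages."

# Hypothetical per-site template; "$site" is replaced at runtime.
PROMPT_TEMPLATE = Template(
    "Write a short outreach message to the owners of $site, "
    "briefly describing their business and pitching our SEO services."
)

def build_messages(site: str) -> list[dict]:
    """Assemble a chat-API payload for one target site."""
    return [
        {"role": "system", "content": SYSTEM_ROLE},
        {"role": "user", "content": PROMPT_TEMPLATE.substitute(site=site)},
    ]

# Because the site name is substituted per request, the model produces a
# different message body each time, defeating filters that key on
# repeated, identical content.
payload = build_messages("example-bakery.com")
print(payload[1]["content"])
```

Sending this payload to a chat-completion endpoint would yield a distinct message per site, which is why the researchers say duplicate-content filters were ineffective against the campaign.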
SentinelLabs researchers Alex Delamotte and Jim Walter highlighted the new challenges AI-generated spam poses for defenders. Because the content of the spam messages no longer follows the consistent patterns of earlier campaigns, they noted, the rotating set of domains used to promote the SEO offerings is now the easiest indicator to track.
The scale of the campaign was revealed through log files AkiraBot left on a server, which tracked success and failure rates. The data showed that unique messages were successfully delivered to more than 80,000 websites between September 2024 and January 2025, while messages targeting roughly 11,000 domains failed.
OpenAI acknowledged the researchers' findings and reiterated that such use of its chatbot violates its terms of service. The company disabled the spammers' account after receiving SentinelLabs' disclosure. However, the fact that the activity went unnoticed for four months underscores the reactive nature of enforcement rather than proactive measures to prevent abuse.
Read more at Ars Technica here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.





