Google announced on Monday that it had thwarted a criminal group's attempt to exploit a previously unknown software vulnerability with the help of artificial intelligence. The news has heightened concerns in both government and the private sector about AI's role in cybersecurity.
While details about the attackers and their target were limited, John Hultquist, the chief analyst for Google’s threat intelligence division, emphasized that this incident marks a pivotal moment that experts have long anticipated—malicious hackers harnessing AI to enhance their capabilities. “It’s here,” he stated, acknowledging that the era of AI-driven vulnerabilities has already begun.
This comes amid significant advances in AI's ability to identify such vulnerabilities, including Mythos, a model recently introduced by Anthropic. The Trump administration, meanwhile, has shifted its approach to AI oversight since rolling back certain regulations imposed by the Biden administration.
Dean Ball, a senior fellow at the Foundation for American Innovation and a former White House tech policy advisor, pointed to mixed signals from the current administration about whether a regulatory response is needed. "Some people don't want there to be a regulatory response to this and others do," he noted, adding that while he personally prefers to avoid regulation, he sees it as necessary in this instance.
Google’s Findings in the Cyberattack
Google reported observing a prominent hacking group planning a significant operation built on a vulnerability the group had discovered. The flaw allowed the attackers to bypass two-factor authentication and access a well-known system administration tool, which Google declined to name.
The company categorized this as a zero-day exploit, meaning the flaw was unknown to security engineers prior to the attack, leaving them with no time to develop a remedy. Google managed to alert the affected company and interrupt the hackers’ plans before any damage occurred. During this process, they determined that an AI large language model—similar to those used in popular chatbots—was utilized to uncover the vulnerability.
Google did not specify which AI model was involved, but said it was likely not its own Gemini or Anthropic's Claude Mythos. Nor did the company name the group it suspects, though it said there was no indication of involvement by a hostile government, even as groups associated with China and North Korea have been exploring comparable methods.
Hultquist pointed out that criminal hackers stand to benefit significantly from AI's speed, in contrast to government spies, who typically operate more discreetly. "There's a race between you and them to stop them… AI is going to be a huge advantage because they can move a lot faster," he said in an interview.
Recent Developments and Concerns
The Trump administration's Commerce Department recently announced new agreements with major tech companies, including Google and Microsoft, to assess their most potent AI models prior to public release, though the announcement has since been removed from the department's website.
This lack of clarity follows Anthropic’s launch of Mythos, a model deemed so powerful for hacking and cybersecurity that its release has been limited to a select group of trusted organizations. In response, Anthropic created Project Glasswing, uniting tech leaders like Google and Microsoft along with financial institutions such as JPMorgan Chase to secure critical software from potential threats posed by advanced AI models.
Yet Anthropic's relationship with the U.S. government appears complicated, particularly amid legal disputes over the military applications of its technology. OpenAI, Anthropic's chief competitor, has also released a specialized version of ChatGPT aimed specifically at helping defenders secure critical infrastructure.
Ball expressed optimism that, in the long term, effective AI tools could improve defenses against routine cyber threats. However, he acknowledged that the vast body of software code underpinning computing systems remains vulnerable if AI is harnessed for exploitation. Strengthening that software could take years, he believes, and would benefit significantly from coordinated government efforts.
For now, Ball foresees a “transitional period” where cybersecurity risks may escalate, potentially making the world a riskier place.