Hackers Used AI to Build a Zero-Day Exploit. They Had a Mass Attack Planned. Google Stopped Them First.
- High confidence — Google Threat Intelligence Group's certainty level that AI was used to build the zero-day exploit — Bloomberg
- Zero-day vulnerability — a software flaw unknown to the developer; this one allowed attackers to bypass two-factor authentication — Bloomberg
- Mass exploitation event planned — criminal group intended to deploy the AI-built exploit at scale before Google disrupted it — CNBC
- Not Mythos or Gemini — researchers believe the model used was a third-party AI with Mythos-like capabilities — Bloomberg
Cybersecurity experts have warned for years that AI would eventually be weaponized to build sophisticated hacking tools — not just to speed up existing techniques, but to discover and exploit vulnerabilities that human researchers hadn’t found yet. On May 11, 2026, Google announced that moment had arrived.
Google’s Threat Intelligence Group said it has “high confidence” that a criminal threat actor used an AI model to find and exploit a zero-day vulnerability — a previously unknown software flaw — in a widely used system administration tool, creating a bypass for two-factor authentication. The group was preparing a mass exploitation event. Google disrupted it.
The exploit targeted a tool commonly used to administer computer systems, and it was designed to bypass two-factor authentication — the second layer of verification (typically a text code or app prompt) that most security-conscious organizations rely on as a backstop against password theft.
The attack was not opportunistic. According to Google’s reporting, the group had staged what appeared to be a planned mass exploitation event — meaning they intended to deploy the exploit broadly and simultaneously across many targets, rather than using it surgically against a single high-value target. Google’s proactive counter-detection disrupted that plan before launch.
“It’s here. This is the moment cybersecurity experts have warned about for years: malicious hackers arming themselves with AI to supercharge their ability to break into the world’s computers.”
John Hultquist, chief analyst, Google Threat Intelligence Group — Fortune, May 11, 2026
In April 2026, Anthropic delayed the public rollout of its Mythos AI model, citing concerns that the model’s capabilities could be used by bad actors to identify and exploit decades-old software vulnerabilities. That decision drew significant criticism at the time — it felt hypothetical, precautionary, perhaps overly cautious.
Google’s May 11 disclosure validated Anthropic’s concern one month later. Researchers do not believe the exploit was built using Mythos or Google’s own Gemini — they believe it was built using a third-party AI model with similar capabilities. The “Mythos-like” framing in Bloomberg’s reporting refers to capability class, not the specific model. The technical barrier to building this kind of tool has clearly been crossed.
A zero-day vulnerability is a software flaw that the developer doesn’t yet know about — meaning the developer has had zero days to patch it.
Zero-days are among the most valuable assets in the hacking economy. Nation-states pay millions for them. Criminal groups that find one first can exploit it freely until disclosure forces a fix.
Using AI to automatically discover zero-days — a task that previously required expensive human researchers — would dramatically lower the cost of sophisticated cyberattacks and put nation-state-level capabilities in the hands of criminal organizations.
The broader significance isn’t just that one criminal group tried this and got caught. It’s that Google had to disclose it publicly at all: the technique worked well enough to produce an operational exploit, the capability is real, and other groups are almost certainly experimenting with the same approach.
The standard defensive response — patch quickly, enable multi-factor authentication (MFA), segment networks — remains valid. But MFA was specifically what this exploit targeted. The entire security architecture of “password plus second factor” is now under active AI-assisted assault. Organizations running critical infrastructure, financial systems, and government networks need to treat this as a signal, not an anomaly.