AI / Tech · Cybersecurity

Hackers Used AI to Build a Zero-Day Exploit. They Had a Mass Attack Planned. Google Stopped Them First.

Cybersecurity experts have warned for years that AI would eventually be weaponized to build sophisticated hacking tools — not just to speed up existing techniques, but to discover and exploit vulnerabilities that human researchers hadn’t found yet. On May 11, 2026, Google announced that moment had arrived.

Google’s Threat Intelligence Group said it has “high confidence” that a criminal threat actor used an AI model to find and exploit a zero-day vulnerability — a previously unknown software flaw — in a widely used system administration tool, creating a bypass for two-factor authentication. The group was preparing a mass exploitation event. Google disrupted it.

§ 01 / What Happened

A criminal hacking group used an AI model to identify a zero-day vulnerability in a tool commonly used to administer computer systems. The AI-generated exploit was designed to bypass two-factor authentication — the second layer of verification (typically a text code or app prompt) that most security-conscious organizations rely on as a backstop against password theft.
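Google has not published which second-factor mechanism the exploit defeated or how. For context only, the sketch below shows how a standard time-based second factor (TOTP, RFC 6238) is derived and checked; it illustrates what "the second layer of verification" is, not the attackers' technique.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time code (RFC 6238) from a shared secret."""
    counter = struct.pack(">Q", at // step)          # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify(secret: bytes, submitted: str, now: int, window: int = 1) -> bool:
    """Accept codes from adjacent time windows to tolerate clock drift."""
    return any(
        hmac.compare_digest(totp(secret, now + drift * 30), submitted)
        for drift in range(-window, window + 1)
    )

# Demo secret from the RFC 6238 test vectors; real deployments share a
# per-user random secret with the authenticator app at enrollment.
secret = b"12345678901234567890"
print(totp(secret, at=59))  # 6-digit code for the window containing t=59
```

Because the server and the phone derive the same short-lived code independently, a stolen password alone is not enough — which is exactly why a working bypass of this layer is so valuable to attackers.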

The attack was not opportunistic. According to Google’s reporting, the group had staged what appeared to be a planned mass exploitation event — meaning they intended to deploy the exploit broadly and simultaneously across many targets, rather than using it surgically against a single high-value target. Google’s proactive counter-detection disrupted that plan before launch.

“It’s here. This is the moment cybersecurity experts have warned about for years: malicious hackers arming themselves with AI to supercharge their ability to break into the world’s computers.”

John Hultquist, chief analyst, Google Threat Intelligence Group · Fortune, May 11, 2026

§ 02 / The Mythos Connection

In April 2026, Anthropic delayed the public rollout of its Mythos AI model, citing concerns that the model’s capabilities could be used by bad actors to identify and exploit decades-old software vulnerabilities. That decision drew significant criticism at the time — it felt hypothetical, precautionary, perhaps overly cautious.

One month later, Google’s May 11 disclosure validated that concern. Researchers do not believe the exploit was built using Mythos or Google’s own Gemini — they believe it was built using a third-party AI model with similar capabilities. The “Mythos-like” framing in Bloomberg’s reporting refers to a capability class, not a specific model. Either way, the technical barrier to building this kind of tool has clearly been crossed.

What Is a Zero-Day?

A zero-day vulnerability is a software flaw that the developer doesn’t yet know about — so called because the developer has had zero days to patch it.

Zero-days are among the most valuable assets in the hacking economy. Nation-states pay millions for them. Criminal groups that discover them before patching happens can exploit them freely until disclosure forces a fix.

Using AI to automatically discover zero-days — a task that previously required expensive human researchers — would dramatically lower the cost of sophisticated cyberattacks and put nation-state-level capabilities in the hands of criminal organizations.
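AI-driven discovery is new, but automated bug hunting itself is not; the classical baseline is fuzzing, which mutates inputs, runs the target, and flags crashes as candidate vulnerabilities. A toy sketch of that loop, where both the parser and its planted flaw are invented for illustration:

```python
import random

def fragile_parser(data: bytes) -> int:
    """Stand-in target: checks a 2-byte magic header, with a planted bug."""
    if len(data) >= 4 and data[:2] == b"\xde\xad":
        if data[3] == 0xFF:
            raise IndexError("out-of-bounds read")   # the planted flaw
        return data[2]                               # "parsed" payload byte
    return -1                                        # input rejected

def mutate(seed: bytes) -> bytes:
    """Overwrite 1-4 random bytes of the seed input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def fuzz(target, seed: bytes, iterations: int = 200_000):
    """Hammer the target with mutated inputs; report the first crash."""
    random.seed(0)   # deterministic demo run
    for i in range(iterations):
        candidate = mutate(seed)
        try:
            target(candidate)
        except Exception:
            return i, candidate   # crash found: a candidate vulnerability
    return None

found = fuzz(fragile_parser, seed=b"\xde\xad\x00\x00")
```

Production fuzzers such as AFL++ and libFuzzer add coverage feedback to guide this blind search. The concern raised by Google’s disclosure is that an AI model can collapse the search further by reasoning about the target’s code directly, rather than guessing at inputs.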

§ 03 / Why This Matters

The broader significance isn’t just that one criminal group tried this and got caught. It’s that Google had to publicly disclose it at all — meaning the technique worked well enough to produce an operational exploit, meaning the capability is real, meaning other groups are almost certainly experimenting with the same approach.

The standard defensive response — patch quickly, enable MFA, segment networks — remains valid. But MFA was specifically what this exploit targeted. The entire security architecture of “password plus second factor” is now under active AI-assisted assault. Organizations running critical infrastructure, financial systems, and government networks need to treat this as a signal, not an anomaly.

Sources & Methodology · 9 Sources

Technical details sourced from Bloomberg’s reporting on Google Threat Intelligence Group disclosures (primary), confirmed by CNBC, Fortune, and UPI. John Hultquist quote cited directly from Fortune. Mythos context from Bloomberg Law and Fortune reporting on Anthropic’s April 2026 delay decision.

  1. Bloomberg — Google Researchers Detect First AI-Built Zero-Day Exploit in Cyberattack (May 11, 2026)
  2. Bloomberg Law — Google Says Hacker Used Mythos-Like AI for Software Tool Exploit
  3. CNBC — Google says it likely thwarted effort by hacker group to use AI for 'mass exploitation event'
  4. Fortune — 'It's here': Google issues dire warning after catching hackers using AI to break into computers
  5. Boston Globe — Google disrupts hackers using AI to exploit an unknown weakness in a company's digital defense
  6. UPI — Google says hackers used AI to exploit 'zero-day' flaw
  7. Spokesman-Review — Hackers used AI to build zero-day attack, Google researchers say
  8. KSAT — Google disrupts hackers using AI to exploit an unknown weakness in a company's digital defense
  9. Audacy / KCBS — Google disrupts hackers using AI to exploit an unknown weakness