A Roblox Cheat Script.
One OAuth Token.
And Vercel Got Hacked.
The attack began with a Roblox cheat download. Sometime around February 2026, an employee at Context.ai, a small AI productivity platform that provides an “AI Office Suite” layered over Google Workspace, downloaded what appeared to be a video game exploit script. The file was Lumma Stealer, a widely sold infostealer malware. It harvested the employee’s corporate credentials for Google Workspace, Supabase, Datadog, and Authkit, including the support@context.ai account. Those credentials became the attacker’s entry point into Vercel.
A Vercel employee had connected their Vercel enterprise Google Workspace account to Context.ai and granted the app “Allow All” OAuth permissions. Using the stolen Context.ai credentials, the attacker weaponized that OAuth token and gained read access to the Vercel employee’s Google Workspace, and from there used Vercel’s own product API to enumerate and read environment variables stored in plaintext across a subset of customer projects. On April 19, 2026, Vercel disclosed the breach. CEO Guillermo Rauch posted the attack chain publicly on X the same day.
A threat actor using the ShinyHunters handle appeared on BreachForums claiming to be selling a Vercel database — API keys, NPM tokens, GitHub tokens, and 580 employee records — for $2 million. Known ShinyHunters members denied involvement to BleepingComputer; attribution is disputed. What is not disputed: this was a textbook AI supply-chain attack, executed at a velocity Rauch publicly attributed to AI-augmented adversary tradecraft — and the implications extend far beyond Vercel.
- 3 hops of supply-chain escalation: Lumma Stealer → Context.ai employee credentials → Vercel employee OAuth token → Vercel customer environment variables
- Feb 2026, initial compromise: Context.ai employee infected with Lumma Stealer, approximately two months before Vercel's April 19 disclosure
- $2M BreachForums asking price: threat actor using the ShinyHunters handle claimed API keys, NPM tokens, GitHub tokens, and 580 employee records; known ShinyHunters members denied involvement
- 0 npm packages compromised: verified in collaboration with GitHub, Microsoft, npm, and Socket; Next.js (6M weekly downloads) and Turbopack were not affected
The Vercel breach did not begin at Vercel. It began on a personal computer at Context.ai, a small AI startup whose product — the “AI Office Suite” — sits on top of Google Workspace and is designed to give enterprise users an AI layer over their email, documents, and calendar. In approximately February 2026, one of Context.ai’s employees searched for Roblox game exploits and downloaded what appeared to be an “auto-farm” cheating script.
It was Lumma Stealer, one of the most widely deployed commercial infostealers in circulation, sold as malware-as-a-service on darknet forums for a few hundred dollars per month. Roblox cheat tools and game exploit scripts are among the documented delivery channels for Lumma. The malware harvested the employee’s saved browser credentials and session cookies, giving the attacker access to:
Google Workspace: Corporate Google account credentials — the core of Context.ai’s OAuth-based product.
Supabase: Backend database credentials.
Datadog: Observability and logging platform access.
Authkit: Authentication infrastructure keys.
support@context.ai account: Customer-support address, potentially enabling privilege escalation and social engineering within Context.ai’s customer base.
The Context.ai breach would have been a contained vendor incident — had it not been for a single employee at Vercel who had connected their enterprise credentials to the platform.
Vercel is not a Context.ai enterprise customer. The company did not have a formal vendor relationship with Context.ai. What it had was a single employee who had, apparently without IT department knowledge, connected their Vercel enterprise Google Workspace account to Context.ai and clicked “Allow All” on the OAuth permissions screen.
That single decision created a path the attacker found and exploited. Using the stolen Context.ai Google Workspace credentials, the attacker accessed the employee’s Vercel enterprise account. From there, they navigated Vercel’s internal product API — which Rauch assessed as being done with unusual velocity and “in-depth understanding of Vercel’s product API surface” — to locate and decrypt environment variables stored in plaintext across customer projects.
Step 1 — Endpoint compromise: Lumma Stealer infects Context.ai employee machine (circa February 2026). Google Workspace credentials harvested.
Step 2 — OAuth token abuse: Attacker uses stolen Context.ai credentials to locate and weaponize an OAuth token belonging to a Vercel enterprise Google Workspace account that had previously authorized Context.ai with “Allow All” permissions.
Step 3 — Vercel access: OAuth token grants attacker read access to the Vercel employee’s Google Workspace. From there, attacker uses Vercel’s product API to access internal systems.
Step 4 — Environment variable enumeration: Attacker enumerates and extracts environment variables not flagged as sensitive (API keys, database credentials, tokens, signing keys) stored in plaintext in customer Vercel projects.
Step 5 — Discovery and disclosure: Vercel detects the intrusion. April 19, 2026: Vercel publishes security bulletin. CEO Guillermo Rauch posts the attack chain on X.
“Here's my update to the broader community about the ongoing incident investigation. I want to give you the rundown of the situation directly. A Vercel employee got compromised via the breach of an AI platform customer called context.ai that he was using.”
Guillermo Rauch, CEO — Vercel · X · April 19, 2026
On April 19, 2026, Rauch posted the breach disclosure directly to X alongside the official Knowledge Base bulletin. The post named Context.ai by domain, described the OAuth vector, and included Rauch’s own assessment of the attacker’s tradecraft.
Here’s my update to the broader community about the ongoing incident investigation. I want to give you the rundown of the situation directly. A Vercel employee got compromised via the breach of an AI platform customer called context.ai that he was using. The details: [thread]. We assess the attacker as highly sophisticated based on their operational velocity and in-depth understanding of Vercel’s product API surface.
Rauch’s characterization of the attacker as “highly sophisticated” based on “operational velocity” is a noteworthy framing. The subtext is AI augmentation: the speed of the lateral movement through Vercel’s API surface suggested to Vercel’s team that the attacker was not operating at normal human pace. VentureBeat reported that Rauch specifically attributed the velocity to AI assistance — an early, high-profile data point in the 2026 discourse around AI-accelerated adversary tradecraft.
Vercel’s disclosure drew a specific architectural distinction that became the center of post-incident security debate: the difference between sensitive environment variables (which Vercel encrypts at rest and stores in a way that prevents programmatic reading, even from internal systems) and non-sensitive environment variables (which are stored in plaintext and are accessible once you have sufficient API access).
Non-sensitive env vars (exposed): API keys, database credentials, third-party service tokens, signing keys, and other plaintext secrets stored in customer Vercel projects that were not explicitly marked as “sensitive” by the customer at configuration time. These were enumerated and exfiltrated by the attacker.
Sensitive env vars (protected): Variables explicitly flagged as sensitive are stored with encryption that prevents them from being read through Vercel’s API surface, even by internal tools. Vercel: “there is currently no evidence suggesting those values were accessed.”
npm packages (not compromised): Verified in collaboration with GitHub, Microsoft, npm, and Socket. The Next.js package (approximately 6 million weekly downloads) and Turbopack were unaffected. Vercel’s open-source projects were not touched.
Scope of affected customers: Vercel described it as “a limited subset.” The company directly notified affected customers and urged immediate credential rotation. Exact numbers were not publicly disclosed.
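The exposed/protected distinction above can be checked from the customer side. Below is a minimal audit sketch; it assumes the Vercel REST API exposes project environment variables at `GET /v9/projects/{id}/env` and marks protected ones with a `type` of `"sensitive"` (verify both the path and field names against current Vercel API documentation before relying on this):

```python
import json
import urllib.request

VERCEL_API = "https://api.vercel.com"  # assumed base URL

def fetch_project_envs(project_id: str, token: str) -> list[dict]:
    """Fetch env var metadata for one project (endpoint path is an assumption)."""
    req = urllib.request.Request(
        f"{VERCEL_API}/v9/projects/{project_id}/env",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["envs"]

def flag_unprotected(envs: list[dict]) -> list[str]:
    """Return names of variables not stored with the 'sensitive' type,
    i.e. values an API caller with project access could read back."""
    return sorted(e["key"] for e in envs if e.get("type") != "sensitive")
```

Running `flag_unprotected` over each project yields, in effect, a rotation list: after an incident like this, anything it returns should be treated as compromised.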
GitGuardian pushed back on Vercel’s “non-sensitive” framing in a post-incident analysis. The argument: the label is a Vercel-side categorization, not a security assessment. API keys and database credentials are not inherently “non-sensitive.” They are sensitive; they just weren’t stored with Vercel’s higher-tier encryption because the customer didn’t configure them that way. GitGuardian’s guidance was that every exposed secret should be treated as compromised and rotated immediately, regardless of how the platform labeled it.
While Vercel was managing its incident response, a post appeared on BreachForums — a well-known marketplace for stolen data — from a user claiming to be affiliated with ShinyHunters, the extortion group previously tied to the 2024 Ticketmaster breach. The listing claimed to include:
API keys from customer Vercel projects
NPM tokens — access tokens for package publishing on the npm registry
GitHub tokens — OAuth tokens for GitHub repository access
580 employee records
Source code fragments
Asking price: $2 million USD in Bitcoin
Attribution: contested. Known ShinyHunters members told BleepingComputer they had no involvement in this incident. The ShinyHunters brand has been used by multiple, potentially unrelated actors since the group’s original campaigns. The listing’s accuracy and completeness are not independently verified.
The security researcher account @k1rallik (BuBBliK) on X was among the first to amplify the BreachForums listing publicly, posting: “VERCEL GOT HACKED — ShinyHunters is selling Vercel’s internal database for $2M on BreachForums — here’s why every developer should care: they have NPM tokens and GitHub tokens. Vercel owns Next.js — 6 million weekly downloads.”
Vercel’s verification with GitHub, Microsoft, npm, and Socket subsequently confirmed that the npm packages were not compromised — meaning if the claimed NPM and GitHub tokens were real, they were either not weaponized for supply-chain poisoning, or the incident response contained that vector before it was exploited.
The Context.ai-originated breach was not the only compromise Vercel uncovered. By April 22–24, 2026, Vercel’s forensic investigation — conducted with Google Mandiant — surfaced a second, independent compromise predating the April incident. TechCrunch reported on April 23 that this earlier intrusion involved account takeovers that occurred through a separate attack path from the Context.ai chain.
Vercel sent a second round of customer notifications covering the expanded scope. The company identified “a small number of additional accounts” compromised as part of the incident and notified affected customers directly. The exact nature of the second vector — whether social engineering, a separate malware infection, or a previously undetected credential compromise — was not publicly specified in Vercel’s disclosures as of late April 2026.
The most significant finding in the post-incident security analysis is not technical — it is governance. Vercel is a sophisticated technology company with a professional security team. It had vendor risk management processes. And yet a single employee was able to connect a third-party AI tool to their enterprise Google Workspace account with unrestricted OAuth permissions, without IT review, without a vendor security assessment, and without anyone flagging it until the breach was already underway.
“The employee's self-provisioning of a third-party AI tool with enterprise credentials — without IT knowledge — is the core governance failure the incident illustrates.”
Dark Reading — Vercel Employee’s AI Tool Access Led to Data Breach · April 2026
AI tools are designed to be frictionless. Context.ai’s product is a one-click install that requests broad Google Workspace permissions. The UX is designed to minimize friction. The security implications of “Allow All” are not surfaced to the employee in that flow.
Enterprise OAuth policies are hard to enforce. Google Workspace administrators can restrict which third-party apps are permitted to connect. But in practice, many enterprises have permissive OAuth policies because restricting them generates IT support tickets. Vercel’s internal OAuth configuration “appears to have allowed this action to grant these broad permissions,” per the incident report.
The traffic looks authorized. Push Security noted that the attack generated traffic indistinguishable from normal authorized tool behavior. Standard network monitoring and SIEM alerting would not flag an OAuth token being used by its legitimate application. The intrusion is specifically designed to operate below the visibility threshold of most security teams.
AI SaaS adoption is accelerating faster than policy. The Cloud Security Alliance classified this attack vector as a new enterprise risk category: AI SaaS tools with broad OAuth permissions creating invisible lateral movement paths into downstream infrastructure.
Obsidian Security called it “fourth-party risk”: not just that Vercel was compromised through a vendor (Context.ai), but that Vercel’s role as the deployment platform for thousands of customer applications created a second layer — the risk that Vercel’s breach could be leveraged to compromise Vercel’s customers downstream. That second-order risk is why the npm package verification was a priority in the incident response. If the attacker had found a path to inject malicious code into a widely-used Next.js release, the blast radius would have extended to every application built on that version.
Rauch’s public assessment that the attacker’s velocity suggested AI augmentation is significant because it comes from the CEO of a company that builds developer infrastructure and knows its own API surface better than any outside analyst. The claim is not that the attacker used a novel AI tool — it is that the speed of reconnaissance and lateral movement through Vercel’s systems was faster than a human expert operating alone.
Dwell time assumptions break down. Enterprise incident response processes are calibrated for attacks that unfold over days or weeks. If AI augmentation compresses the attack cycle to hours, the detection window shrinks to a point where automated response — not human analysis — is the only realistic countermeasure.
Alert thresholds are miscalibrated. SIEM rules tuned to flag slow, patient lateral movement will miss a fast AI-driven campaign. The attack velocity that looks anomalous to a human analyst looks like a service account doing its job to an alert that expects low-and-slow adversary behavior.
Recon is now nearly free. Mapping an API surface — understanding which endpoints exist, which permissions chain to what, and where environment variable storage is accessible — is a task that previously required significant attacker time and expertise. AI-assisted API recon can compress that from hours to minutes.
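One way to act on the miscalibrated-thresholds point is to alert on token velocity in addition to low-and-slow heuristics. A minimal sliding-window sketch (the window size and call threshold are illustrative, not tuned values):

```python
from collections import deque

def exceeds_velocity(timestamps: list[float],
                     window_s: float = 60.0,
                     max_calls: int = 100) -> bool:
    """True if more than max_calls API calls from one OAuth token fall
    inside any window_s-second sliding window -- a crude proxy for
    machine-speed reconnaissance through an otherwise-authorized grant."""
    window: deque[float] = deque()
    for t in sorted(timestamps):
        window.append(t)
        # Drop calls that fell out of the sliding window.
        while window and t - window[0] > window_s:
            window.popleft()
        if len(window) > max_calls:
            return True
    return False
```

A token that suddenly issues hundreds of calls per minute trips the check even though every individual request looks authorized; normal interactive use does not.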
Vercel’s April 19 disclosure included incident-response commitments and recommended actions for affected customers. The investigation was ongoing at the time of publication; Mandiant was retained for forensic analysis; law enforcement was notified.
Customer notification: All affected customers directly notified; credential rotation urged with immediate effect.
Internal OAuth policy review: Vercel committed to reviewing and tightening its enterprise Google Workspace OAuth policies to prevent future self-provisioning of third-party tools with broad permissions.
Environment variable architecture: Vercel began reviewing how the distinction between “sensitive” and “non-sensitive” environment variables is communicated and defaulted at project configuration time. GitGuardian and others argued the default should flip — treat everything as sensitive unless actively opted down.
npm supply chain verification: GitHub, Microsoft, npm, and Socket all participated in verifying that no package code was affected.
Mandiant engagement: Ongoing forensic investigation; no public completion date announced as of late April 2026.
The recommended customer action list from Vercel, GitGuardian, and the security community: rotate all non-sensitive environment variables that were stored in plaintext; treat any API key, database credential, token, or signing key as potentially compromised if it was stored in a Vercel project that was not using sensitive variable encryption; audit Google Workspace third-party OAuth app permissions immediately; revoke any apps with “Allow All” or equivalent broad scopes that are not actively managed by IT.
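The OAuth-audit step in that action list can be partially automated. The Google Workspace Admin SDK Directory API exposes per-user third-party token grants via `tokens.list`; the sketch below assumes an already-built, admin-authorized `directory` service client, and the broad-scope set is an illustrative starting point rather than a complete denylist:

```python
# Scopes that amount to "Allow All"-style access (illustrative, not exhaustive).
BROAD_SCOPES = {
    "https://mail.google.com/",                  # full Gmail access
    "https://www.googleapis.com/auth/drive",     # full Drive access
    "https://www.googleapis.com/auth/calendar",  # full Calendar access
}

def list_user_tokens(directory, user_key: str) -> list[dict]:
    """Fetch one user's third-party OAuth grants via the Admin SDK
    Directory API (requires an admin-authorized service client)."""
    return directory.tokens().list(userKey=user_key).execute().get("items", [])

def broad_scope_grants(tokens: list[dict]) -> list[dict]:
    """Flag grants whose scopes include any broad scope that deserves
    manual review or revocation."""
    return [
        {"app": t.get("displayText", t.get("clientId", "?")),
         "scopes": sorted(set(t.get("scopes", [])) & BROAD_SCOPES)}
        for t in tokens
        if set(t.get("scopes", [])) & BROAD_SCOPES
    ]
```

An IT team would loop this over the user directory and review every hit; revoking a flagged grant uses the same API's `tokens.delete` method.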
The Vercel breach arrived in a year already defined by AI SaaS supply chain incidents. The Cloud Security Alliance flagged a broader pattern: the rapid, often ungoverned adoption of AI productivity tools in 2025–2026 has created a new attack surface category that sits below the visibility of most enterprise security programs.
The template: (1) Target a small AI SaaS vendor whose product holds broad OAuth permissions in enterprise environments. (2) Compromise the vendor through any vector: malware, phishing, credential stuffing. (3) Use the vendor’s OAuth footprint to pivot into enterprise customers who self-provisioned the tool without IT review. (4) Enumerate and exfiltrate credentials stored in cloud deployment platforms that the compromised employee account can reach.
Why AI tools are the preferred vector: They are designed to request broad permissions (they need to see your email, calendar, and documents to be useful). They are adopted individually by employees who want to be more productive and do not wait for IT procurement. They generate authorized-looking OAuth traffic that is not flagged by conventional SIEM rules. And their vendor security postures are often immature relative to the permissions they hold.
The Vercel breach fit this pattern precisely: Context.ai had a Lumma Stealer infection that went undetected for approximately two months. The attacker’s use of the harvested OAuth token generated no alerts until Vercel detected anomalous API activity. By then, environment variables had already been exfiltrated.
“The Vercel breach didn't start at Vercel. It started at an AI tool nobody was watching.”
Kiteworks Substack — post-incident analysis · April 2026
The security community’s consensus guidance coming out of the Vercel incident is direct: enterprise security teams need to audit and harden their Google Workspace (and Microsoft 365) OAuth app inventories now, before a vendor they’ve never heard of gets compromised and provides the on-ramp. The attack surface is not a future risk. It is operational.
A Context.ai employee downloaded a Roblox cheat script in February 2026. It was Lumma Stealer. Two months later, an attacker used the harvested OAuth credentials to pivot through a Vercel employee’s “Allow All” Google Workspace permission into Vercel’s infrastructure, enumerating and extracting environment variables across a limited subset of customer projects. Vercel disclosed on April 19. CEO Rauch assessed the attacker as AI-augmented. A ShinyHunters-attributed listing appeared on BreachForums for $2 million; attribution is disputed. No npm packages were compromised. A second, earlier breach was later uncovered. The technical root cause is an OAuth token with “Allow All” permissions granted to an unreviewed third-party AI tool. The governance root cause is that the AI SaaS tool was self-provisioned by one employee, without IT review and without a vendor security assessment, exactly as thousands of employees do every day across enterprises whose security teams have no visibility into it.
Tier 1: Vercel’s own official security bulletin and CEO Guillermo Rauch’s public statement on X. Tier 2: CyberScoop, The Hacker News, BleepingComputer, TechCrunch, SecurityWeek, VentureBeat, Dark Reading. Tier 3: Vendor security research (Trend Micro, OX Security, Push Security, Obsidian Security, reco.ai, GitGuardian, CSA, InfoStealers). Where scope claims differ between Vercel’s official disclosure and threat-actor claims on BreachForums, Vercel’s verified disclosure is used in body text. ShinyHunters attribution is contested — known group members denied involvement to BleepingComputer; the handle is used in this article with that caveat stated. No CVE numbers have been assigned for this incident. The Mandiant investigation was ongoing as of late April 2026; findings may be updated.