Sometime in February 2026, a developer at Context AI downloaded Roblox cheat software: game exploit files, the kind of download that, from a security standpoint, already tells you where this is going.
The download installed Lumma Stealer on their machine. It swept through stored credentials: Google Workspace login details, keys for Supabase, Datadog, Authkit, and the company's support email account. All of it left the machine and landed in attacker hands.
Two months later, Vercel confirmed a breach of its internal systems to the press. Customer data was listed for sale on BreachForums for $2 million.
The path between those two events is the story of what happens when an employee connects an AI productivity tool to a corporate account with maximum permissions, nobody notices, and then the tool's own security fails.
How the breach unfolded
Context AI built an AI Office Suite product, a consumer app with a Chrome extension that let users connect their Google Drive. The extension had a straightforward onboarding flow: connect your Google account, grant access, and the tool could search and use your documents. The permission it requested was broad. Users who completed the onboarding granted Context AI full read access to everything in their Google Drive.
At some point, a Vercel employee signed up for Context AI using their Vercel enterprise Google account. This was not a formal company procurement decision. Context AI confirmed in their security update that Vercel was not a Context customer at the corporate level. At least one Vercel employee had signed up individually, connected their work account, and clicked through the permissions screen. They selected "Allow All."
Vercel's internal OAuth configurations allowed that action to grant broad permissions within Vercel's enterprise Google Workspace. That is the detail that turned a personal productivity choice into a corporate security incident.
When attackers compromised Context AI's infrastructure using the credentials stolen from that infected employee's machine, they found OAuth tokens for Context AI's consumer users. One of those tokens connected to a Vercel enterprise account. They used it to take over that employee's Vercel Google Workspace account. From there, they accessed internal Vercel environments and environment variables that were not marked as "sensitive" and therefore stored unencrypted. Those variables gave them a foothold to move further into Vercel's infrastructure.
According to the threat actor's post on BreachForums, what they obtained included a Vercel database access key and portions of source code. The asking price was $2 million.
What Vercel has confirmed
Vercel published a security bulletin disclosing the incident and has been working with Google's Mandiant and other cybersecurity firms. They described the threat actor as "sophisticated" based on their operational velocity and detailed understanding of Vercel's systems. They have notified a limited subset of customers whose credentials were compromised and are urging immediate credential rotation.
Vercel CEO Guillermo Rauch confirmed on X that Next.js, Turbopack, and Vercel's open source projects remain safe. The company has also rolled out new dashboard capabilities including an improved environment variables overview and better tooling for sensitive variable management.
But the breach itself may affect hundreds of users across many organizations, not just Vercel's own systems, and the downstream exposure through compromised credentials could reach further into the tech ecosystem.
Vercel has advised Google Workspace administrators to check for the following OAuth application:
110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
And the Context AI Chrome Extension ID:
omddlmnhcofjbnbflmjginpjjblphbgk
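If you export the third-party token grants for your Workspace users (for example via the Admin SDK Directory API's tokens.list method, which returns records including a clientId and the granted scopes), checking for the compromised application is a simple filter. A minimal sketch; the record shape and sample data below are illustrative stand-ins for real API output:

```python
# Sketch: flag users who granted access to the compromised Context AI
# OAuth client. The `grants` records mimic the shape of Admin SDK
# Directory API tokens.list output (clientId, scopes), plus a userKey
# noting whose account the grant came from. Populate from the real API.

CONTEXT_AI_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def find_compromised_grants(grants, client_id=CONTEXT_AI_CLIENT_ID):
    """Return (user, scopes) pairs for every grant to the given client ID."""
    return [
        (g["userKey"], g["scopes"])
        for g in grants
        if g["clientId"] == client_id
    ]

# Illustrative sample data -- replace with your exported token grants.
grants = [
    {"userKey": "alice@example.com",
     "clientId": CONTEXT_AI_CLIENT_ID,
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"userKey": "bob@example.com",
     "clientId": "some-other-app.apps.googleusercontent.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

hits = find_compromised_grants(grants)
for user, scopes in hits:
    print(f"{user} granted Context AI: {', '.join(scopes)}")
```

Any user this turns up should have the grant revoked and their credentials rotated, per Vercel's guidance above.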
The entry point nobody was watching
Read the attack chain in order and the technical steps are interesting. The Lumma Stealer infection at Context AI, the OAuth token pivot, the access to non-sensitive but still exploitable environment variables. Researchers at OX Security have mapped the full incident flow in detail.
But the moment the entire chain actually started was not technical. It was a Vercel employee using a personal AI productivity tool with their corporate account, granting it full access to their work Google Drive, through a consumer app the company had not vetted or approved.
That specific moment — an employee connecting an unauthorized AI tool to enterprise infrastructure with broad permissions — is a security event that most organizations would have no visibility into. It does not show up in a vulnerability scan. It does not trigger a firewall alert. It happens through a perfectly ordinary-looking OAuth consent screen on a Wednesday afternoon, and then it sits there as a latent exposure until something else goes wrong.
In this case, what went wrong was a compromised third-party vendor. But the real exposure existed before any of that.
The shadow AI problem is structural, not accidental
The term "shadow IT" has been around for decades. Employees use tools their company has not approved because those tools are useful, fast, and nobody said they couldn't. Shadow AI is the same phenomenon with higher stakes.
AI productivity tools are proliferating faster than any compliance or procurement process can track them. They are often built around broad data access, because that is what makes them useful: connect your email, your calendar, your documents, and the AI can help you synthesize across all of it. The OAuth permissions that enable this capability are the same ones that become a liability when the vendor experiences a breach, a credential theft, or a compromise of their own infrastructure.
The Vercel employee who signed up for Context AI was almost certainly not thinking about supply chain risk. They found a useful tool, connected it to their work account, and moved on. This is not unusual behavior. In most organizations, it is effectively the norm.
Security Affairs noted that this breach fits a pattern of systematic attacks against organizations through their AI tool supply chains. The pattern typically involves: a vendor or tool employee becomes a malware victim through personal activity, their credentials expose the vendor's OAuth infrastructure, and attackers use that access to pivot into enterprise systems connected through individual employee accounts. The Vercel attack is not an isolated incident. It is a template that works because the attack surface it exploits — individual employees connecting AI tools to corporate accounts — keeps growing.
Why AI tools make this attack surface distinctly dangerous
When a developer connects a generic productivity app to their work Google account, the typical risk is that the app's vendor might misuse their data. The newer risk is that the app's vendor is itself a target, and the OAuth token an employee granted to that vendor in a consumer context is now a vector into enterprise infrastructure.
AI tools compound this in several ways.
First, they are designed for broad access. A note-taking app might need to read one folder. An AI Office Suite that synthesizes across your documents, emails, and calendar needs read access across all of them. The surface area of what an attacker can reach through a compromised AI tool OAuth token is larger than through most other categories of consumer software.
Second, they are updated frequently and at the consumer pace, not the enterprise security review pace. The Context AI Chrome extension that connected to user Google Drives was removed on March 27, 2026. Between when it launched and when it was removed, an unknown number of enterprise employees had connected work accounts to it.
Third, they are often invisible to security teams. Most organizations have some visibility into corporate SaaS procurement. They have almost no visibility into consumer AI tools individual employees connect to work accounts on their own initiative. There is no ticket, no approval workflow, no record. There is just an OAuth connection sitting on a user's Google account page.
What "Allow All" actually means in 2026
Context AI's security update acknowledged that Vercel's breach traced back to an employee who had connected their Vercel enterprise account to Context AI's consumer product and "granted 'Allow All' permissions." They noted that "Vercel's internal OAuth configurations appear to have allowed this action to grant these broad permissions in Vercel's enterprise Google Workspace."
Neither party is uniquely at fault here. Context AI's consumer product asked for broad access because it needed broad access to be useful. The Vercel employee granted it because there was nothing preventing them from doing so. Vercel's OAuth configuration allowed a consumer app connection to carry enterprise-level permissions.
This is not a story about a negligent employee or a poorly built product. It is a story about three reasonable decisions that combined into an unreasonable exposure. The employee wanted a useful tool. The vendor built a product that needed data access to function. The enterprise had not thought carefully about what consumer OAuth connections to work accounts actually mean in their environment.
That combination is replicated across thousands of organizations right now, across dozens of AI tools.
What enterprises need to address
The Vercel breach makes a few things concrete that have been theoretical for most security teams.
OAuth connections from consumer AI tools into enterprise accounts are a real, exploitable attack vector. They do not require the enterprise to be a direct customer of the tool. A single employee connecting a personal account to a vendor can expose enterprise infrastructure if that vendor is later compromised.
Most enterprises have no inventory of these connections. They do not know which AI tools their employees have connected to corporate accounts, what permissions were granted, or which of those tools have since been compromised. Building that visibility is not straightforward, but the absence of it means the exposure is also invisible.
Policy alone is insufficient. Telling employees not to connect unauthorized AI tools to their work accounts addresses the symptom but not the structural issue. Employees use AI tools because they are productive and useful. The answer is not to forbid usage but to have visibility into what is being used, to govern how those connections are made, and to detect when new third-party AI applications are granted access to enterprise infrastructure.
The Vercel incident also reinforces that sensitive environment variables deserve more careful handling than non-sensitive ones. Not because the non-sensitive variables caused the breach directly, but because they provided the stepping stone. Marking a variable as non-sensitive is an implicit claim that its compromise would not cause harm. In practice, environment variables contain enough context about infrastructure topology that even nominally non-sensitive ones can help an attacker navigate.
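One practical mitigation is to treat "non-sensitive" as a claim to verify rather than trust: scan environment variables for values that look like credentials regardless of how they were labeled. A rough illustrative heuristic, not an exhaustive secret-detection ruleset (the patterns and variable names here are assumptions for the sketch):

```python
import re

# Heuristic: flag environment variables whose name or value suggests a
# credential, even if the variable was not marked "sensitive".
NAME_HINTS = re.compile(r"(KEY|TOKEN|SECRET|PASSWORD|CREDENTIAL)", re.I)
# Value patterns: common key prefixes, PEM headers, or credentials
# embedded in connection URLs (user:pass@host).
VALUE_HINTS = re.compile(r"(sk-|ghp_|AKIA|-----BEGIN|://[^/\s:]+:[^@\s]+@)")

def flag_suspect_vars(env_vars):
    """Return names of variables that look like secrets by name or value."""
    suspects = []
    for name, value in env_vars.items():
        if NAME_HINTS.search(name) or VALUE_HINTS.search(value):
            suspects.append(name)
    return suspects

# Illustrative values only.
env_vars = {
    "DATABASE_URL": "postgres://user:pass@host/db",
    "ANALYTICS_WRITE_KEY": "AKIAIOSFODNN7EXAMPLE",
    "NODE_ENV": "production",
}
print(flag_suspect_vars(env_vars))
```

Anything this flags that is currently stored unencrypted is a candidate for marking sensitive and rotating.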
Where the AI tool risk meets enterprise governance
This is the category of risk that security teams consistently rank low because it looks like a user behavior problem rather than a security infrastructure problem. The Vercel breach is a useful corrective to that framing.
The question is not whether employees will use AI tools. They will. The question is whether the organization can see what tools are being used, which corporate accounts and data sources have been connected to those tools, and whether any of those third-party tools have been compromised.
SuperAlign was built specifically for this visibility gap. As AI tools multiply across organizations, the operational foundation of AI security becomes the ability to detect AI interactions across your network, including unauthorized and shadow AI usage; to monitor which third-party AI tools employees have connected to corporate systems; and to enforce policy on those connections before they become incident reports. SuperAlign Radar provides real-time detection of AI tool usage across enterprise environments, identifies unauthorized external AI applications connecting to internal systems, and gives security teams the signal they need to act before a vendor compromise becomes their breach.
The Vercel employee who connected Context AI to their Vercel enterprise account was not the problem. The absence of any system that could see that connection, flag the broad permissions granted, and prompt a review was the problem. That is a solvable problem. The question is whether it gets solved before or after the next breach like this one.
What to check now
If your organization uses Vercel or Context AI, the immediate remediation steps are:
1. Rotate keys and credentials for any Vercel environment variables not marked as sensitive.
2. Check your Google Workspace admin console for the Context AI OAuth application ID (110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com).
3. Check individual employee Google accounts at myaccount.google.com/connections for any Context AI connections.
4. Check for the Context AI Chrome Extension (omddlmnhcofjbnbflmjginpjjblphbgk) across managed devices.
More broadly: audit your Google Workspace for all third-party OAuth applications connected by employees, especially those granted broad permissions to Drive or email. Most organizations have dozens of these they are unaware of. The ones connected to AI tools deserve particular attention.
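That broader audit can start from the same token-grant export: flag any third-party client holding broad Drive or mail scopes, grouped by application. A sketch, with an illustrative and deliberately incomplete set of scopes treated as broad, and sample records standing in for real Admin SDK output:

```python
# Scopes that grant read access to an entire corpus of user data.
# Illustrative subset -- extend for your environment.
BROAD_SCOPES = {
    "https://www.googleapis.com/auth/drive",
    "https://www.googleapis.com/auth/drive.readonly",
    "https://mail.google.com/",
    "https://www.googleapis.com/auth/gmail.readonly",
}

def broad_grants(grants):
    """Group users by third-party client ID where any granted scope is broad."""
    flagged = {}
    for g in grants:
        if BROAD_SCOPES.intersection(g["scopes"]):
            flagged.setdefault(g["clientId"], []).append(g["userKey"])
    return flagged

# Illustrative sample data -- replace with your exported token grants.
grants = [
    {"userKey": "alice@example.com",
     "clientId": "ai-notes.apps.googleusercontent.com",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"userKey": "bob@example.com",
     "clientId": "scheduler.apps.googleusercontent.com",
     "scopes": ["https://www.googleapis.com/auth/calendar.readonly"]},
]

for client, users in broad_grants(grants).items():
    print(f"{client}: {len(users)} user(s) with broad access")
```

The output of an audit like this is the inventory most organizations are missing: which apps, connected by which employees, hold which permissions.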
