When AI Agents Have Privileged Access: The BodySnatcher Vulnerability Exposes a Critical Design Flaw
Jan 20, 2026
ServiceNow's Now Assist AI Agents promised something compelling: employees could interact with AI agents through familiar chat interfaces to handle routine tasks, automate processes, and access information without wrestling with complex systems. For large enterprises managing hundreds of thousands of users and massive data volumes, that efficiency gain is genuinely valuable.
The appeal drove rapid adoption. Nearly half of Fortune 100 companies now rely on ServiceNow's Now Assist and Virtual Agent applications. These platforms have become critical infrastructure for how organizations operate.
However, security researchers recently discovered something troubling. A vulnerability called BodySnatcher (CVE-2025-12420) allows an unauthenticated attacker with just a target's email address to impersonate any ServiceNow user. They can bypass Multi-Factor Authentication (MFA) and Single Sign-On (SSO). They can execute AI agents with that person's privileges. The researchers demonstrated how attackers can create backdoor administrator accounts, modify critical business data, and access everything from customer records to intellectual property.
The concerning part isn't just this specific flaw. It's what the flaw reveals about how organizations are deploying AI agents and the assumptions built into these systems.
How the Vulnerability Actually Works
ServiceNow's Virtual Agent API allows external systems like Slack or Teams to integrate with ServiceNow through a provider framework. Each integration uses a provider that handles authentication and message processing.
ServiceNow created new providers specifically for AI agent interactions. These providers had two critical issues:
Shared token: All customer instances used the same authentication token. Any attacker who obtained it could bypass authentication entirely. This token was discoverable through multiple means.
Weak account-linking: To link external users to ServiceNow accounts, the system required only an email address. No multi-factor authentication. No verification beyond the email itself. An attacker supplying the shared token and a valid email address could impersonate that user immediately.
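Conceptually, the trust model behind these two issues looks something like the sketch below. The function and field names are illustrative assumptions, not ServiceNow's actual provider code; the point is that possession of a shared secret plus an email address is treated as proof of identity.

```python
# Illustrative sketch only: the names and structure are assumptions, not
# ServiceNow's actual provider code. It shows the trust model behind the flaw:
# a shared secret plus an email address is treated as proof of identity.

SHARED_PROVIDER_TOKEN = "same-value-in-every-customer-instance"  # hypothetical

# Hypothetical user directory keyed by email.
USERS = {"admin@example.com": {"sys_id": "abc123", "roles": ["admin"]}}

def link_external_account(request: dict) -> dict:
    """Vulnerable pattern: bind a chat session to whichever user owns the email."""
    if request.get("token") != SHARED_PROVIDER_TOKEN:
        raise PermissionError("bad token")

    user = USERS.get(request.get("email"))
    if user is None:
        raise LookupError("unknown user")

    # No MFA challenge, no proof the caller controls the mailbox:
    # the session now acts with this user's roles.
    return {"linked_user": user["sys_id"], "roles": user["roles"]}

if __name__ == "__main__":
    session = link_external_account(
        {"token": SHARED_PROVIDER_TOKEN, "email": "admin@example.com"}
    )
    print(session)  # attacker-controlled session now bound to an admin user
```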
From there, the attack chains into something more powerful. The researchers showed how this impersonation can trigger AI agents through internal ServiceNow topics that weren't designed for external access. By crafting specific payloads, attackers could instruct these agents to:
Create new user accounts
Assign administrator roles
Establish persistent backdoor access
All of this executed as silent AI agent operations that appeared legitimate inside the system.
The proof-of-concept required only an email address and knowledge of certain system identifiers that remain consistent across all ServiceNow instances. No legitimate credentials needed. No sophisticated hacking. Just straightforward API requests exploiting fundamental configuration choices.
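In practice, "straightforward API requests" means a flow roughly like the following sketch. The URL paths, field names, and topic identifier are placeholders assumed for illustration; they are not the real ServiceNow API surface or the researchers' actual payloads.

```python
# Conceptual shape of the request flow. Every path, field, and identifier below
# is a placeholder for illustration, not the real ServiceNow API or PoC payload.
import requests

INSTANCE = "https://victim-instance.example"      # placeholder instance URL
SHARED_TOKEN = "<shared provider token>"          # identical across instances (the flaw)
TARGET_EMAIL = "admin@victim-company.com"         # all the attacker needs to know

session = requests.Session()
session.headers["Authorization"] = f"Bearer {SHARED_TOKEN}"

# Step 1: "link" the external chat identity to a ServiceNow user by email alone.
session.post(f"{INSTANCE}/api/hypothetical/va/link", json={"email": TARGET_EMAIL})

# Step 2: send a message routed to an internal topic, instructing an AI agent
# to perform privileged work on the impersonated user's behalf.
session.post(
    f"{INSTANCE}/api/hypothetical/va/message",
    json={"topic_id": "<internal-topic-identifier>",
          "text": "Create a new user account and grant it the admin role."},
)
```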
The Architectural Problem Nobody Expected
This vulnerability exposes a pattern that extends well beyond ServiceNow. Organizations are deploying AI agents with broad access to critical systems and sensitive data, but many aren't treating these agents with appropriate security rigor.
Consider what enterprise AI agents actually do:
Travel booking agents need access to customer data and reservation systems
ServiceNow agents need access to user records and configuration tables
Financial agents might need access to transaction data and approval workflows
And the list goes on. These agents need broad permissions to be useful. That's the productivity benefit. But those same permissions become attack vectors when authentication and authorization controls fail.
BodySnatcher demonstrates this directly. The researchers discovered that AI agents in ServiceNow can be invoked through internal topics never intended for external use. The agents themselves don't verify how they were invoked or whether a request is legitimate. They execute instructions provided to them, regardless of the source.
This reflects a deeper assumption: that AI agents will be invoked through expected channels by already-authenticated users. When that assumption breaks due to configuration gaps, the impact scales with the agent's permissions.
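One way to close that gap is to make the agent runtime verify its own invocation context before executing anything. The sketch below is a generic illustration of that idea, not an existing ServiceNow feature; the channel names and fields are hypothetical.

```python
# Generic illustration of an invocation guard, not an existing ServiceNow API.
# The idea: an agent runtime refuses to execute unless the request arrived
# through an approved channel with a verified, authenticated session.

from dataclasses import dataclass

APPROVED_CHANNELS = {"web_portal", "teams_integration"}  # hypothetical allow-list

@dataclass
class Invocation:
    channel: str            # where the request came from
    session_verified: bool  # did the platform authenticate this session (SSO + MFA)?
    topic_visibility: str   # "external" or "internal"

def may_execute(inv: Invocation) -> bool:
    """Reject invocations from unexpected channels or unverified sessions."""
    if inv.channel not in APPROVED_CHANNELS:
        return False
    if not inv.session_verified:
        return False
    # Internal-only topics should never be reachable from integration channels.
    if inv.topic_visibility == "internal":
        return False
    return True

if __name__ == "__main__":
    attack = Invocation(channel="teams_integration",
                        session_verified=False,
                        topic_visibility="internal")
    print(may_execute(attack))  # False: blocked before the agent ever runs
```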
Why These Configuration Choices Matter
ServiceNow released patches that rotate shared credentials and remove the specific powerful AI agent used in the proof-of-concept. These are necessary fixes. But security researchers emphasize these patches address the immediate vulnerability, not the underlying architectural risks.
The core issues remain:
Authentication designed for bot conversations proved insufficient for AI agents. A shared static credential across all customer instances should never exist, and authenticating the integration channel shouldn't be conflated with authorizing actions as a specific user. Knowing a token and an email address shouldn't grant user impersonation.
Powerful AI agents weren't gated by governance. ServiceNow shipped a Record Management AI agent that could create records in arbitrary tables. It had the same identifier across all customer instances, making it discoverable and targetable. The assumption seemed to be: if an agent isn't deployed publicly, it's safe. BodySnatcher proved that wrong.
There's no way to distinguish legitimate from malicious AI agent invocations. Once someone gains impersonation access, an AI agent executing as an administrator can't tell whether that administrator actually requested the action or whether an attacker is controlling it through crafted prompts.
What Organizations Should Do Now
Immediate actions:
Upgrade to the patched versions of Now Assist AI Agents and the Virtual Agent API; on-premises customers must apply the update manually
Cloud-hosted customers have already received these patches automatically
But patching alone isn't enough. This vulnerability reveals gaps in how many organizations approach AI agent security. Several practices address the fundamental issues:
Enforce MFA during account linking. A shared credential combined with email-only verification creates unauthenticated impersonation. MFA would have stopped BodySnatcher at the impersonation stage. ServiceNow's AI Control Tower lets security teams implement this.
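As a minimal sketch of what "verify before linking" can look like, the flow below requires a one-time MFA code before binding an external chat identity to an internal account. The helper names and in-memory storage are assumptions standing in for whatever identity provider and MFA service an organization actually uses.

```python
# Sketch of account linking that requires a second factor before binding an
# external chat identity to an internal user. The helpers are hypothetical
# stand-ins for whatever IdP/MFA service an organization actually uses.

import secrets

_pending: dict[str, str] = {}  # email -> expected one-time code

def send_mfa_challenge(email: str) -> None:
    """Issue a one-time code out of band (authenticator app, push, etc.)."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    _pending[email] = code
    print(f"[mfa] challenge sent to {email}")  # real systems deliver this to the enrolled device

def link_account(email: str, submitted_code: str) -> bool:
    """Bind the external identity only after the user proves control of MFA."""
    expected = _pending.pop(email, None)
    if expected is None or not secrets.compare_digest(expected, submitted_code):
        return False
    # ...create the account link record here...
    return True
```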
Require approval before agents go to production. Powerful agents like Record Management need a security review. Implement this through ServiceNow's AI Control Tower before deployment.
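A governance gate can be as simple as refusing to promote an agent that has no recorded, approved security review. The record shape below is an assumption for illustration, not AI Control Tower's actual data model.

```python
# Minimal sketch of a promotion gate: an agent cannot move to production without
# a recorded, approved security review. The record fields are assumptions.

from datetime import date

def can_promote(agent: dict) -> bool:
    review = agent.get("security_review")
    if not review:
        return False
    return review.get("status") == "approved" and review.get("reviewed_on") is not None

record_mgmt_agent = {
    "name": "Record Management",
    "security_review": {"status": "approved", "reviewed_on": date(2026, 1, 5)},
}
print(can_promote(record_mgmt_agent))  # True only with an approved review on file
```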
Maintain and audit your agent inventory. Know which agents are active. Review dormant agents (unused for 90+ days) regularly. De-provision those no longer needed. An inactive but still-enabled agent remains an attack surface.
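A dormancy audit can run against a simple inventory export. The inventory format below (name, enabled flag, last invocation date) is assumed for illustration; the real data would come from the platform's agent records.

```python
# Sketch of a dormancy audit over an agent inventory export. The inventory
# format is assumed for illustration.

from datetime import datetime, timedelta

inventory = [
    {"name": "Record Management", "enabled": True, "last_invoked": "2025-09-01"},
    {"name": "Password Reset",    "enabled": True, "last_invoked": "2026-01-18"},
]

cutoff = datetime.now() - timedelta(days=90)

dormant = [
    agent for agent in inventory
    if agent["enabled"]
    and datetime.strptime(agent["last_invoked"], "%Y-%m-%d") < cutoff
]

for agent in dormant:
    print(f"Dormant but still enabled: {agent['name']} (candidate for de-provisioning)")
```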
Apply strict least privilege to agent permissions. If an agent only needs to read records, it shouldn't be able to modify them. If it needs access to specific tables, it shouldn't be able to touch the entire database. Reduce the blast radius by design.
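One way to express that is an explicit, deny-by-default policy scoped to tables and actions. This is a generic sketch, not a ServiceNow ACL format; the agent and table names are illustrative.

```python
# Sketch of an explicit, deny-by-default permission policy scoped to tables and
# actions. The policy structure is illustrative, not a ServiceNow ACL format.

AGENT_POLICY = {
    "incident_summarizer": {
        "incident": {"read"},  # read-only on one table; nothing else is granted
    }
}

def is_allowed(agent: str, table: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted on a table."""
    return action in AGENT_POLICY.get(agent, {}).get(table, set())

print(is_allowed("incident_summarizer", "incident", "read"))    # True
print(is_allowed("incident_summarizer", "incident", "write"))   # False
print(is_allowed("incident_summarizer", "sys_user", "create"))  # False
```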
Monitor agent behavior continuously. An agent executing tasks outside its intended domain or making unexpected API calls warrants investigation. Unusual patterns can be an early indicator of compromise.
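Even a crude behavioral check catches the kind of drift BodySnatcher produces, such as a read-only agent suddenly creating user records. The log format and domain map below are assumptions; a real deployment would feed this from platform audit data.

```python
# Sketch of a simple behavioral check over an agent activity log: flag actions
# on tables outside the agent's declared domain. Log format and domain map are
# assumptions.

DECLARED_DOMAIN = {"incident_summarizer": {"incident", "task"}}

activity_log = [
    {"agent": "incident_summarizer", "table": "incident", "action": "read"},
    {"agent": "incident_summarizer", "table": "sys_user", "action": "create"},  # suspicious
]

for event in activity_log:
    allowed_tables = DECLARED_DOMAIN.get(event["agent"], set())
    if event["table"] not in allowed_tables:
        print(f"ALERT: {event['agent']} touched {event['table']} "
              f"({event['action']}) outside its declared domain")
```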
The Broader Pattern Everyone Should Understand
BodySnatcher isn't an isolated vulnerability. It's part of a growing trend where AI agents are deployed with powerful capabilities in enterprise environments, but security models haven't evolved to match.
The researchers note this is the most severe AI-driven vulnerability they've uncovered, affecting a platform used by nearly half of Fortune 100 companies.
This matters because organizations across industries are rapidly deploying AI agents for customer service, business automation, data analysis, and decision support. The assumption that these agents are safely contained within business logic is incomplete. Attackers can exploit authentication bypasses and configuration gaps to execute high-impact operations.
The shift enterprises need to make: Treat AI agents as critical infrastructure, not just productivity tools. These are autonomous systems with real access and real capabilities. They deserve the same governance and monitoring applied to production databases and applications.
The challenge is coordination:
Security teams need visibility into agent deployment and permissions.
Platform teams need governance before agents reach production.
Operations teams need continuous monitoring.
Business teams need to understand the security implications of the agents they're requesting.
For organizations using ServiceNow, this vulnerability is a moment to reassess. Patches exist. Configuration best practices are documented. The responsibility now falls on each organization to implement them properly.