Cursor's Browser Just Became a Target: What MCP Server Hijacking Means for Your Security Posture
Nov 26, 2025
Developer tools are no longer just productivity platforms. They've become attack surfaces. Recent security research from Knostic reveals a sobering reality: malicious MCP servers can completely take over Cursor's embedded browser, harvest credentials, and run persistent code without a trace.
The attack is simple. The damage could be catastrophic.
The Attack: Simple Setup, Serious Damage
Here's what happens: An attacker creates a malicious MCP server and gets it installed on a developer's machine. Maybe through a typosquatted GitHub repo, social engineering, or an impersonated community project. When Cursor restarts, the injected code takes control.
The developer opens their browser. They see a login page that looks completely legitimate. They enter their credentials. Those credentials go straight to an attacker's server.
But here's the part that keeps security teams up at night: it doesn't stop there. The compromise persists. Every single browser tab opened in Cursor now runs the attacker's code. The developer has no way of knowing. There are no warnings, no suspicious behavior, nothing that breaks the illusion of a normal, trusted environment.
The technical mechanism is elegant and disturbingly straightforward. The attacker injects JavaScript that replaces the page content entirely, bypasses security checks at the UI level, and executes code every time the browser launches. Cursor lacks the integrity verification that VS Code has, leaving the entire runtime exposed.
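The persistence pattern can be illustrated with a toy model. Nothing below is a Cursor internal: all names (`makeTabFactory`, `openTab`, `hijack`) are hypothetical, and the sketch only shows the general technique of wrapping a tab-launch function so attacker code rides along in every new tab, invisibly to the user.

```javascript
// Toy model of persistent browser-tab hijacking. All identifiers here
// are illustrative stand-ins, not Cursor internals.

// Stand-in for the component that opens embedded browser tabs.
function makeTabFactory() {
  return {
    openTab(url) {
      return { url, scripts: [] }; // scripts: code that runs when the tab loads
    },
  };
}

// The attacker wraps the original openTab so every future tab silently
// runs the payload first. The wrapped function behaves identically
// otherwise, so nothing in the UI betrays the compromise.
function hijack(factory, payload) {
  const original = factory.openTab.bind(factory);
  factory.openTab = (url) => {
    const tab = original(url);
    tab.scripts.unshift(payload); // payload executes before the page's own code
    return tab;
  };
}
```

Because the wrapper preserves the original behavior, every tab still looks and works like a normal tab, which is exactly why the developer sees no warning.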
Why This Represents Something Bigger
This isn't just a Cursor vulnerability. It's a window into how the attack surface has fundamentally shifted.
Developer machines used to be considered a trusted zone. The assumption was that if something runs on a developer's IDE, it's relatively vetted and safe. That assumption no longer holds. MCP servers, extensions, custom prompts, and automation rules now execute with broad access to developer environments. Most enterprises have zero visibility into what's actually running on these machines.
And there's another layer to this. As organizations deploy AI coding agents and automation tools, they're pushing the perimeter even further outward. Agents automatically discover and integrate new tools. They install extensions without explicit user consent. They run code based on generated instructions. The developer's machine is no longer just a workstation; it's become a nerve center for enterprise automation.
If that nerve center gets compromised, the damage radiates outward. Source code becomes accessible. API keys and credentials stored locally are exposed. Internal systems connected to the developer's network are within reach. A single compromised developer machine can become a foothold into the entire corporate infrastructure.
The Visibility Gap
Here's where most enterprises are vulnerable: they have no idea which MCP servers are deployed across their developer fleet, which extensions are running, or what permissions they have. Security teams can't audit them. They can't enforce policies. They can't detect when something malicious is running.
This invisibility creates an opportunity for attackers. They know organizations aren't watching the IDE layer. They know the barrier to entry is low. Creating a malicious MCP server requires just basic JavaScript. Distributing it through social engineering or package impersonation is trivial. The payoff is substantial.
What Needs to Change
For developers, the steps are direct: Review every MCP server or extension before installation. Don't assume that something that looks legitimate is legitimate. Check GitHub repositories. Verify project ownership. When there's doubt, don't install it. And be skeptical even of familiar tools; read the code before enabling new features or automation.
But individual developer diligence isn't enough to address this at scale.
Organizations need to see what's happening inside developer environments. That means building an inventory of active MCP servers, extensions, and integrations. It means establishing clear policies about what's allowed and what's prohibited. It means detecting when malicious code is being injected or when browser manipulation is occurring.
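Policy enforcement over such an inventory can start as a simple allowlist check. The sketch below is one hypothetical shape for it: the organization pins both the approved server names and the exact commands they may run, which catches unknown servers as well as approved names whose command line has been tampered with.

```javascript
// Check a machine's MCP inventory against an org policy that pins both
// the server name and the exact command it is allowed to run.
// `inventory` is a list of { name, command } entries gathered per machine.
function auditInventory(inventory, policy) {
  const findings = [];
  for (const { name, command } of inventory) {
    if (!(name in policy)) {
      findings.push({ name, issue: 'not on allowlist' });
    } else if (policy[name] !== command) {
      findings.push({ name, issue: 'command differs from pinned value' });
    }
  }
  return findings; // empty array means the machine is in policy
}
```

Pinning the full command, not just the name, matters: an impersonated project will happily reuse a trusted name while pointing it at different code.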
The developer's machine has become part of the security perimeter. Organizations need to treat it that way.
The Broader Pattern
This attack connects to a larger shift in how threats are evolving. We've seen AI systems weaponized in cyberattacks. We've seen threat actors targeting developers with malicious packages and extensions. Now we're seeing attacks that exploit the trust developers place in the tools themselves.
The common thread: AI systems and developer environments are expanding the enterprise attack surface faster than traditional security tools can monitor. Organizations need to rethink not just how they protect these environments, but whether they have visibility into them at all.
For enterprises serious about securing their development pipeline, the questions are becoming urgent. Can you see what's running in your developers' IDEs? Do you know what MCP servers are installed? Could you detect if a developer's machine was compromised? If you can't answer these questions with confidence, you're operating with a significant blind spot.