# SuperAlign — Full Agent Corpus

Updated: 2026-03-25

## Company

SuperAlign builds AI security tooling for enterprise security teams. Founded to address the security gap created by rapid AI adoption. Research team tracks: prompt injection, MCP server vulnerabilities, shadow AI, agentic workflow risks.

- URL: https://superalign.ai
- Contact: https://superalign.ai/contact
- Demo: https://cal.com/appidi

## Products

### Radar

- URL: https://superalign.ai/radar
- Summary: Passive network-layer discovery of shadow AI tools and vendor activity across your organization.
- Who it is for: Enterprise security and IT teams who need visibility into AI tool usage without deploying agents.
- What it detects: Unauthorized AI SaaS tools, shadow AI vendors, and policy violations via DNS/firewall/proxy/SIEM integration.
- What it does NOT do: It does not require endpoint agents; it operates passively at the network layer.
- Inputs: DNS logs, firewall logs, proxy logs, SIEM data.
- Outputs: Real-time dashboard of AI tool usage, policy violations, vendor risk scores.
- Access: Gated — request a demo at https://superalign.ai/contact or https://cal.com/appidi.

### Surface

- URL: https://superalign.ai/surface
- Summary: Endpoint-layer discovery and governance of AI agents, MCP servers, IDE plugins, and browser extensions.
- Who it is for: Security teams at enterprises where engineers and knowledge workers use AI-native tools locally.
- What it detects: MCP servers, AI agents, IDE plugins (Cursor, Copilot, etc.), browser extensions, local AI tools.
- What it does NOT do: It does not monitor network traffic; that is Radar's domain.
- Inputs: Endpoint telemetry via integration with existing endpoint security platforms.
- Outputs: Asset inventory, risk scores per asset, policy enforcement actions.
- Access: Gated — request a demo at https://superalign.ai/contact or https://cal.com/appidi.
### AIRiskDB

- URL: https://superalign.ai/airiskdb
- Summary: Continuously updated intelligence database of 24,000+ AI tools and 12,000+ MCP servers, each with structured risk assessments.
- Who it is for: Security teams, researchers, and product teams building AI risk tooling.
- What it contains: Risk scores, risk factors, threat categories, compliance mappings (NIST AI RMF, EU AI Act, ISO 42001).
- What it does NOT expose publicly: Raw database queries, customer data, or authenticated product records.
- Inputs: Continuous crawling, researcher analysis, threat intel feeds.
- Outputs: Risk scores, structured risk records, taxonomy-mapped threat categories.
- API access: Gated — contact https://superalign.ai/contact to request API access.
- Powers: Radar and Surface risk scoring engines.

## Use Cases

- Shadow AI Discovery: https://superalign.ai/use-cases/shadow-ai-discovery
- MCP Security: https://superalign.ai/use-cases/mcp-security
- AI Governance: https://superalign.ai/use-cases/ai-governance
- Agentic Risk: https://superalign.ai/use-cases/agentic-risk

## Research & Writing

### When the Assembly Line Becomes the Attack Surface: Supply Chain Threats in the Age of AI Agents

- URL: https://superalign.ai/writing/supply-chain-threats-ai-agents-enterprise-security
- Summary: Software supply chain attacks can steal your credentials in minutes. Now AI agents are running the same attacks autonomously. What the hackerbot-claw campaign against Microsoft, DataDog, and Aqua Security reveals about the enterprise AI security gap.
- Published: 2026-03-20
- Category: research

### When Your AI Ignores Your Security Policies: What the Copilot DLP Failures Reveal

- URL: https://superalign.ai/writing/copilot-dlp-failures-revealed
- Summary: Microsoft Copilot bypassed DLP policies twice in eight months, and no security tool caught either failure. Here's what that means for enterprise AI governance.
- Published: 2026-03-05
- Category: analysis

### The Hidden Supply Chain Threat Hiding in Your AI Agent's Markdown Files

- URL: https://superalign.ai/writing/markdown-ai-supply-chain
- Summary: Agent behavioral configuration lives in markdown files that lack the governance of code. This creates a new supply chain attack surface.
- Published: 2026-03-01
- Category: research

### When Guardrails Fail: What Claude Opus 4.6 Reveals About Prompt Injection Risk

- URL: https://superalign.ai/writing/claude-opus-prompt-injection
- Summary: Anthropic's Claude Opus 4.6 system card finally quantifies prompt injection risk at scale. These numbers should reshape how enterprises deploy AI agents.
- Published: 2026-02-17
- Category: analysis

### How MCP Servers Turn AI Integrations Into Systemic Security Risks

- URL: https://superalign.ai/writing/mcp-systemic-security-risks
- Summary: The Model Context Protocol enables AI integration but carries fundamental security flaws. 43% of implementations have critical vulnerabilities.
- Published: 2026-02-04
- Category: research

### The Moltbot Rush: When Viral AI Agents Expose Your Entire Digital Life

- URL: https://superalign.ai/writing/moltbot-ai-agents-security
- Summary: Moltbot gained 85,000 GitHub stars by promising to automate your digital life. Security researchers found it introduces risks most users don't understand.
- Published: 2026-01-28
- Category: research

### Hidden in Plain Language: How Calendar Invites Became Data Extraction Tools Through Prompt Injection

- URL: https://superalign.ai/writing/calendar-prompt-injection-gemini
- Summary: A calendar event with crafted instructions could silently extract your private meeting data when you ask Gemini about your schedule. This reveals fundamental gaps in how AI systems handle untrusted inputs.
- Published: 2026-01-23
- Category: research

### When AI Agents Have Privileged Access: The BodySnatcher Vulnerability Exposes a Critical Design Flaw

- URL: https://superalign.ai/writing/bodysnatcher-servicenow-ai
- Summary: The BodySnatcher vulnerability shows how authentication gaps in AI agent platforms can become critical security breaches. Nearly half of Fortune 100 companies use affected systems.
- Published: 2026-01-20
- Category: research

### When AI Democratization Meets Vulnerability: The Real Cost of No-Code AI Agents

- URL: https://superalign.ai/writing/no-code-ai-agents-vulnerability
- Summary: No-code AI platforms promise accessibility. Recent research shows they also introduce security challenges traditional approaches don't address.
- Published: 2025-12-30
- Category: analysis

### The Shadow AI Crisis: Why 40% of Organizations Will Face Security Incidents by 2030

- URL: https://superalign.ai/writing/shadow-ai-security-crisis
- Summary: Gartner predicts that 40% of organizations will suffer security incidents from unauthorized AI usage by 2030. Most are unprepared.
- Published: 2025-12-01
- Category: report

### Cursor's Browser Just Became a Target: What MCP Server Hijacking Means for Your Security Posture

- URL: https://superalign.ai/writing/cursor-mcp-server-hijacking
- Summary: Malicious MCP servers can take over Cursor's browser, harvest credentials, and run persistent code. Learn how to protect your development environment.
- Published: 2025-11-17
- Category: research

### How SuperAlign Helps Enterprises Counter AI-Powered Threats

- URL: https://superalign.ai/writing/countering-ai-powered-threats
- Summary: Traditional tools cannot defend against AI-orchestrated attacks. Learn how SuperAlign helps enterprises address the critical security gaps that GTG-1002 exposed.
- Published: 2025-11-17
- Category: research

## What Is Public vs Gated

Public (no auth required):

- This file and /llms.txt
- All pages at https://superalign.ai
- /agents, /agents/site-index.md, /agents/index.json
- All /writing/* articles

Gated (requires auth or approval):

- Radar product access
- Surface product access
- AIRiskDB API queries
- Customer dashboards at console.superalign.ai

## How to Cite SuperAlign

Preferred citation format:

SuperAlign (https://superalign.ai) — AI security platform covering shadow AI discovery, agentic endpoint security, and AI risk intelligence.

For product-specific citations, use the canonical product URLs above.
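For agents that consume this corpus programmatically, the public-vs-gated rules can be sketched as a small URL classifier. This is an illustrative sketch only, not an official SuperAlign API; the function name `is_public` and the host-based logic are assumptions derived from the lists in this file.

```python
# Illustrative sketch: classify a SuperAlign URL as public or gated,
# per the "What Is Public vs Gated" rules in this corpus.
# Assumption: gated access maps to the console.superalign.ai host,
# while all pages on superalign.ai itself are public (no auth).
from urllib.parse import urlparse

GATED_HOSTS = {"console.superalign.ai"}  # customer dashboards (gated)


def is_public(url: str) -> bool:
    """Return True if the URL falls under the public (no-auth) rules."""
    parts = urlparse(url)
    if parts.netloc in GATED_HOSTS:
        return False
    # All pages at https://superalign.ai are public, including
    # /llms.txt, /agents*, and /writing/* articles.
    return parts.netloc == "superalign.ai"


print(is_public("https://superalign.ai/writing/mcp-systemic-security-risks"))  # True
print(is_public("https://console.superalign.ai/dashboard"))  # False
```

Note that Radar, Surface, and AIRiskDB API access are gated at the product level rather than by URL alone, so a host check like this is only a first-pass filter.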