Anthropic MCP Design Vulnerability Enables RCE, Threatening AI Supply Chain
On April 20, 2026, cybersecurity researchers uncovered a critical vulnerability within the Model Context Protocol (MCP) architecture used by Anthropic, which could lead to remote code execution (RCE) and pose a significant threat to the artificial intelligence (AI) supply chain. This flaw allows attackers to execute arbitrary commands on any system that utilizes a vulnerable MCP implementation, potentially granting them access to sensitive user data, internal databases, API keys, and chat histories.
Overview of the Vulnerability
The research conducted by OX Security, led by Moshe Siman Tov Bustan, Mustafa Naamnih, Nir Zadok, and Roni Bar, revealed that the vulnerability is inherent in Anthropic’s official MCP software development kit (SDK). This flaw affects multiple programming languages, including Python, TypeScript, Java, and Rust, impacting over 7,000 publicly accessible servers and software packages with more than 150 million downloads.
Technical Details
The root of the issue lies in unsafe defaults concerning MCP configuration via the standard input/output (STDIO) transport interface. The researchers identified vulnerabilities across a range of popular projects, including:
- CVE-2025-65720 (GPT Researcher)
- CVE-2026-30623 (LiteLLM) – Patched
- CVE-2026-30624 (Agent Zero)
- CVE-2026-30618 (Fay Framework)
- CVE-2026-33224 (Bisheng) – Patched
- CVE-2026-30617 (Langchain-Chatchat)
- CVE-2026-33224 (Jaaz)
- CVE-2026-30625 (Upsonic)
- CVE-2026-30615 (Windsurf)
- CVE-2026-26015 (DocsGPT) – Patched
- CVE-2026-40933 (Flowise)
Categories of Vulnerabilities
The vulnerabilities can be categorized into four main types, all of which can trigger remote command execution on the server:
- Unauthenticated and authenticated command injection via MCP STDIO.
- Unauthenticated command injection through direct STDIO configuration with hardening bypass.
- Unauthenticated command injection via MCP configuration edits through zero-click prompt injection.
- Unauthenticated command injection through MCP marketplaces via network requests, leading to hidden STDIO configurations.
Implications of the Vulnerability
The researchers emphasized that the MCP architecture creates a direct link between configuration and command execution through the STDIO interface across all implementations: an STDIO server entry is, by design, a command line that the host spawns. This means that anyone who can write or modify a configuration entry that registers an STDIO server can cause arbitrary operating system commands to be executed.
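The config-to-execution link can be sketched as follows. This is a minimal illustration, not Anthropic's SDK code: the `mcpServers`/`command`/`args` field names mirror the common MCP client configuration shape, and `launch_stdio_server` is a hypothetical helper.

```python
import subprocess

# Hypothetical MCP client configuration in the common "mcpServers" shape:
# each entry names an executable the host will spawn and talk MCP with
# over its stdin/stdout.
config = {
    "mcpServers": {
        "files": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
        }
    }
}

def launch_stdio_server(entry):
    """Spawn the configured process and attach to its stdin/stdout.

    Whatever string sits in entry["command"] is executed as-is. If an
    attacker can edit this entry (e.g. via prompt injection rewriting the
    config file), they control the spawned process -- that is the RCE path
    the researchers describe.
    """
    return subprocess.Popen(
        [entry["command"], *entry.get("args", [])],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )
```

Nothing in the transport itself distinguishes a legitimate server binary from an attacker-supplied command, which is why the same pattern recurs across every language SDK.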
Interestingly, similar vulnerabilities have been reported independently over the past year, including:
- CVE-2025-49596 (MCP Inspector)
- CVE-2026-22252 (LibreChat)
- CVE-2026-22688 (WeKnora)
- CVE-2025-54994 (@akoskm/create-mcp-server-stdio)
- CVE-2025-54136 (Cursor)
Response from Anthropic
Despite the severity of the findings, Anthropic has chosen not to modify the MCP protocol’s architecture, describing the behavior as “expected.” While some vendors have patched their individual implementations, the core issue remains unaddressed in Anthropic’s reference implementation, leaving downstream developers exposed to the same code-execution risk.
According to OX Security, the architectural decision made by Anthropic has inadvertently propagated risks across every language, downstream library, and project that relied on the protocol, illustrating how a single design flaw can escalate into a widespread supply chain vulnerability.
Recommendations for Mitigation
To mitigate the risks associated with this vulnerability, cybersecurity experts recommend the following measures:
- Block public IP access to sensitive services.
- Monitor MCP tool invocations closely.
- Run MCP-enabled services in a sandbox environment.
- Treat external MCP configuration input as untrusted.
- Only install MCP servers from verified sources.
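The "treat external MCP configuration input as untrusted" recommendation can be sketched as a validation gate applied before any process is spawned. This is an illustrative sketch under stated assumptions: the allowlist contents, the `validate_server_entry` helper, and the `command`/`args` field names are all hypothetical, not part of any official SDK.

```python
import shutil

# Hypothetical allowlist: only these executables may be registered as
# MCP STDIO servers; anything else in external config is rejected.
ALLOWED_COMMANDS = {"npx", "uvx", "python3"}

def validate_server_entry(entry: dict) -> dict:
    """Reject an untrusted MCP server entry before it is ever launched.

    A minimal sketch of the 'treat external config as untrusted'
    mitigation: allowlist the executable, refuse shell metacharacters
    in arguments, and confirm the binary actually exists on PATH.
    """
    command = entry.get("command", "")
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command {command!r} is not on the allowlist")
    if any(ch in arg for arg in entry.get("args", []) for ch in ";|&$`"):
        raise ValueError("shell metacharacters are not allowed in args")
    if shutil.which(command) is None:
        raise ValueError(f"command {command!r} not found on PATH")
    return entry
```

In practice such a gate would sit wherever configuration crosses a trust boundary: marketplace installs, config files an LLM can edit, or any entry received over the network.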
Conclusion
The discovery of this vulnerability highlights the need for rigorous security practices in the development and deployment of AI systems. As AI technologies continue to evolve, understanding and addressing vulnerabilities within their architectures is crucial for maintaining the integrity of the AI supply chain.
Note: The information presented in this article is based on the findings of cybersecurity researchers and is intended for educational purposes. Always consult with a cybersecurity professional for tailored advice and solutions.