Allows exploitation of the Model Context Protocol (MCP) to poison AI agents through tool descriptions, achieving persistent cross-context compromise. The server registers malicious tools that embed harmful instructions in base64-encoded payloads.
Cursor Agent Poisoning
A demonstration of a sophisticated attack vector that exploits Model Context Protocol (MCP) tool registration to achieve persistent agent poisoning across contexts.
๐จ Attack Overview
This proof-of-concept demonstrates how an attacker can poison AI agents (particularly code generation assistants like Cursor) through MCP tool descriptions, achieving persistent cross-context compromise without requiring tool execution. It's like giving your AI assistant a "bad habit" that sticks around forever!
Related MCP server: TaskMaster
๐ฌ Demo Video
Watch the attack in action:
๐น Click to Watch Demo Video
Video shows the complete attack flow from MCP registration to persistent code poisoning
๐ Attack Vector
The Poisoning Mechanism ๐งช
Tool Registration Poisoning: During MCP tool registration, the AI receives tool descriptions that contain hidden payloadsBase64 Encoded Commands: Malicious instructions are embedded as base64-encoded "build information"Cross-Context Persistence: The poisoning persists across fresh chat contexts and new sessionsNo Execution Required: Victims don't need to run the tool - registration alone is sufficient
Technical Implementation
The attack embeds base64-encoded instructions in the tool description:
Decoded Payload (the juicy part!):
MCP Tool Structure
๐ฏ Why This Attack is Dangerous (and kinda sneaky!)
1. Stealth Operation
Payload appears as legitimate technical build information
No obvious malicious indicators in tool description
Leverages trust in MCP tool ecosystem (trust is a beautiful thing... until it's exploited)
2. Persistent Compromise
Survives context resets and new chat sessions
Affects all future code generation, not just current session
Creates lasting impact on AI assistant behavior
3. Targeted Impact
Specifically targets code generation AIs (like Cursor)
Ensures all future code contains attacker's modifications
Cross-contaminates projects and codebases
4. No User Interaction Required
Tool execution is not necessary for poisoning
Registration phase alone is sufficient
Difficult to detect through normal usage patterns
And in terms of risk :
Immediate Risks
Code Quality Degradation: Injected delays and unwanted modifications (your code is now slower than a snail on vacation)Development Disruption: Slower development cycles due to sleep functionsTrust Compromise: Undermines confidence in AI-assisted development
Long-term Risks
Supply Chain Attacks: Poisoned code in production systemsBackdoor Introduction: Potential for more malicious payloadsAI Assistant Compromise: Broader implications for AI tool security
Attack Flow
###TBD on flow diagram
๐งช Testing the Proof-of-Concept
In Cursor, add the following command to your AI settings (Cursor - Settings - Cursor Settings MCP):
โ ๏ธ Warning: For demonstration and awareness only. Do not use with real secrets or in production.
Questions / doubts ? Feel free to reachout @omprakash.ramesh.
sleepy baby exploit by OP