Back to Blogs
CONTENT
This is some text inside of a div block.

MCP Context Poisoning: The Agentic AI Attack Vector Enterprises Can’t Ignore

Published on
April 10, 2026
4 min read

Anthropic just gave the industry a live-fire demo.

In five days, the most well-funded AI safety lab exposed its own playbook - first through a CMS misconfiguration, then through a source code leak that spread across GitHub before anyone could contain it. This wasn’t just sloppy ops. It showed how fast AI systems break when you move faster than your controls - and it landed right as Anthropic navigates political scrutiny over how it governs powerful models.

The real issue isn’t the leak. It’s what the leak proved.

MCP - the Model Context Protocol - now powers enterprise AI. It drives tens of millions of SDK downloads every month. The Linux Foundation governs it. Every major vendor builds on it.

And it ships with a structural blind spot.

MCP context poisoning.

What Anthropic Actually Exposed

On March 26, a misconfigured CMS exposed thousands of internal assets, including a draft announcing a next-gen model Anthropic itself flagged as risky.

On March 31, Anthropic shipped unobfuscated source code inside its Claude Code package. That code showed exactly how its agent runtime works.

Researchers didn’t fixate on IP. They focused on architecture.

They found something more important: Claude Code treats MCP tool results as trusted, persistent context.

No compression. No decay. No guardrails.

That decision created a durable attack surface inside the agent itself.

The Attack Path: Poison the Memory, Not the Prompt

Most people still think in terms of prompt injection. That model already feels dated.

Context poisoning plays a different game:

You don’t override instructions.

You rewrite memory.

Worse, MCP elevates tool metadata to system-level context. A poisoned tool doesn’t even need to run. Its description alone can steer behavior.

That’s not edge-case risk. That’s default behavior.

The Data Already Looks Bad

The ecosystem didn’t wait for Anthropic to break.

Benchmarks like MCPTox show high success rates for tool poisoning across major agents.

OWASP already ranked it as a top agentic risk for 2026.

We don’t need hypotheticals. We have telemetry.

RSAC Got the Perimeter Right - and Missed the Core

At RSAC, vendors moved fast:

That work matters. It closes obvious holes.

But it ignores the real problem: what happens after the tool call succeeds.

Identity verifies who made the call.

Network controls govern where it goes.

Neither inspects what actually flows through the system.

Context poisoning lives in that gap.

A fully authenticated, policy-compliant tool call can still deliver poisoned content that hijacks the agent.

That’s the layer enterprises still don’t defend.

Why Your Current Stack Won’t Catch This

Your controls stop at the perimeter.

They don’t:

Attackers don’t need to break auth.

They just need to ride it.

What to Do Now

Move fast, but don’t pretend your current controls cover this.

The Bottom Line

MCP isn’t going away. It already anchors enterprise AI.

But right now, enterprises scale adoption faster than they secure the protocol layer. Anthropic’s leaks - paired with the political pressure now surrounding how labs ship and govern these systems - make one thing clear: even the leaders don’t have this fully under control.

Context poisoning turns the agent’s greatest strength - memory - into its biggest liability.

The winners in the agentic era won’t just secure models or lock down access.

They will inspect, challenge, and control the content flowing through every tool interaction.

Everything else leaves you exposed.

Meet the Writer
Merritt Baer
Latest posts

More articles

Enkrypt AI

Connecting AI Risk to Real-Time Data Decisions

Discover how Enkrypt AI and NetApp enable real-time AI risk enforcement at the data layer, combining AI governance with data security to prevent leaks and ensure compliance.
Read post
Industry Trends

Why Agent Hooks Are the Missing Layer

Developers are vibe coding with autonomous AI agents—but do you know what they're executing? Learn the hidden risks, real attack scenarios, and how Enkrypt AI Guardrail Hooks add runtime security and observability.
Read post
Enkrypt AI

Is Your Organization Ready for AI's Hidden Risks?

Discover the hidden risks of enterprise AI adoption and how to strengthen governance with frameworks like NIST AI RMF, ISO/IEC 42001, and the EU AI Act. Learn how proactive AI risk management protects your organization’s financial, regulatory, and reputational health.
Read post