
Enkrypt AI Named Most Innovative Startup at AWS re:Invent 2025, Leading the Future of AI Agent Security

Published on
December 11, 2025
4 min read

I had one of those “frame the screenshot” moments at AWS re:Invent 2025.

Sitting in the keynote, I watched Matt Garman, CEO of AWS, call out Enkrypt AI on the main stage as one of the “Most Innovative Startups” shaping the future of AI.

Not just “promising.”

Not just “interesting.”

Most Innovative.

That hit home in a way I didn’t expect.

For years, our team has been building around a problem that barely had language: how do enterprises secure AI agents—not just LLMs or models—in real production environments? Long before terms like agent security, agent governance, or MCP security entered the vocabulary, we were mapping attack surfaces and risks for organizations still trying to get their first copilots online.

At AWS re:Invent this year, it felt like the market finally said, “Yes—we see the same future.”

Why This Moment Mattered

When you’re building ahead of the curve, there’s a lot of silence: a constant, quiet grind with little external validation. You talk to early customers, you ship features, you run experiments, you throw away more ideas than you keep.

So hearing AWS recognize Enkrypt AI from the keynote stage wasn’t just a “Hey, look, it’s us!” moment. For our team, it means a lot. For our customers and partners, it’s further validation that AI governance and agent oversight are essential for scaling real-world AI.

The Conversations Behind the Scenes

As always, the real insights came not from the stage but from the hallways, booths, and late-night sessions.

And in every conversation, one theme kept surfacing:

You can’t safely scale AI if you can’t see, understand, and govern what your AI agents are doing.

MCP, Agentic AI, and What We’re Focused on Now

This year at AWS re:Invent, Merritt Baer and I spent a lot of time with folks who are either just starting with agents or already running them in production.

We walked through real-world patterns for securing and governing agents in production.

This is exactly the problem space Enkrypt AI is built for:

Helping enterprises move fast with agents, without flying blind on risk.

And Most Importantly, Thank You

To everyone who stopped by, challenged our thinking, shared feedback, or explored partnerships in Vegas: thank you.

To the AWS team and ecosystem, thank you for putting a spotlight on a problem we believe will define the next decade of AI.

And to the Enkrypt AI team: I’m profoundly proud of what we’ve built, the standards you hold, and the category you are helping define, all while showing up each and every day excited to innovate.

If this is where we are now, I can’t wait for what’s coming next.

Onward. 🚀

Meet the Writer
Sahil Agarwal