
Why I Joined Enkrypt AI: Merritt Baer

Published on
September 29, 2025
4 min read


I choose to spend my life working in security because security defines the edges of how we interact: with each other, with companies, with governments. I measure the ROI of basically everything, and I will spend a lot of time on my job; in my case, a really significant amount. And so I ask myself, in Mary Oliver’s words: what will you do with your one wild and precious life?

The founders of Enkrypt AI met while doing math PhDs at Yale. I don’t bring that up because I’m dazzled by ivy, but because in AI security, where the horizon keeps moving, you want a team that doesn’t just chase relevance but generates it. Research is where the rubber meets the road in AI, and we are among those actively reimagining the field of AI security.

This takes practical forms. For example, our research has turned into redteaming, which turned into responsible disclosures, which turned into customers. In practice, that looked like us knocking on a company’s door and saying: “Hey, your model is vulnerable to surfacing CSAM (child sexual abuse material). Here’s our proof. Want help?”

Wendell Berry once wrote (about marriage, but still fitting): “You do not know the road; you have committed your life to a way.” I believe that smart companies embody philosophical commitments.

We’re also at an inflection point with data and interactions. For years, people called data “the new oil.” Cute metaphor, but I always hated it. What matters is how data gets consumed, contextualized, and secured, especially as AI systems are trained on it, act on it, and interact with us. The real security questions are no longer about locking down files in a folder; they’re about safeguarding the systems (hardware and software) that interpret and act on data. I expect more and more data to live in aggregated, non-human-readable forms, because AIs are interacting with each other and only convert back to natural language when they surface something to a human in the loop.

DLP (data loss prevention) isn’t the future, and no one will miss Microsoft Purview. What’s at stake isn’t just a miskeyed “name field”; it’s the integrity of how humans and machines work together.

The research basis of the company matters to me in how we serve customers. Enkrypt’s competitive advantage isn’t a static product; it’s the way we approach AI safety, security, and compliance. We redteam, we guardrail, we enforce policy. This means we can translate capabilities into broad offerings: we protect agentic capabilities (AI that actually takes actions) and LLM interactions (chatbots, Copilot, or your custom models), and we secure MCP (Model Context Protocol) servers. Importantly, we do it in an attestable way, so when your AI Governance Council hands you that 40-page AI governance doc, we can take it and enforce it, whether that means the EU AI Act’s provisions, your company’s 2026 policy, or the next AI regulation to arise in California or Vietnam. There will always be a “next” in AI, and we are already looking around the corner.
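To make “enforce policy in an attestable way” concrete, here is a minimal, hypothetical sketch in Python. It is not Enkrypt’s actual API; the names (PolicyRule, enforce, the rule ID) are invented for illustration, and a simple regex stands in for real detector models. The point is the shape: governance clauses become machine-checkable rules, and every allow-or-block decision carries a hash that an auditor can later check against the exact rules that produced it.

```python
# Hypothetical sketch of attestable policy-as-code enforcement over LLM
# traffic. All names are illustrative, not a real product API.
import hashlib
import json
import re
import time
from dataclasses import dataclass, field


@dataclass
class PolicyRule:
    """One enforceable clause distilled from a governance document."""
    rule_id: str          # illustrative: a clause number from the 40-page doc
    description: str
    blocked_pattern: str  # a regex standing in for a real detector model


@dataclass
class Verdict:
    allowed: bool
    violations: list[str] = field(default_factory=list)
    attestation: str = ""  # hash tying the decision to the rules applied


def enforce(rules: list[PolicyRule], llm_output: str) -> Verdict:
    """Check one LLM response against every rule and record the decision."""
    violations = [r.rule_id for r in rules
                  if re.search(r.blocked_pattern, llm_output, re.IGNORECASE)]
    # The attestation lets an auditor verify which rules produced this
    # verdict, rather than trusting the runtime after the fact.
    record = json.dumps({
        "rules": [r.rule_id for r in rules],
        "violations": violations,
        "timestamp": time.time(),
    }, sort_keys=True)
    return Verdict(
        allowed=not violations,
        violations=violations,
        attestation=hashlib.sha256(record.encode()).hexdigest(),
    )


# Invented example rule, loosely in the spirit of a credential-leak control:
rules = [PolicyRule("DOC-4.2", "No disclosure of API credentials",
                    r"api[_-]?key\s*[:=]")]
verdict = enforce(rules, "Sure! Set API_KEY: sk-12345 in your config.")
print(verdict.allowed, verdict.violations, verdict.attestation[:12])
```

The same shape generalizes: swap the regex for a classifier, and swap the single rule for the distilled clauses of whatever regulation comes next.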

Most CISOs I talk to already know their entities and employees are interacting with AI. They’ve been told some version of, “go do something about it.” Enkrypt helps us as CISOs move from aspiration to enforcement of safety and security commitments, which also means we can unlock new AI use cases safely. That’s the kind of ROI I want to spend my time on.
