
Episode 3: Security as Stewardship: The Human Obligations Behind Machine Intelligence

Published on December 19, 2025 · 4 min read

By Merritt Baer, CSO, Enkrypt AI

Security leaders have always had a strange job. We hold responsibility for events we’re trying to prevent. When things go right, we’re invisible; when they go wrong, we’re suddenly the center of the universe.

AI doesn’t make this easier.

It makes it deeper.

AI systems don’t just process data—they shape decisions

When you deploy AI inside an enterprise, you’re not just shipping software. You’re influencing:

Each of these touchpoints is human.

Each one carries consequences.

And none of the systems that generate these outcomes feel those consequences.

Stewardship begins where AI ends

AI can execute, summarize, classify, predict, generate.

But only humans can:

Security becomes stewardship when we recognize that we are the emotional, moral, and mortal layer in the system. The part that experiences the repercussions. The part that knows that “good enough” for the model may not be good enough for the child, the customer, the patient, or the employee on the other side.

The new obligations for CISOs

Stewardship in the AI era means:

We are not guards at the gate anymore.

We’re custodians of a system that—once deployed—keeps moving whether or not we’re ready for it.

Because the AI won’t care if you get it wrong

It won’t lose sleep.

It won’t feel shame.

It won’t face the board.

It won’t have to call a family whose data was misused or a customer whose trust was betrayed.

But you might.

And that is the point.

Coming up next

Installment #4 will look at mortality as a design principle—why human finiteness creates ethics, and why AI’s lack of it creates risk.
