Big Ideas

Episode 3: Security as Stewardship: The Human Obligations Behind Machine Intelligence

Published on December 19, 2025 · 4 min read
By Merritt Baer, CSO, Enkrypt AI

Security leaders have always had a strange job. We hold responsibility for events we’re trying to prevent. When things go right, we’re invisible; when they go wrong, we’re suddenly the center of the universe.

AI doesn’t make this easier.

It makes it deeper.

AI systems don’t just process data—they shape decisions

When you deploy AI inside an enterprise, you’re not just shipping software. You’re influencing the decisions people make and the outcomes they live with.

Each of these touchpoints is human.

Each one carries consequences.

And none of the systems that generate these outcomes feel those consequences.

Stewardship begins where AI ends

AI can execute, summarize, classify, predict, generate.

But only humans can answer for what those outputs set in motion.

Security becomes stewardship when we recognize that we are the emotional, moral, and mortal layer in the system. The part that experiences the repercussions. The part that knows that “good enough” for the model may not be good enough for the child, the customer, the patient, or the employee on the other side.

The new obligations for CISOs

Stewardship in the AI era means accepting a new set of obligations.

We are not guards at the gate anymore.

We’re custodians of a system that—once deployed—keeps moving whether or not we’re ready for it.

Because the AI won’t care if you get it wrong

It won’t lose sleep.

It won’t feel shame.

It won’t face the board.

It won’t have to call a family whose data was misused or a customer whose trust was betrayed.

But you might.

And that is the point.

Coming up next

Episode 4 will look at mortality as a design principle—why human finiteness creates ethics, and why AI’s lack of it creates risk.

