
Episode 3: Security as Stewardship: The Human Obligations Behind Machine Intelligence

Published on
December 19, 2025
4 min read
By Merritt Baer, CSO, Enkrypt AI

Security leaders have always had a strange job. We hold responsibility for the events we're trying to prevent. When things go right, we're invisible; when they go wrong, we're suddenly the center of the universe.

AI doesn’t make this easier.

It makes it deeper.

AI systems don’t just process data—they shape decisions

When you deploy AI inside an enterprise, you’re not just shipping software. You’re influencing:

Each of these touchpoints is human.

Each one carries consequences.

And none of the systems that generate these outcomes feel those consequences.

Stewardship begins where AI ends

AI can execute, summarize, classify, predict, and generate.

But only humans can:

Security becomes stewardship when we recognize that we are the emotional, moral, and mortal layer in the system. The part that experiences the repercussions. The part that knows that “good enough” for the model may not be good enough for the child, the customer, the patient, or the employee on the other side.

The new obligations for CISOs

Stewardship in the AI era means:

We are not guards at the gate anymore.

We're custodians of a system that, once deployed, keeps moving whether or not we're ready for it.

Because the AI won’t care if you get it wrong

It won’t lose sleep.

It won’t feel shame.

It won’t face the board.

It won’t have to call a family whose data was misused or a customer whose trust was betrayed.

But you might.

And that is the point.

Coming up next

Installment #4 will look at mortality as a design principle: why human finiteness creates ethics, and why AI's lack of it creates risk.
