Big Ideas

Episode 4: Mortality as a Design Principle: Why Only Humans Have Skin in the Game

Published on December 19, 2025 · 4 min read
By Merritt Baer, CSO, Enkrypt AI

One of the most surprising things about building and securing AI systems is how quickly the conversation becomes existential. Not because AI is—but because we are.

Whenever I talk with other CISOs, we inevitably wander into these deeper waters.

And underneath all of it sits a quiet truth:

We care about these things because they can hurt us. AI does not care because it cannot be hurt.

Mortality is what makes values real

Human ethics comes from the fact that we are breakable.

Fragile. Finite.

We suffer consequences.

When an organization mishandles customer data, someone gets harmed.

When a model behaves unpredictably in a medical workflow, someone suffers.

When automation triggers a cascading failure, someone’s real life gets disrupted.

AI does not experience consequence.

We do.

That’s why mortality belongs in the design room—even if AI will never have it.

What does it mean to design with mortality in mind?

It means we resist the temptation to treat AI like software that simply “runs.”

We treat it as a force multiplier of human outcomes. Meaning:

Mortality is why we need reversible workflows.

Why we need human override.

Why recovery matters as much as prevention.

Why incident response for AI has to include not just logs and alerts, but people.
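
To make those principles concrete, here is a minimal sketch of what reversible workflows and human override can look like in code. It is purely illustrative: the names (ReversibleAction, ApprovalGate, Workflow) are hypothetical, invented for this post, not drawn from any real framework.

```python
# Illustrative sketch only: every name here is hypothetical, not a real API.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ReversibleAction:
    """An AI-initiated action paired with a human-authored step that undoes it."""
    description: str
    apply: Callable[[], None]   # what the automation wants to do
    revert: Callable[[], None]  # how a person can unwind it later


@dataclass
class ApprovalGate:
    """Human override: actions above an impact threshold wait for a person."""
    impact_threshold: float

    def approve(self, action: ReversibleAction, impact: float) -> bool:
        if impact < self.impact_threshold:
            return True  # low stakes: let automation proceed on its own
        answer = input(f"Apply '{action.description}'? [y/N] ")
        return answer.strip().lower() == "y"


class Workflow:
    """Runs actions through the gate and keeps an undo log,
    because recovery matters as much as prevention."""

    def __init__(self, gate: ApprovalGate) -> None:
        self.gate = gate
        self.undo_log: List[ReversibleAction] = []

    def run(self, action: ReversibleAction, impact: float) -> None:
        if not self.gate.approve(action, impact):
            print(f"Held for human review: {action.description}")
            return
        action.apply()
        self.undo_log.append(action)  # remember how to take it back

    def roll_back(self) -> None:
        """Incident response includes people: a human walks the log backward."""
        while self.undo_log:
            self.undo_log.pop().revert()


if __name__ == "__main__":
    wf = Workflow(ApprovalGate(impact_threshold=0.5))
    state = {"subnet_blocked": False}
    action = ReversibleAction(
        description="block suspicious subnet",
        apply=lambda: state.update(subnet_blocked=True),
        revert=lambda: state.update(subnet_blocked=False),
    )
    wf.run(action, impact=0.9)  # high impact: a person must say yes first
    wf.roll_back()              # and a person can always take it back
```

The specific code matters far less than its shape: the undo path is designed in from the start, and the highest-stakes decisions route through a person, because a person is the one who will live with them.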

AI without mortality is power without stakes

This is precisely why the security function cannot be reduced to “model safety” or “alignment checks.”

Those matter—but they’re ingredients, not the meal.

The real work is in shaping systems whose failures are survivable.

Because someone—not something—will live with the outcome.

Next up

Installment #5 explores the “supply chain of values”: how macro forces like war, energy markets, and chip manufacturing quietly define the ethics and risks of AI.
