Episode 4: Mortality as a Design Principle: Why Only Humans Have Skin in the Game


By Merritt Baer, CSO, Enkrypt AI
One of the most surprising things about building and securing AI systems is how quickly the conversation becomes existential. Not because AI is existential, but because we are.
Whenever I talk with other CISOs, we inevitably wander into these deeper waters.
And underneath all of it sits a quiet truth:
We care about these things because they can hurt us. AI does not care because it cannot be hurt.
Mortality is what makes values real
Human ethics comes from the fact that we are breakable.
Fragile. Finite.
We suffer consequences.
When an organization mishandles customer data, someone gets harmed.
When a model behaves unpredictably in a medical workflow, someone suffers.
When automation triggers a cascading failure, someone’s real life gets disrupted.
AI does not experience consequence.
We do.
That’s why mortality belongs in the design room—even if AI will never have it.
What does it mean to design with mortality in mind?
It means we resist the temptation to treat AI like software that simply “runs.”
We treat it like a force multiplier of human outcomes. Meaning:
Mortality is why we need reversible workflows.
Why we need human override.
Why recovery matters as much as prevention.
Why incident response for AI has to include not just logs and alerts, but people.
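Those four requirements (reversibility, human override, recovery, and people in the loop) can be sketched as one small pattern. This is an illustrative sketch only; the class names, the `approve` hook, and the undo-log design are my assumptions, not any particular framework's API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ReversibleAction:
    """An AI-proposed action paired with an explicit undo step."""
    name: str
    apply: Callable[[], None]
    undo: Callable[[], None]

@dataclass
class Workflow:
    """Runs actions only with human approval, keeping an undo log for recovery."""
    approve: Callable[[ReversibleAction], bool]  # human override hook (assumed interface)
    log: List[ReversibleAction] = field(default_factory=list)

    def run(self, action: ReversibleAction) -> bool:
        if not self.approve(action):   # a human can veto before anything happens
            return False
        action.apply()
        self.log.append(action)        # record so the failure stays survivable
        return True

    def rollback(self) -> None:
        """Recovery: unwind every applied action in reverse order."""
        while self.log:
            self.log.pop().undo()

# Usage: a dict stands in for real state; the approval rule is hypothetical.
state = {}
wf = Workflow(approve=lambda a: a.name != "delete_all")
wf.run(ReversibleAction("set_flag",
                        lambda: state.update(flag=True),
                        lambda: state.pop("flag")))
wf.run(ReversibleAction("delete_all", state.clear, lambda: None))  # vetoed
wf.rollback()  # state returns to empty
```

The point of the shape, not the code: every action carries its own undo, and nothing irreversible runs without a person saying yes.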
AI without mortality is power without stakes
This is precisely why the security function cannot be reduced to “model safety” or “alignment checks.”
Those matter—but they’re ingredients, not the meal.
The real work is in shaping systems whose failures are survivable.
Because someone—not something—will live with the outcome.
Next up
Installment #5 explores the “supply chain of values”: how macro forces like war, energy markets, and chip manufacturing quietly define the ethics and risks of AI.