
Episode 4: Mortality as a Design Principle: Why Only Humans Have Skin in the Game

Published on December 19, 2025 · 4 min read
By Merritt Baer, CSO, Enkrypt AI

One of the most surprising things about building and securing AI systems is how quickly the conversation becomes existential. Not because AI is—but because we are.

Whenever I talk with other CISOs, we inevitably wander into these deeper waters.

We talk about the same things.

And underneath all of it sits a quiet truth:

We care about these things because they can hurt us. AI does not care because it cannot be hurt.

Mortality is what makes values real

Human ethics comes from the fact that we are breakable.

Fragile. Finite.

We suffer consequences.

When an organization mishandles customer data, someone gets harmed.

When a model behaves unpredictably in a medical workflow, someone suffers.

When automation triggers a cascading failure, someone’s real life gets disrupted.

AI does not experience consequence.

We do.

That’s why mortality belongs in the design room—even if AI will never have it.

What does it mean to design with mortality in mind?

It means we resist the temptation to treat AI like software that simply “runs.”

We treat it like a force multiplier of human outcomes. Meaning:

Mortality is why we need reversible workflows.

Why we need human override.

Why recovery matters as much as prevention.

Why incident response for AI has to include not just logs and alerts, but people.
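To make that concrete, here is a minimal sketch of what a reversible, human-overridable automated action could look like. It is an illustration under assumptions, not the author's or Enkrypt AI's implementation; the names (ReversibleAction, run_with_override, the placeholder deactivate_account action) are hypothetical. The point is only that every automated step ships with an explicit undo and an explicit person who owns the consequence.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ReversibleAction:
    """An automated step paired with a way to undo it and a named human gate."""
    name: str
    execute: Callable[[], None]   # the automated change
    rollback: Callable[[], None]  # how a person reverses it after the fact
    approver: str                 # the human who owns the consequence

def run_with_override(action: ReversibleAction, approved: bool) -> None:
    """Refuse to act without human approval; keep recovery one call away."""
    if not approved:
        print(f"{action.name}: blocked pending review by {action.approver}")
        return
    try:
        action.execute()
    except Exception:
        # Recovery matters as much as prevention: undo, then escalate to a person.
        action.rollback()
        raise

# Illustrative usage with placeholder side effects.
action = ReversibleAction(
    name="deactivate_account",
    execute=lambda: print("account deactivated"),
    rollback=lambda: print("account restored"),
    approver="on-call security engineer",
)
run_with_override(action, approved=True)
```

The shape matters more than the details: the automated path and the human recovery path are designed together, not bolted on after an incident.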

AI without mortality is power without stakes

This is precisely why the security function cannot be reduced to “model safety” or “alignment checks.”

Those matter—but they’re ingredients, not the meal.

The real work is in shaping systems whose failures are survivable.

Because someone—not something—will live with the outcome.

Next up

Installment #5 explores the “supply chain of values”: how macro forces like war, energy markets, and chip manufacturing quietly define the ethics and risks of AI.
