Series 1: You Get to Die (and Other Rights AI Will Never Have)


Introducing Enkrypt AI’s new series on “Rights, Wanting, and Why AI Can’t Tell You How to Live”
By Merritt Baer, CISO, Enkrypt AI
At BSides recently, I found myself saying something that made a few people laugh and a few more people look uncomfortable—which is usually how I know I’ve hit on a truth worth staying with:
“Your AI usage matters in terms of data and how you manage it. But AI can’t tell you how to live. And unlike AI, you get to die.”
I’m thinking of Max Weber’s line (paraphrased), “science cannot tell us how to live.” It’s a reminder that once you get beyond immediate implementation, the real question is what kind of world we want to live in. I spend a lot of time thinking through how security behaviors change, and how they must change, in an age where AI exists. The nature of data is changing. The nature of identity is changing.
And while CISOs might not always be described as “romantic,” we are, for better or worse, people who care about how things fit together. Supply chain security? That’s about where your chips are from. Which is about geopolitics. Which is about war. Which is about energy prices. Which is—one way or another—about people.
Humans. The ones who get old, get tired, get inspired, get confused, get scared… and yes, eventually get to die.
AI does none of those things.
Why a series on “rights” or “wanting”?
At Enkrypt AI, we talk a lot about the technical scaffolding around AI systems: data controls, model evaluation, red teaming, guardrails, governance. That scaffolding will always matter, because it is how we constrain real risks.
But something else has been happening in my conversations with CISOs, founders, regulators, and engineers:
AI is forcing everyone into a deeper conversation about what we value.
Not because AI wants anything. It doesn’t. And it can’t.
But because its absence of wanting—its lack of heart, ego, mortality, and all the messy human constraints—shines a brighter light on our own. What do we want out of systems that now “behave” but do not “care”? What do we owe to the humans upstream and downstream of our AI? What are our rights in a world where machines can generate but cannot desire?
This series—whatever we ultimately call it (“Rights,” “Wanting,” “The Obligations of the Living”)—is about exploring those tensions.
AI obligates us. Or invites us. Maybe both.
AI doesn’t have morals, but its deployment forces us to make behavioral decisions.

When AI becomes part of our infrastructure, it obligates us to ask harder questions. Not because the AI asked us to—or even could—but because responsible engineers and security leaders know that good safety and security look a lot like good behavior over time. And values, and the outcomes we want, drive behaviors.
AI changes the conversation not by speaking with a heart but by having none—and the vacuum is clarifying.
Parallel Processing Changed What Machines Are For
For decades, computing was fundamentally serial. Faster clocks, smarter instruction pipelines, incremental gains. Even when we distributed workloads, we were mostly decomposing problems that humans already understood how to sequence. The machine was fast, but it still proceeded one step at a time, in sequence. The breaking points and hacking techniques reflected that.
GPUs—and later TPUs, NPUs, and custom accelerators—weren’t just faster CPUs. They were architectures optimized to run the same operations across massive volumes of data simultaneously. Matrix multiplication. Vector operations. Gradient updates. The unglamorous math at the heart of modern machine learning.
Once you could do millions of those operations at once, something important happened:
We stopped telling computers how to solve problems, and started giving them enough compute to approximate solutions statistically.
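To make that shift concrete, here is a minimal, purely illustrative sketch in Python with NumPy (none of it from any real system): a hand-written rule sits next to a statistical approximation of the same rule, learned through the vectorized arithmetic that accelerators are built to run in parallel.

```python
import numpy as np

# Rule-based: we tell the computer exactly how to decide.
def rule_based(temp_f: float) -> str:
    return "alert" if temp_f > 100.0 else "ok"

# Statistical: we hand over data plus compute and let vectorized
# gradient updates approximate the same boundary.
rng = np.random.default_rng(0)
temps = rng.uniform(60.0, 140.0, size=10_000)
labels = (temps > 100.0).astype(float)
x = (temps - 100.0) / 20.0                      # scale the feature so training is stable

w, b = 0.0, 0.0
for _ in range(2_000):
    # One vectorized pass over all 10,000 examples at once:
    # the kind of bulk arithmetic a GPU parallelizes.
    preds = 1.0 / (1.0 + np.exp(-(w * x + b)))  # sigmoid
    w -= 0.5 * np.mean((preds - labels) * x)    # gradient update for the weight
    b -= 0.5 * np.mean(preds - labels)          # gradient update for the bias

def learned(temp_f: float) -> str:
    p = 1.0 / (1.0 + np.exp(-(w * (temp_f - 100.0) / 20.0 + b)))
    return "alert" if p > 0.5 else "ok"

print(rule_based(103.0), learned(103.0))        # both say "alert": one by rule, one by statistics
```

The learned version never “knows” the threshold; it only fits a boundary that behaves like it. That is the sense in which these systems “behave” without understanding anything.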
This is the real shift: not intelligence, not reasoning, but context and scale. Now, the fundamental question is trust.
From Algorithms to Systems That Behave
Functionally, parallel processing let us move from brittle, rules-based tools to systems that appear adaptive.
Large language models don’t reason in the human sense. They don’t plan. They don’t understand. What they do is compress vast amounts of human-generated data into high-dimensional statistical representations—and then sample from them efficiently enough to be useful in real time.
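As a minimal sketch of what “sample from them” means mechanically (the tiny vocabulary and the scores below are invented for illustration; real models compute such scores from billions of learned parameters), the last step of generating a token is just this: turn scores into a probability distribution and draw from it.

```python
import numpy as np

# Hypothetical next-token scores (logits) over a toy vocabulary.
# A real model would produce these from its learned representation
# of the context; here they are simply made up.
vocab = ["risk", "trust", "policy", "coffee"]
logits = np.array([2.1, 1.7, 0.9, -1.5])

def sample_next(logits: np.ndarray, temperature: float = 0.8) -> str:
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # softmax, shifted for numerical stability
    probs /= probs.sum()
    rng = np.random.default_rng()
    return vocab[rng.choice(len(vocab), p=probs)]

print(sample_next(logits))   # e.g. "risk": drawn from a distribution, not chosen out of intent
```

There is no plan and no preference in that draw; there is only a distribution shaped by the data the model compressed.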
Meanwhile, parallelism, the shift from rules and instructions to statistical approximation, is inseparable from silicon, power, and supply chains. That means AI’s “intelligence” is as much geopolitical and infrastructural as it is technical.
Parallel processing created what we think of as current AI—but it also centralized power. Compute is not evenly distributed. Access, availability, and content outputs are not neutral. And, critically, even if everything is running “as it should be,” we are making a series of trust-based, confidence-level decisions.
Mortality Is Still the Boundary Condition
This is where I’ll return to the line that made people uncomfortable: AI doesn’t get to die.
And this isn’t just about doing security and safety for the AI you need to guardrail; it’s about living in a world where AI exists.
AI will change the behaviors of security teams, not only because those teams will use AI for security, but because they will live in an enterprise, and a broader world, that AI has changed.
In my view, mortality isn’t a flaw; it’s a design principle. What you do in your life matters, because you and I shall not live forever. But we create systems that survive.




