Book Release: The Artificiality
A few years ago I heard a prominent AI researcher claim that artificial intelligence would solve death. He said it casually, the way you might describe fixing a software bug. Death as a technical problem. Mortality as an engineering challenge. The complete transformation of the human condition discussed with the gravity of a product update.
The statement revealed something about how technological culture often thinks. The researcher wasn't naïve. He understood biology, computation, the trajectory of AI capabilities. He wasn't predicting next year or even next decade. He was expressing a belief about where all this is heading: toward a future where death becomes optional, where consciousness persists indefinitely, where the constraint that has shaped every human life disappears.
I tried to take the idea seriously. What would it actually mean to make death optional? When would you choose immortality? At what point in a life would you decide that this version of yourself should persist indefinitely?
I ran the thought experiment on my own life. In my twenties, I was earnest, unformed, convinced I understood far more than I did. Locking that version of myself in for eternity sounds less like liberation than a sentence.
In my thirties, with young children, I remember moments that felt perfect—watching my daughter take her first steps, feeling a fierce wish to hold time still. But then I imagined the implications. Remaining thirty-five while my children aged past me. Watching them choose finite lives, watching them grow old and die while I stayed unchanged. An eternally young parent accumulating centuries of experience without the changes that turn experience into wisdom.
In my forties, after a cancer diagnosis, I wanted more time. When you're facing a disease that might kill you, mortality stops being an abstraction. Any technology that improves survival feels unambiguously good.
But wanting more time is not the same as wanting unlimited time. My cancer was curable. I got to continue living within the mortal framework, not escape it. That distinction clarified something. Medical advances that reduce suffering work within finitude. Proposals to eliminate death work against it. These are different projects with different implications.
Todd May helps articulate why. His argument begins with an obvious truth: death is bad. Nobody wants to die. Given the choice between dying tomorrow and living another healthy year, everyone chooses the year. That preference is rational. Death ends everything—projects, relationships, experience itself.
But May goes further. If death is bad, wouldn't immortality be good? This is where intuition fails. Immortality would remove the condition that makes meaningful action possible. Think about how deadlines work. Scarcity focuses attention. Limited time forces commitment. Choices matter because not everything can be done, not every path can be taken, not every mistake can be undone.
Now imagine literally unlimited time. Would you finish writing the novel? Would you take risks, make irreversible commitments, fully choose one life over another? Every decision could be deferred. Every alternative could be revisited later. The pressure that gives choice its weight would evaporate.
May's claim is not that immortality would be boring. It's structural. Mortality creates the scarcity that makes meaning possible. We become who we are through choices that close off other futures, through commitments that cost us alternatives, through time that moves in one direction only.
The finite nature of life isn't a limitation to be overcome. It's the condition that makes a life into a life rather than an endless accumulation of experiences.
What gives human existence its peculiar shape is the certainty that life will end combined with the uncertainty of when. We know the story will conclude. We don't know where. Everything we do unfolds against that background.
This connects to a theme running through this book. Living systems exist in continuous time. A bacterium is always metabolizing. A brain is always active. Biological computation doesn't pause between steps. It persists through duration, carrying history forward not as stored data but as lived continuity.
But temporal existence isn't just continuity. It's continuity under a horizon. Biological lives don't merely persist through time—they run out of time. The persistence has a shape because it has an ending.
Death organizes life even when we're not thinking about it. We choose careers knowing we won't master everything. We commit to relationships knowing they will end. We pursue projects knowing we'll leave some unfinished. The ending is rarely visible day to day, but it structures everything anyway.
Remove that horizon and the structure of meaning collapses. Unlimited time means unlimited deferral. Unlimited deferral means nothing must happen now. And if nothing must happen now, nothing has the urgency that meaningful action requires.
This matters for how we think about AI, because AI is entering the human world without sharing the conditions that give that world its shape.
Humans and AI are co-evolving, but not under the same evolutionary pressures.
Humans care because time runs out. We commit because not everything can be done. We take responsibility because consequences are irreversible. AI systems don't experience any of this. They don't run out of time. They don't lose futures. They don't accumulate consequence the way living systems do.
And yet these systems increasingly participate in domains shaped by finitude: decision-making, planning, prioritization, judgment, meaning-making. They help us choose among futures they will never have to forgo. They help us optimize time they don't experience as scarce. They assist in commitments they don't bear.
This is not a claim about deception or danger. It's a claim about mismatch. AI doesn't need to feel mortal to affect how mortal beings live. The co-evolutionary question is how a non-mortal system participates in a world where meaning is structured by mortality, not whether AI will transcend death.
Co-evolution doesn't require symmetry. But it requires attention to asymmetry when the asymmetries are this large.
I'm not sure I'd even heard of virtue ethics until our conversation with Shannon Vallor, who studies what technology does to virtue. Her concern isn't that technology is bad, but that certain technologies make it harder to cultivate the capacities that constitute a good human life: patience, courage, practical wisdom, care for others. These capacities develop through practice, and practice requires constraint. Remove the constraints and you remove the conditions for growth.
Mortality is the ultimate constraint. Remove it and you don't get better versions of human virtues. You get something else entirely. Courage loses its meaning when there's nothing to lose. Wisdom loses its grounding when time is infinite. The depth that comes from knowing this is your one life disappears.
C. Thi Nguyen makes a related point about games. We’d done a fun podcast together—he talks fast, thinks faster. He’s a climber, took up fishing during the pandemic, and seems permanently interested in what it feels like to do something for its own sake. Because he thinks about games all the time, he’s very tuned to the difference between process and outcome. Games work, he says, because they make things harder on purpose. You could just carry the ball into the hoop, but then it’s not basketball. The rules create the effort, and the effort is the point. Remove the constraints and you don’t get a better game. You just stop playing.
Life isn't a game, but the point applies. The constraints of mortality create the conditions for meaningful engagement. Remove them and you don't get a better version of human life. You get something else—something that might be worth having, but different in kind from what we are.
Steve Jobs once called death "life's greatest invention." The phrase sounds glib, but it points to an important reality.
Death clears the way. It makes room for change. Evolution works because individuals die. Cultures renew because generations pass. Without endings, systems accumulate until they calcify.

Dave worked for Jobs and still carries real admiration for him: fierce, demanding, often brutal, but driven by a set of values about what humanity was for. He yelled. He crossed lines. But the pressure wasn't random; it was in service of something he believed mattered. In that sense, Jobs wasn't romanticizing death so much as insisting on limits. Without limits, nothing meaningful gets made. Without endings, the living would eventually be crowded out by what refuses to let go.
But the social consequence matters less to me than the individual one. We tell stories about our lives. Those stories have beginnings, middles, and endings. The ending isn't an afterthought—it's what makes the story a story rather than an endless sequence of events. Remove the ending and narrative coherence disappears. What would it mean to tell the story of a life that never concludes?
I want to be clear about what I'm claiming and what I'm not.
I'm not claiming that death is secretly good. It isn't. The people I've lost—my stepfather Martin, friends, family members—their deaths were losses, not hidden gifts. I grieved and continue to grieve.
The claim is different. Mortality is the condition under which human meaning takes shape. This condition includes terrible things—loss, grief, the knowledge that everyone you love will die or watch you die. These are genuinely bad. And they're inseparable from what makes human life what it is.
We can imagine other forms of meaning. Perhaps creatures exist who experience significance without finitude. AI systems might one day participate in something like that. But those forms of meaning wouldn't be human. The human version is bound to endings, to scarcity, to the certainty that life will end combined with the uncertainty of when.
This is the moral constraint on the story of co-evolution. Whatever we build, whatever we off-load, whatever we optimize, must still make sense for finite beings. If it doesn't, then it's not an enhancement of human life but a substitution for it.
The question is no longer whether AI will change human life. It already has. The question is what constraints we carry forward as those changes accelerate.
This chapter argued that finitude is not a technical limitation to be solved but a constitutive condition of human meaning. The certainty that life will end, combined with the uncertainty of when, gives weight to choice, urgency to commitment, and shape to a life. Remove it, and what remains might be continuous, intelligent, even interesting—but it wouldn't be human in the sense we recognize.
This matters for co-evolution because humans and AI are evolving together under different pressures. AI doesn't share the constraints that make human lives meaningful. It doesn't run out of time or accumulate irreversible consequence. Yet it increasingly participates in domains shaped by those constraints.
The task ahead is not to halt technological progress or protect some imagined pre-AI humanity. It's to understand where human meaning is formed and to ensure that our tools support rather than undermine it.
That requires moving from abstract values to concrete encounters—from what we believe about humanity to how we design the systems that increasingly mediate our thinking, choices, and relationships.
The next chapter turns to design: not how to build AI systems, but how to think about the interfaces where humans and AI meet. The intimacy surface can be shaped. The question is what we want it to become.