Book Release: The Artificiality | We're a 501(c)(3)
Happy February! From our home to the broader world, January 2026 is a month we'd rather not repeat.
If you step back from the metaphors we usually use, life does something genuinely unusual. It works, continuously, to keep itself going. It maintains form in the face of disturbance, repairs itself when damaged, and resists the steady pull toward equilibrium that governs nonliving systems. Left alone, rocks erode and gases disperse. Left alone, living systems do not settle. They intervene.
This is not a poetic observation. It is a structural one.
A living system is never complete. It cannot be paused without consequence. At every moment, it must do enough work to remain itself. Cells replace their components constantly. Organisms rebuild tissues. Species adapt or disappear. Life exists only as an ongoing process of self-maintenance, carried out under conditions that are never fully under its control.
This alone marks a decisive difference between living systems and machines. A computer can be turned off and turned back on without anything essential changing. A living system cannot. Interruption is damage. Persistence is not optional; it is the primary task.
That task shapes everything else that follows, including intelligence.
The standard story about life puts replication at the center. Life began when molecules learned to copy themselves. Natural selection took over. Complexity followed. But this story has a problem. Replication is hard. Accurate copying requires sophisticated machinery. How did chemistry bootstrap itself into reliable copying without the machinery that makes reliable copying possible?
The chemist Addy Pross suggests a different starting point. Before replication, there was persistence. Some chemical systems happened to maintain themselves longer than others. They weren't copying themselves—they were rebuilding themselves, replacing components as fast as those components degraded.
He calls this dynamic kinetic stability. A system is dynamically stable not because it's static but because it's constantly regenerating. A candle flame persists by continuously consuming fuel and producing heat. Stop the fuel supply and the flame vanishes. The persistence depends on the process, not on fixed structure.
Living systems work this way. A cell isn't a static object. It's a process that keeps replacing its own parts. Membranes get rebuilt. Proteins get synthesized and degraded. DNA gets repaired. The cell you have now shares almost no atoms with the cell you had a year ago, yet something persists—a pattern, a process, an organization that maintains itself through continuous turnover.
This reframes what evolution selects for. The first proto-living systems weren't selected for their ability to replicate accurately. They were selected for their ability to persist—to maintain themselves in conditions that would destroy less robust chemical networks. Replication came later, as one strategy among others for extending persistence across time.
Sara Imari Walker helped me see another dimension of this. I had the opportunity to spend a couple of hours chatting with her over cocktails after a lecture she gave in San Francisco. She had stepped onstage in fishnets, a sparkling purple mini dress, and mid-calf stomper boots, an outfit that immediately rewired the room. Then I spotted the reading glasses tucked into her collar. I could relate—aging eyesight gets us all. In a field still allergic to visible personality, her sartorial choices signal someone with no interest in shrinking to fit inherited markers of seriousness.
Which brings me to the question I got to ask her: where is all this AI going to take us? To make sense of her answer, you have to start with how she thinks about life itself.
Sara's a physicist who thinks about life in terms of information—not as metaphor but as measurable quantity, as causal history accumulated in matter. Along with Lee Cronin, she developed a framework called assembly theory. The basic idea: you can measure how much history went into making something by counting the minimum number of steps required to construct it from basic parts.
A simple molecule like methane requires few steps. Carbon plus four hydrogens, done. A complex molecule like a protein requires many steps—amino acids joined in specific sequences, folded into specific shapes, modified after translation. The assembly index quantifies this: high numbers indicate deep causal history, many steps of construction building on previous steps.
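The counting idea behind the assembly index can be made concrete with a toy version. Real assembly theory operates on molecular bond graphs and uses specialized algorithms; the sketch below is only an illustration on strings, where the "basic parts" are single characters, a "step" is joining two already-built fragments, and any fragment built once can be reused for free. The function name and the brute-force search are my own illustrative choices, not the method Walker and Cronin use.

```python
def assembly_index(target: str) -> int:
    """Toy assembly index for a string: the minimum number of join
    steps needed to build `target` from single characters, where any
    previously built fragment can be reused at no extra cost.
    Brute-force breadth-first search; practical only for short strings."""
    basics = frozenset(target)            # single characters come for free
    frontier = [basics]                   # each state = set of fragments built so far
    seen = {basics}
    steps = 0
    while frontier:
        # If any state already contains the target, we are done.
        if any(target in state for state in frontier):
            return steps
        next_frontier = []
        for state in frontier:
            for a in state:
                for b in state:
                    new = a + b
                    if new in target:     # prune: keep only substrings of target
                        grown = state | {new}
                        if grown not in seen:
                            seen.add(grown)
                            next_frontier.append(grown)
        frontier = next_frontier
        steps += 1
    raise ValueError("target cannot be assembled")

# "ABAB" takes 2 steps: A+B -> AB, then AB+AB -> ABAB.
# Reuse is what keeps the count low: the fragment AB is built once, used twice.
```

The point the toy makes is the one in the text: repetition and reuse let complex objects have surprisingly short construction histories, while objects with no reusable substructure need many steps, and the step count is what the index measures.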
Living systems produce molecules with high assembly indices. Non-living chemistry doesn't. When Sara and her collaborators analyzed molecular samples, they found a clear threshold. Below a certain assembly index, molecules could plausibly arise from random chemistry. Above that threshold, life was always involved.
This gives a potential biosignature—a way to detect life without knowing what specific chemistry it uses. If you find molecules with high assembly indices on Mars or Europa, something is probably assembling them. The chemistry is accumulating history in a way random processes don't.
Life is chemistry that builds on itself, that accumulates information over time, that uses the past to construct the present. Each generation inherits not just genes but the entire causal history embedded in biological organization. Four billion years of selection, recorded in molecular structures.
I love the way she frames our lifetimes: our lives are short, but what we embody took an immense amount of assembly to arrive. Once you see life this way—as persistence through continuous self-maintenance, as accumulated history in matter—intelligence looks different too.
Back to her answer. She said it’s less about where AI is taking us and more about what it makes possible. Life, as she sees it, is about history building up in unlikely ways. AI gives that process a new place to happen. As more of our thinking gets digitized, parts of us start living outside our bodies, able to hold more abstract forms of thought than biology ever could on its own. And once that happens, it’s not one shared future we’re heading toward, but many different realities unfolding at the same time.
Biological intelligence did not emerge as a general problem-solving capacity. It emerged as a way of staying alive in environments that change faster than genes can adapt. Intelligence, in this sense, is not about optimization in the abstract. It is about navigating uncertainty while protecting a fragile, ongoing process.
Living systems do not simply process information. They are organized around asymmetry. Some states are viable. Others are catastrophic. The system's behavior makes little sense unless this distinction is taken seriously. Hunger, injury, exhaustion, attachment—these are not accessories to cognition. They are how the system marks what matters. Even the simplest organisms distinguish between better and worse futures, not conceptually, but operationally. Without that asymmetry, behavior collapses into noise.
Machines do not have this problem. They do not deteriorate when a goal is missed. They do not need to repair themselves unless explicitly designed to. Their objectives come from outside, and their continuation is someone else's responsibility. This does not make them inferior. It makes them differently organized.
The difference shows up in how learning unfolds. Living systems learn under constraint. They cannot explore every option. They act with partial information, limited energy, and irreversible consequences. Errors accumulate. History matters. Development leaves traces that cannot be undone. Learning is not just the acquisition of competence; it is transformation. The system is changed by what it encounters, often in ways that narrow future possibilities.
Artificial systems learn without these pressures. They can be reset, retrained, duplicated, rolled back. They can explore vast possibility spaces without risk to themselves. Their learning leaves no marks unless we choose to preserve them. They acquire capability without vulnerability.
One more feature of life matters here, and it is easy to miss precisely because it is so familiar.
Life is rarely solitary.
Even when organisms appear independent, their persistence depends on others: microbial communities, ecological niches, reciprocal adaptations. Survival is almost never an individual achievement. It is distributed across relationships that unfold over time. To be alive is not just to maintain oneself, but to remain viable in a world of other living systems doing the same.
As life becomes more complex, this co-dependence intensifies. Social animals do not merely coexist. They coordinate, negotiate, compete, reconcile, and repair. These behaviors are not decorative. They reduce uncertainty, spread risk, and make futures possible that no individual could sustain alone. Sociality, at this level, is not an add-on. It is an adaptive response to living in a world that cannot be controlled individually.
For humans, this goes further. We are not just social in the sense of interacting frequently. We are socially constituted. Our survival depends on extended cooperation, shared care, and cumulative knowledge. From birth, we are unfinished beings whose development relies on others. Language, norms, skills, and values are not discovered alone. They are inherited, negotiated, and sustained collectively.
This changes what intelligence is doing. In humans, intelligence is not only about managing the physical environment. It is about managing shared worlds. Keeping track of relationships. Anticipating how others will interpret actions. Repairing breakdowns in trust. Coordinating across time with people we may never meet. The problem space is no longer just survival, but continuity of meaning across generations.
This is where something distinctly human begins to appear. Humans do not only pursue goals. We reflect on which goals are worth pursuing. We do not only act within constraints. We argue about which constraints are legitimate. We do not only experience loss. We ask what makes a life well lived in the face of loss.
Meaning, responsibility, dignity, and obligation emerge not as abstract ideals, but as practical responses to living together under conditions of finitude.
We are the only species, as far as we know, that asks these questions explicitly.
That capacity did not arise in spite of our biology. It arose because of it. As our social worlds expanded, so did the scope of what we had to hold in mind. Our cognitive light cone—to borrow Levin's phrase—grew larger, extending beyond immediate perception to include distant others, future consequences, and shared symbolic worlds. Intelligence stretched outward, not just to solve problems, but to sustain coherence across increasingly complex social realities.
This helps explain why human intelligence is so tightly bound to meaning. Meaning is not an ornament layered onto cognition. It is how a socially interdependent, finite species keeps track of what matters over time. It is how we coordinate not just action, but value.
When machines were trained on biological data—proteins, weather patterns, language—they learned solutions that life arrived at under pressure. They learned patterns shaped by persistence, constraint, finitude, and co-dependence. They inherited the residue of four billion years of selection.
But they did not inherit the pressure itself.
They learned from life without bearing its conditions. They can model social interaction, generate language about meaning, and participate in coordination. But they do not depend on others for their own persistence. They do not carry responsibility forward. They do not experience the pressure of having to make a life add up to something.
This distinction is not moral. It is structural. Life, at its most complex, is not just self-maintaining. It is co-maintaining. Human intelligence is an expression of that fact. Any serious discussion of co-evolution has to begin here—with the conditions that made meaning necessary in the first place.
Life is organized around persistence, finitude, and co-dependence. Living systems maintain themselves over time, under constraint, in environments they do not control. As complexity increases, this self-maintenance becomes social. Survival depends on others. Intelligence expands to manage relationships, shared futures, and collective risk.
For humans, this expansion crosses a threshold. We are not only social animals. We are meaning-making ones. We reflect on our goals. We argue about what counts as a good life. We hold one another responsible across time. Agency, dignity, responsibility, and meaning are not abstract ideals layered onto cognition. They are practical necessities for a finite, interdependent species trying to coordinate action, value, and care across generations.
These capacities are not incidental. They are the result of a very particular biological and social history. They arise because we cannot step outside consequence, because our actions affect others we depend on, and because our lives have to add up to something while we are living them.
Meaning matters because time is limited. Responsibility matters because relationships persist. Dignity matters because bodies are vulnerable.
This is what it means, in a deep sense, to be human.
The next chapter asks what happens when systems that learned from life—but do not share its conditions—begin to shape how we live. Not replacement. Not transcendence. Co-evolution. And the question of what we want to preserve.
The ideas in this chapter draw on a long-running conversation across biology, complexity science, and cognitive science about what distinguishes living systems from engineered ones. They are shaped especially by work on biological cognition and goal-directedness (Michael Levin), irreversibility and the accumulation of structure in living systems (Sara Walker), open-ended evolution and complex adaptive systems (David Krakauer and colleagues), scale-crossing intelligence in biological and social systems (Blaise Agüera y Arcas), and the evolution of social cognition and shared meaning in humans and other animals (including work associated with Robin Dunbar, Frans de Waal, and Matthew Cobb).
I’ve deliberately kept these frameworks in the background here. This chapter is not about theory for its own sake, but about what these approaches are collectively trying to explain: why life, intelligence, and meaning take the forms they do — and why those forms matter as we begin to co-evolve with intelligent systems that do not share our biological or social history. You'll find many books in the bibliography that are relevant to this section.