Book Release: The Artificiality | We're a 501(c)(3)
Happy February! At home and in the wider world, January 2026 was a month we'd rather not repeat.
In 1936, Alan Turing published a paper that changed how we understand procedures. He was twenty-three, working on a problem in mathematical logic: could there be a general method for determining whether any mathematical statement was provable?
The answer was no. But getting there required Turing to make precise something that had been vague. What counts as a procedure at all?
He imagined an abstract device—now called a Turing machine—made of almost nothing. An infinite tape divided into cells. A head that reads and writes symbols. A set of rules specifying what to do next. Read, write, move left or right, repeat. No insight. No understanding. Just mechanical steps.
This minimal setup turned out to be extraordinarily powerful. Turing showed that a single "universal" machine could simulate any other Turing machine by taking that machine's description as input. One device capable of executing all possible algorithms.
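To make this concrete, here is a minimal sketch of the idea in Python; the example machine and all names are mine, for illustration, not Turing's notation. A machine is nothing but a transition table, and the run function below is generic: the same few lines can execute any machine you describe to it, which is the spirit of the universal machine.

```python
# A Turing machine is just data: a transition table plus a start state.
# The run() function is generic -- it can execute any machine you hand it.

def run(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Execute a Turing machine on `tape`.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
                 where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))      # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, blank)
        rule = transitions.get((state, symbol))
        if rule is None:               # no applicable rule: the machine halts
            break
        state, write, move = rule
        cells[head] = write
        head += move
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Example machine: append one '1' to a block of 1s (unary increment).
increment = {
    ("start", "1"): ("start", "1", +1),   # scan right over the 1s
    ("start", "_"): ("done",  "1", +1),   # write a 1 on the first blank, halt
}

print(run(increment, "111"))   # -> "1111"
```

Feed run a different table and it becomes a different machine; the simulator itself never changes.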
Every modern computer descends from this idea. The laptop I'm writing on, your phone, the data centers running AI systems—they all inherit the same core insight: computation can be defined independently of the material it runs on.
This is substrate flexibility. An algorithm is characterized by its structure, not by whether it runs on vacuum tubes or transistors or something else entirely. The same program produces the same result wherever it executes, as long as the physical system reliably distinguishes states.
Substrate flexibility makes computation useful. Software outlives hardware. Programs migrate across machines. Engineers improve physical substrates without rewriting logic. The abstraction floats free of its implementation.
For decades, Turing's model shaped how people thought about minds. If brains process information, and if computation is substrate-flexible, then maybe minds could run on different substrates too. Maybe consciousness is what the right kind of computation feels like from the inside. Maybe you could run a mind on silicon as well as carbon.
This line of thinking produced genuine results. Cognitive science gained a vocabulary for perception, memory, and decision-making. Artificial intelligence became a research program. The brain-as-computer metaphor, whatever its limits, enabled progress.
But Turing's model captures only part of what computation can mean.
A decade or so after Turing's paper, John von Neumann turned to a different question: how could a machine build copies of itself?
The answer required a different kind of computational architecture. Von Neumann developed what he called the Universal Constructor—a cellular automaton that could read instructions and build things, including copies of itself.
The Universal Constructor differs from the Turing machine in a crucial way. Turing's device operates on a tape that's separate from itself. The head reads and writes symbols, but the head and the tape are distinct. Von Neumann's constructor has no such separation: it is closed within the universe it operates on. The machine is made of the same stuff it manipulates. It reads, writes, and builds within a medium it also inhabits.
Despite this difference, the two models are mathematically equivalent. You can emulate a Universal Constructor with a Turing machine, or vice versa. They define the same class of computable functions. But the Universal Constructor captures something the Turing machine misses: embodied computation. Machines that maintain themselves. Machines that grow. Machines that replicate.
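Von Neumann's actual design is a 29-state cellular automaton, far too large to reproduce here, but its logical core fits in a few lines. The toy sketch below, with all names hypothetical, captures the move that makes self-replication work: the description is used twice, once interpreted as instructions for building the offspring's body, and once copied verbatim so the offspring carries its own description and can replicate in turn.

```python
from typing import NamedTuple

class Automaton(NamedTuple):
    """A 'machine' paired with the description (tape) it carries."""
    structure: tuple      # the built thing -- stands in for the constructor's body
    description: tuple    # instructions for building that thing

def construct(description):
    """Interpret a description and build the structure it encodes.

    Here 'building' just materialises each instruction; in von Neumann's
    automaton it means assembling cells in the same grid the constructor occupies.
    """
    return tuple(part for part in description)

def reproduce(parent: Automaton) -> Automaton:
    # Step 1: use the description as *instructions* -- build a new body.
    child_structure = construct(parent.description)
    # Step 2: use the description as *data* -- copy it verbatim for the child.
    child_description = tuple(parent.description)
    return Automaton(child_structure, child_description)

blueprint = ("arm", "reader", "writer", "controller")
parent = Automaton(construct(blueprint), blueprint)
child = reproduce(parent)

assert child == parent            # a faithful copy, able to copy itself again
assert reproduce(child) == child  # and so on, indefinitely
```

The point of the toy is only that double use of the description: read as instructions, copied as data.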
This matters for biology. Any system capable of autopoiesis (a concept I'll return to later; roughly, self-production and self-maintenance) must include something like a Universal Constructor. The system has to read information (DNA), build structures (proteins), and maintain the conditions for its own continuation. These are computational operations in von Neumann's sense.
Life is computational. This follows from the definition. If autopoiesis requires a Universal Constructor, and a Universal Constructor is a computer, then any autopoietic system is computational. DNA, RNA, ribosomes—these aren't metaphorically computational. They perform computations. Researchers have used biological molecules to carry out arbitrary calculations. The machinery of life computes.
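As a caricature of that read-and-build step in code: below, a fragment of the standard genetic code serves as a lookup table, and translation becomes a function from DNA triplets to amino acids. The codon assignments shown are real; the function and the toy sequence are simplifications for illustration.

```python
# A fragment of the standard genetic code (DNA codons, coding strand).
CODON_TABLE = {
    "ATG": "Met", "TGG": "Trp", "AAA": "Lys", "TTT": "Phe",
    "GGA": "Gly", "GAA": "Glu",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list[str]:
    """Read a coding sequence three letters at a time and 'build' a protein.

    This mirrors the ribosome's job in caricature: find a start codon, then
    map each triplet to an amino acid until a stop codon appears.
    """
    start = dna.find("ATG")
    protein = []
    for i in range(start, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("GGATGAAATTTTGGTAA"))   # -> ['Met', 'Lys', 'Phe', 'Trp']
```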
This reframes the question about minds and machines. The issue isn't whether biological systems are computational. They are. The issue is what kind of computation biological systems perform, and whether other substrates can perform the same kind.
Biological computation has properties that Turing's model doesn't capture and that current AI systems don't share.
One property is self-dissimilarity across scales. David Wolpert uses this term to describe systems that reveal new and different information when you zoom in or out. A fractal is self-similar: the same pattern repeats at every scale. Living systems are the opposite. Look at a cell, then look at a tissue, then look at an organ, then look at an organism—you see different structures, different dynamics, different kinds of organization at each level.
This self-dissimilarity has consequences. Higher scales recruit lower scales to support them. Organs recruit cells. Organisms recruit organs. Each level provides scaffolding for the next without being identical to it. The relationship between scales is generative, not repetitive.
Transformers—the architecture behind modern AI—exploit this property. The attention mechanism lets every element in a sequence consider every other element, so the model can learn relationships at scales that differ from one another. This is part of why transformers work for language: the rules governing sounds differ from the rules governing syntax, which differ from the rules governing meaning. The model learns these differences without collapsing them.
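Here is a minimal sketch of that mechanism, written in NumPy rather than any particular framework: each position in a sequence is projected to a query, a key, and a value, and each output position is a weighted mixture of every position's value, with the weights computed from the data rather than fixed in advance. Real transformers add multiple heads, masking, and many stacked layers; the shapes and names here are illustrative.

```python
import numpy as np

def attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product attention over a sequence.

    X          : (seq_len, d_model) -- one vector per element of the sequence
    Wq, Wk, Wv : (d_model, d_head)  -- learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # every element scores every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the whole sequence
    return weights @ V                              # each output mixes all positions

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
out = attention(X, Wq, Wk, Wv)
print(out.shape)   # (6, 8): one vector per position, each informed by all six
```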
Another property of biological computation is self-maintenance against entropy. Living systems exist far from thermodynamic equilibrium. They maintain their organization by continuously expending energy. A cell isn't a static structure. It's a process that keeps rebuilding itself, replacing components, repairing damage, holding its boundary against dissolution.
This maintenance happens in continuous time. A bacterium doesn't pause between computational steps. It's always metabolizing, always persisting, always working to stay the same while the universe pushes toward disorder. The computation and the persistence are inseparable.
A third property is temporal existence. Biological systems endure through time in a way that current AI systems don't.
Consider what happens when you close this book and come back tomorrow. For you, time passes. You sleep, wake, experience duration. For a language model, nothing happens between prompts. The conversation resumes as if no interval occurred. You could wait a year; from the model's perspective, no time passed. The system doesn't wait. It doesn't persist through duration. It processes sequences, but sequences aren't the same as experienced time.
This difference might seem subtle, but I think it matters. Biological systems have temporal existence. They endure. They carry their history forward not just as stored information but as lived duration. An organism doesn't just contain a record of its past; it has been there, continuously, through the intervening time.
A fourth property is developmental history. Biological systems aren't assembled; they grow. A brain develops through interaction with an environment, shaped by experience, pruned and reinforced by activity. The computation that happens in a mature brain depends on this history. The structure was laid down through a process that can't be replicated by copying the endpoint.
These properties—self-dissimilarity, self-maintenance, temporal existence, developmental history—aren't incidental features of biological computation. They shape what biological intelligence can do. They're part of why biological understanding has the character it has.
Modern AI systems learned from biological data. This is literally true. Language models trained on text written by humans. Protein folding models trained on structures shaped by evolution. Weather models trained on patterns produced by atmospheric physics. The data carries traces of the systems that generated it.
Transformers are particularly good at extracting these traces. Because attention lets every element in a sequence weigh every other element, relationships across scales—self-dissimilar scales—can be learned from examples. Patterns of coherence that biological systems developed over evolutionary time get internalized by models trained on biological data.
This is genuine learning, not mere mimicry. A language model that can write coherent essays on topics it never saw explicitly has extracted something real about how coherence works. A protein model that predicts structures with experimental accuracy has learned something real about what evolution selected for. The patterns are there in the data, and the models found them.
This is worth pausing on, because it changes an old argument. John Searle's Chinese Room—the thought experiment I mentioned in the last chapter—imagined someone following a rulebook to produce Chinese responses without understanding Chinese. The point was that rule-following isn't understanding.
But modern AI systems don't have rulebooks. There's no lookup table, no explicit mapping from input to output. The "rules," if you can call them that, are distributed across billions of parameters shaped by training. The system learned to produce coherent responses without anyone specifying how.
This doesn't prove understanding exists. But it means Searle's argument doesn't straightforwardly apply. The question has changed from "can rule-following constitute understanding?" to something harder: "can learned pattern-matching constitute understanding?" We don't have a settled answer.
So AI systems carry traces of biological organization. They internalized regularities produced by living systems. They learned from life. At the same time, these systems remain strangely weightless. They do not persist through time. They do not maintain themselves. Between interactions, nothing happens.
These are observations, not criticisms. AI systems are what they are. The question is what kind of systems they are, and what we can learn about intelligence and life by studying the similarities and differences.
For most of human history, intelligence appeared only in living systems. Every robust example of understanding, reasoning, and learning inhabited an organism that also maintained itself, developed through time, and persisted through duration. We didn't need to distinguish these properties because they always came together.
Now they're coming apart. We have systems that exhibit some properties associated with intelligence—pattern recognition, coherence maintenance, generalization—without exhibiting others—self-maintenance, temporal existence, developmental history. The bundle is unbundling.
This unbundling clarifies questions we couldn't ask before. Which properties of intelligence depend on which underlying conditions? What does biological computation contribute that other forms of computation don't? What exactly did AI systems learn from biological data, and what did they miss?
I don't have complete answers, but I think the right approach is to take both sides seriously. Computation is more powerful than Turing's model alone suggests. Life really is computational in von Neumann's sense. And biological computation has organizational properties that matter for understanding what intelligence is and how it works.
I want to be clear about what this chapter claims and what it doesn't.
It doesn't claim that computation is the wrong framework for understanding minds. Computation has explained a great deal and will explain more. It doesn't claim that AI systems are fraudulent or that their achievements are illusory. They learned real patterns from real data.
What it claims is narrower: that biological computation has properties current AI systems don't share. Self-maintenance. Temporal existence. Developmental history. Organization across self-dissimilar scales. These aren't incidental features of living systems. They shape what biological intelligence can do.
AI systems learned from the products of biological intelligence—text written by humans, proteins shaped by evolution, patterns generated by physical systems. They internalized regularities that life developed over billions of years. But they don't persist through time the way organisms do. They don't maintain themselves against entropy. Between prompts, nothing happens.
This leaves us with a live question rather than a settled answer. If intelligence once came bundled with life, and that bundle is now coming apart, which pieces matter for what?
The next chapter looks at intelligence in places we weren't supposed to find it: cells, tissues, organisms without nervous systems. Not as metaphors for human cognition, but as systems that already learn, remember, and pursue goals. What shows up there will complicate the story further—and start to clarify what we mean by understanding in the first place.