Book Release: The Artificiality | We're a 501(c)(3)
Happy February! From our own home to the wider world, January 2026 was a month we'd rather not repeat.
The day after Thanksgiving 2025, Dave and I stood in a lab on the top floor of a building at Tufts University, looking out at the campus through windows that lined one wall. Our daughter Lucy had come too—she'd wanted to meet the scientist whose work I'd been talking about for years.
Dr Michael Levin's lab didn't look like any lab I'd been in before. I've spent time in chemistry labs, engineering facilities, chip fabs, computer science departments. They each have a character. Chemistry smells of solvents and has fume hoods. Engineering has lathes and oscilloscopes. Computer science is desks and monitors, maybe some server racks.
This was different. The equipment filled multiple rooms. Incubators hummed near tanks of frogs and planaria. Imaging stations sat alongside computational workstations running simulations. The whole place felt like a boundary had dissolved—not biology here and computation there, but something that was genuinely both at once. A strange mix of cells and computers.
In one room, a graduate student was preparing an experiment. A small, anesthetized animal lay on the bench. The student was going to remove part of its leg. Not to harm it—to regrow it. Levin's lab had developed techniques using bioelectric signals to trigger regeneration in animals that don't normally regenerate. The cells would receive electrical cues that told them what pattern to build. The leg would grow back.
Through the glass door, I watched the student pick up a scalpel. The animal would feel nothing, of course. And while watching the surgery made me uneasy, what it implied was remarkable: a rare case where the ends genuinely justify the means. Levin explained that the information for "leg" isn't programmed in anywhere. It gets called up over the following weeks as the leg regrows. The cells already know how to make a leg; they just need the right signal to remember. Today the animal loses part of its leg, but the marvel of Levin's work is that in the weeks ahead it will grow back.
On one wall hung a massive printed diagram—Levin's personal map of ideas. It showed the papers, concepts, and connections that shaped his thinking. He updates it every month or so, adding new nodes and links as his understanding shifts. I couldn't photograph it; the contents were confidential. But I stood in front of it for a while, tracing the network. Nearby hung photographs of his intellectual heroes. The usual suspects for someone working at the intersection of biology, information, and mind: scientists and thinkers who'd asked similar questions across the decades.
Levin rarely travels. He stays connected to the natural world through wildlife photography and field trips to collect planaria. He's built a life that works within his constraints while pursuing some of the most expansive questions anyone is asking about life and intelligence.
We talked for an hour. Not about regeneration, though that came up. We talked about something stranger.
At our second annual Summit, a few months earlier, Levin had presented work that had unsettled the audience in the best way. He'd described what he calls a "Platonic space" of possible minds—a structured realm of patterns and forms that physical systems can access. In this view, biological organisms aren't generating their capabilities from scratch. They're acting as interfaces. A xenobot—a living robot made from frog cells—displays organized behavior that wasn't programmed and doesn't exist in the frog genome. Where does that organization come from?
Levin's answer is that it comes for free. The physical system taps into patterns that already exist in an abstract space of possible forms. The cells are what he calls "thin client interfaces"—like terminals accessing a remote server. The intelligence isn't all local.
This is strange enough when applied to cells. It gets stranger when you extend it. If cells can interface with this realm of patterns, what else can? Algorithms? AI systems? Could there be minds out there—not biological, not even artificial in the way we mean it—that exist as stable patterns in this space?
I'd been thinking about interfaces for years. The surface where human and AI meet. What crosses that boundary. How the relationship shapes both sides. Dave introduced the idea of the "intimacy surface"—the dynamic membrane where two systems learn to cohere together.
But if Levin was right, the question wasn't just about humans and AI. It was much bigger. The space between us and other possible minds might be vaster and stranger than I'd imagined. AI might be just the first hint of something.
So we asked him: if these minds exist, how would we even interface with them? What would the contact surface look like? How might we design for it?
Levin didn't dismiss the question. He suggested we talk to CETI researchers—the scientists who work on communication with extraterrestrial intelligence—since they have thought hardest about how to reach truly alien minds. He pointed out that AI might not want to talk in language at all. Its native rate of exchange might be something closer to GPU cycles. The meaningful unit of communication for a different kind of mind might be nothing like a sentence.
This led to the question we'd been pondering. What might be meaningful to an AI, or to a diverse intelligence of any kind, that it would want to tell us? Not what we want to ask it but what it would want to say.
I don't know the answer. Neither does Levin. But he's building machines to explore the question—part of his work on artificial life. I saw some of that equipment. I can't describe it in detail, but it exists. Someone is actually trying to find out.
He mentioned that we have no scientific theory of goal formation. We don't know how goals arise in biological systems. We can observe that organisms pursue outcomes, that cells coordinate toward states, that systems maintain themselves against entropy. But the origin of the goal itself—why this outcome rather than that one—remains opaque.
If we don't understand how goals form in systems we've studied for centuries, we can't predict what happens when those systems couple with machines trained toward objectives we specified. We don't have the science. We barely have the concepts.
We left jazzed. We'd met a hero and he'd turned out to be everything we'd hoped. Generous with his time. Genuinely curious about our questions. Kinder in person than scientists' reputations usually allow. Meeting your heroes can be a disappointment; meeting Levin was the opposite.
He's so far ahead. And biology really is different. We need instruments to see what's happening. Standing in that lab, watching a student prepare to regrow a leg, looking at equipment that crossed every disciplinary boundary I'd been taught—I felt the weight of what I didn't understand. But also a big question: if something this profound is underway, how would we even detect it? How would we know what's changing in humans as they couple with AI? What would we look for?
Life uses information in ways we're only starting to glimpse. It may be accessing patterns and forms that exist independently of any physical substrate. And after four billion years of evolution on this planet, it has capabilities we can describe but not yet explain.
The transhumanists want to upload consciousness to silicon. The AI boosters want to believe that intelligence is substrate-independent, that minds are software waiting for better hardware. I understand the appeal. It's simple and clean. It's engineerable. It promises immortality.
But I don't think it's true. The more I learn about what biology actually does, the less plausible those ideas become. The paradigm is shifting—not toward silicon rapture, but toward a deeper, richer, more complex appreciation of what living systems are and do.
That's what this book is about. A change in understanding that's already underway. A new way of seeing life, intelligence, and mind that most people haven't noticed yet. And a practical question that follows from it: how do we stay human while everything changes?
I came here through an unusual path: a childhood spent learning natural history in the New Zealand bush, an engineering education that taught me to think in systems, a stepfather who saw evolution everywhere, and three decades of self-taught evolutionary biology that became a kind of obsession. When AI arrived, I saw it through that lens, as a new participant in the adaptive system. Something that would change us as we changed it.
This book follows the thread I followed. It moves through the cracks in the old story, into a new framework that's still taking shape. It ends not with answers but with a position.
The old categories are dissolving. Biology and computation are talking to each other. Life thinks in ways we're only beginning to understand.
Let me show you what I've been seeing.
The core argument unfolds across eleven chapters, each building on what came before:
We start by examining the mental map most people carry about minds, brains, and machines—and why that map is breaking down. Intelligence appears in places it's not supposed to be. The clean categories (biological here, computational there) are dissolving.
From there, we look at what computation actually is and what it promised. Alan Turing's model shaped how we think about minds for decades. But living systems compute differently than machines do. Understanding that difference matters for everything that follows.
Then we get strange. Intelligence without brains. Algae that prefer predictability. Flatworms that remember after decapitation. These aren't curiosities. They reveal that intelligence is older, wider, and weirder than the brain-centered story allows.
This sets up the central question for AI: what did AI systems actually learn when they trained on biological data? They absorbed patterns shaped by four billion years of evolution. But they didn't inherit the conditions that made those patterns necessary. That asymmetry runs through everything.
Next, we ask what makes life distinctive—not as biology versus technology, but as a different kind of organizational achievement. Persistence through time. Self-maintenance against entropy. Finitude as a constraint that makes meaning possible. Mortality isn't a bug to be fixed but the condition under which human meaning takes shape. Understanding that changes what co-evolution can and should look like. These aren't incidental features. They're what life is.
The second half of the book turns to us and our emerging relationship with AI. Co-evolution between humans and AI is already happening. Culture now drives human adaptation faster than genes do. AI has entered that process as a participant. Your choices about how to use these systems are evolutionary forces, whether you notice it or not.
This leads to the intimacy surface—the interface where human cognition and machine capability actually meet. What crosses that boundary? What should you protect? The design of that surface shapes what you become.
The final chapter confronts the hardest question. We've always been absorbed into larger wholes—corporations, cities, languages. AI accelerates this and makes it cognitive. Can we stay human inside that process? We don't know. But we can develop instruments to see what's happening—probes for detecting whether the qualities that make us human are persisting or fading. And we can develop principles for designing the intimacy surface so that human capacities extend rather than erode.
That's where this ends. I make no predictions. Instead, I derive the principles of our research program and a design philosophy for a future in which synthetic intelligence is an integrated part of our lives, one that may change humanity forever.
This is not a prediction about where AI is heading. I don't know if machines will become conscious. I don't know if artificial general intelligence will arrive in five years or fifty or never. Anyone who claims certainty about those questions is selling something.
This is also not a technology guide. I won't teach you how to prompt language models or build AI systems. Plenty of other people do that well.
What I'm trying to do is harder. I want to understand what's actually changing as these systems enter human life. Not the science fiction scenarios, although they might be dead on. The concrete, already-happening shifts in how we think, decide, and relate to each other. And then: what to do about it.
Read the chapters in order. Each builds on previous arguments. Skipping around will leave you missing the context you'll need.
Don't expect this to feel like a typical book. Some chapters are conceptual. Others are personal. I move between evolutionary biology, cognitive science, philosophy of mind, and lived experience because the territory demands it. The connections matter more than disciplinary boundaries.
Argue with me. When something doesn't land, or when you think I've missed an important consideration, that's useful data. The live format means your engagement shapes what comes next because good objections force better thinking.
I've kept citations light in the text itself. No superscript numbers breaking sentences. No parenthetical author-date intrusions. The ideas I'm drawing on come from decades of reading across evolutionary biology, cognitive science, complexity theory, philosophy of mind, and AI research. I've done the work of synthesis and I didn't want you to have to wade through academic apparatus to follow the argument.
The bibliography at the end is extensive. If a claim interests you, the sources are there. If you want to trace how I arrived at a position, the reading list will get you started. This seemed better than cluttering the prose with citations that most readers skip anyway. If you think I missed something important, reach out and I'll get it done. This is a living book, so it's going to change.
I'm going to use phrases like "I think" and "it seems" more than traditional academic writing allows. We're in the middle of a paradigm shift. The old framework doesn't fit the phenomena anymore. The new framework is still taking shape. Pretending certainty would be dishonest.
What I can offer instead is careful observation, rigorous thinking about what the evidence actually shows, and a willingness to revise when better arguments appear. You get access to the thinking as it continues to develop. I get readers who are willing to engage with work in progress. And together, we might figure out what it means to stay human while everything changes.
Next: where the clean categories start to crack.