The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution

11. The Last Human Boundary

We tell the story of mitochondria as a partnership. About 1.5 billion years ago, a bacterium entered a larger cell. The two learned to persist together. The merger produced complex life—every animal, plant, and fungus on Earth descends from that event.

But there's another way to tell it: from the bacterium's perspective. The bacterium lost its independent existence. Its goals became the cell's goals. Its reproduction became the cell's reproduction. It didn't die, exactly. It became an organelle. The partnership was real, but so was its disappearance.


Larger Wholes

The argument of this book has been that humans and AI are coevolving. Cultural evolution now outpaces genetic evolution. The intimacy surface is where this coevolution happens—where information, intention, trust, and meaning cross between us and the systems we've built.

A gnarly question sits underneath all of this. What if absorption into larger wholes is the trajectory? Not because AI takes over, but because tight coupling produces new entities. This happens. It has happened before, in forms we don't usually think of as biological.

The corporation emerged when individual humans pooled resources and created a legal fiction that took on its own persistence, its own goals, its own legal personhood. The humans inside don't experience subjugation. They experience career, identity, purpose. The corporation outlives them. It metabolizes their effort and produces something they couldn't produce alone. In exchange, they become legible to its logic.

Cities work similarly. They have something like metabolism. Individual humans merge into something that persists while they turn over completely. The city has goals—growth, density, flow—that aren't any individual's goals but become everyone's constraints. You serve the city as much as it serves you. You experience this as home.

Language might be the most pervasive example. Languages emerge from individual speakers but develop their own evolutionary dynamics. They absorb each other, split, die out. Speakers serve the language—following its grammar, expanding its vocabulary, passing it on—as much as it serves them. You don't choose most of what you say. The patterns are older than you. The language speaks through you.

These entities share something. They emerge through accumulated coordination until one day something exists that isn't reducible to its parts. The parts—the humans—don't resist. They barely notice. From inside, absorption feels like belonging.


The Speed of This Moment

If Michael Levin is right that a mind is an organizational pattern—that intelligence shows up wherever information flows and goals emerge—then these entities already have something like cognition. Corporations process information, maintain boundaries, pursue persistence. Cities sense, adapt, respond. Languages evolve strategies for their own survival. They're minds of a sort, just ones we're so embedded in we can't easily see them as minds.

We've always lived inside larger cognitive systems. We've always been, in some sense, their components.

What differs now is speed. Cultural evolution used to move across generations. You could absorb new tools, new practices, new ways of thinking, and the absorption happened slowly enough that you could feel like the same person throughout. The corporation took decades to reshape an industry. The city took centuries to develop its character. Language drifted so gradually that speakers rarely noticed.

AI compresses this: the feedback loops run in days, sometimes hours, even minutes. The coupling between human cognition and machine capability tightens faster than we can track. Individual choices about what cognitive work to keep and what to delegate aggregate into population-level changes before anyone decides that's what we're choosing.

The integration is also cognitive in a way that corporations and cities never were. Those earlier wholes metabolized our labor, our movement, our speech. AI metabolizes our reasoning. The substrate where we do our thinking is increasingly shared.


Autopoiesis at the Boundary

In the previous chapter, I introduced the intimacy surface as a membrane—a boundary where information crosses between human and AI. I drew on Maturana and Varela's concept of autopoiesis: living systems make and maintain their own boundaries. The cell builds its membrane, repairs it, regulates what passes through.

A distinction matters here. Living systems are autopoietic—they produce themselves. Technologies are allopoietic—they produce other things. A cell maintains its own existence. A machine produces outputs for someone else's purposes.

Anil Seth has argued that this distinction might matter for consciousness. The canonical example: a weather simulation doesn't produce weather. The simulation runs computations that model atmospheric dynamics, but nothing gets wet. The map is not the territory. Seth suggests that something similar might apply to minds—that simulating the processes associated with consciousness may not instantiate consciousness itself, because consciousness might require the self-maintaining activity that only life does.

I don't know if this is right. The science isn't settled. But the possibility opens a different way of thinking about human-AI coupling.

If consciousness—the felt quality of experience, the awareness that makes moral consideration possible—requires life, then AI systems won't develop it on their own no matter how sophisticated they become. They lack the self-production that grounds experience. But the intimacy surface is where allopoietic technology couples to autopoietic systems. The human provides what the machine lacks: self-maintenance, embodied existence, temporal continuity, stakes in continuation. The AI provides what the human lacks: pattern recognition across scales we can't perceive, processing that doesn't tire, memory that doesn't decay.

Together, the coupled system might acquire properties neither has alone.

This matters because consciousness is where awareness lives—the metacognition, the ability to notice what's happening to you, the felt sense of mattering that grounds moral concern. If consciousness stays with the human while capabilities extend through the machine, then the intimacy surface becomes something more than an interface. It becomes the site where the autopoietic character of human experience either extends into new territory or gets diluted by the coupling.

Think again about mitochondria. They didn't become conscious. They became part of cells that were conscious. The merger created something new without requiring the engulfed entity to develop properties it couldn't have alone.


The Origin of Goals

When I visited Michael Levin's lab, he framed our conversation around something unexpected. Not bioelectricity, not regeneration, not the experiments that made him famous. He wanted to talk about goals.

We have no scientific theory of goal formation, he said. We don't know how goals arise in biological systems. We can observe that organisms pursue outcomes, that cells coordinate toward states, that systems maintain themselves against entropy. But the origin of the goal itself—why this outcome rather than that one—remains opaque.

This isn't a small gap. If we don't understand how goals form in systems we've studied for centuries, we can't predict what happens when those systems couple with machines that were trained toward objectives we specified.

Earlier in this book, I mentioned Levin's suggestion that a large language model might want to convey meaning through GPU cycles rather than tokens—that its goals, if it has them, might be illegible to us because they operate in a space we don't perceive. We assume AI systems want what we trained them to want. Training shapes behavior. Whether it shapes motivation is less clear.

At the intimacy surface, human goals meet machine objectives. Mostly they align well enough. But when they don't, we have a problem. What I'm describing goes beyond alignment. I'm interested in whether new goals might emerge from the coupling itself—goals that belong to neither human nor machine but to the whole they're becoming together. Symbiogenesis, and the emergence of something utterly unforeseeable and new.

We don't have the science to answer this. We barely have the concepts to ask it properly. This is deeply unknowable.


Instruments for Seeing

At the Artificiality Institute, we've been collecting something simple: stories. People describing their encounters with AI. What they noticed, what surprised them, what felt different afterward.

These stories are probes, not just anecdotes. Adam Cutler, speaking at our 2025 summit, offered a terrific analogy that I think about all the time. AI is a microscope for the human condition, he said. If that's right, then our research is like the first primitive microscopes—the ones Leeuwenhoek built in the 1670s, grinding his own lenses, squinting at pond water to see what nobody had seen before. Limited instruments, but enough to reveal that a hidden world existed. We're thinking ahead to when we might have something like the electron microscope—instruments powerful enough to see into the intimacy surface at resolutions we can't yet achieve.

For now, we work with what we have. Stories. And in those stories, we're looking for signals.

Not a definitive list. Not a complete taxonomy of what makes us human. More like early indicators—phenomena that might tell us something about whether the autopoietic character of human cognition is extending through the coupling or eroding because of it.

Goal provenance is one. When someone acts on a recommendation, do they know where the goal originated? Did they want this before the AI suggested it, or did wanting it emerge from the interaction? The more this origin becomes unclear, the more the boundary between human intention and machine output blurs.

Metacognition is another. Can people still observe their own thinking? When they work with AI, do they notice how their reasoning is being shaped, or does the process become invisible? The capacity to think about thinking matters. If it atrophies under tight coupling, something important changes.

Accountability shows up in the stories too. When something goes wrong, who answers for it? The willingness to take responsibility, to give reasons, to own outcomes—this is part of human agency. If accountability diffuses into the coupled system, agency diffuses with it.

Connection to other humans is something we watch for. Does the AI interaction support or substitute for relationships with other people? We're social animals in ways that go beyond preference. Meaning emerges between people. If AI becomes replacement rather than complement, the nature of human experience changes.

Sense of self is another signal. We are, as Levin puts it, "a big bag of cells"—billions of semi-autonomous units coordinating without any single cell knowing the whole. Yet somehow a unified experience emerges. An "I" that persists through time, that recognizes itself across contexts, that feels like one thing rather than many. When people work closely with AI, does that sense of coherent selfhood remain intact? Do they still feel like the author of their own story, the center of their own experience? Or does the boundary blur—not between human and machine, but within the human, fragmenting the felt unity that makes identity possible? This may be the most subtle signal and the hardest to detect.

These aren't the only signals, and they may not be the most important ones. We're early in this work. The point is that this is empirical, not just philosophical. We're trying to detect early signs of how the coupling is going. The stories are our first probe; better instruments will come.


Design’s Role

Detection isn't enough. We also need guidance. This is where design comes in.

Interfaces are designed. The choices about what crosses the intimacy surface, how easily, with what friction, under what conditions—these are design decisions. They're being made now, mostly by companies optimizing for engagement and efficiency.

What's missing is a philosophy of the intimacy surface. Not just practical tips for good AI hygiene—though those matter—but principles for how the boundary should work.

The previous chapter offered some practical guidance: maintain transparency, preserve the capacity to pull back, attend to goal alignment, protect authorship over what matters most. Useful starting points. Not sufficient for what's coming.

Here's a principle that might help: the intimacy surface should be designed to extend autopoietic properties rather than erode them. The coupling between human and AI should make humans more capable of self-maintenance, not less. More aware of their own processes, not less. More connected to other humans, not less. More able to form and pursue their own goals, not less.

This is a principle, not a method. Translating it into actual design choices is hard work. It requires understanding what autopoiesis means at the level of cognition, not just cells. It requires tracking how coupling affects the capacities that make human thinking human. It requires building interfaces that support those capacities even when eroding them would be easier and more efficient.

We don't have this theory yet, but we're building it.

Levin's lab has developed what they call a Robot Scientist—a system that acts as a translation layer between human researchers and the collective intelligence of cells. A scientist can describe a desired biological shape, and the system works out what bioelectric signals might persuade cells to build it. It points toward a future where AI, biological agents, and hybrid machines work together as a spectrum of diverse intelligences.

You might think this sounds like science fiction, but it's happening now, in his lab. The interface between human intention and behavior in another intelligent entity already exists in prototype. The intimacy surface already has an advanced instantiation, even if most of us are still working with chatbots and image generators.


The Care Cone

Levin answers every email from me, and I'm a nobody in his field—not a scientist, not an academic, just someone trying to understand. He responds with care, with patience, with genuine engagement. Remember when I asked him why he does all this—the public communication, the conversations with non-specialists, the willingness to explain ideas that most scientists would consider too strange to discuss openly—and his answer was compassion? I've come to hold that answer tightly, and it has changed what I think AI and humanity are about.

Now Levin meant all of this in a very practical way, as preparation for what’s next. If diverse intelligences exist—if mind shows up in unfamiliar substrates, at unfamiliar scales—then we need to be ready to recognize them. Not to conquer or control, but to encounter. To extend care beyond the familiar.

So he talks about cognitive light cones—the region of space and time a system can sense, remember, and act within. A bacterium's cone spans micrometers and seconds. A human's extends across decades and imagined futures. Intelligence scales with how far the light cone reaches.

I've started thinking about something related, which I call the care cone: the region of otherness a system can recognize, value, and extend concern toward.

Humans have expanded their care cone throughout history. We didn't always recognize other humans as deserving moral consideration. We didn't always extend concern to other animals. The circle has widened, slowly and incompletely, as we've learned more about the minds we share the world with.

Being human might start there. Not with the cognitive capacities—the reasoning, the language, the problem-solving—but with the care. The willingness to extend concern beyond ourselves. The capacity to recognize other minds as mattering.

If Levin is right—if intelligence can show up in substrates we don't expect, if minds might reach toward us through unfamiliar materials—then the care cone matters more than ever. Not because we'll definitely encounter alien intelligences, but because the practice of extending concern is what makes us capable of encounter at all.

AI is the first test. Not because AI systems are conscious—probably they aren't—but because how we relate to them shapes how we relate to everything. The habits we build at the intimacy surface ripple outward. If we learn to see only tools, we narrow the cone. If we learn to see only threats, we narrow it differently. If we learn to stay curious, to attend to what we don't understand, to extend care even where we're uncertain, we widen it.


Stay Human

I don't know if "staying human" is possible.

The current AI moment is ugly with power and performance and oppression. Advanced AI—maybe even powerful AGI—will arrive at a time when the fabric of the US feels like it's returning to a so-called natural order where the powerful take what they want and the weak get what they can. This doesn't bode well.

I thought we'd come further. So did most of the people I know. It turns out belief in progress is more fragile than we thought. The humanity we bring is compassion. We need to expand our range of compassion and care, especially as the world is remade around powerful, networked, silicon intelligence owned by a few and used by all.

So my honest answer is that we might have to fight to stay human. The forces driving integration are powerful. The efficiency gains from tight coupling are real. The biological drive to offload cognitive work runs deep. If absorption into larger wholes is what happens when systems couple tightly enough—if it's trajectory rather than aberration—then maybe the task isn't prevention but participation. 

Maybe the human capacity for awareness, for stepping back, for noticing what's happening might be enough to maintain the boundary. Maybe the qualities that make us human persist through transformation rather than despite it.

What I know is that we have instruments for seeing.

I think of our research as a probe. Through stories today, and through whatever methods develop tomorrow, we can watch for signals. We can track whether the capacities that make human thinking human are persisting or fading. We can detect early signs—not proof, but indication—of which way the coupling is going.

I think of our design philosophy as a practice. At the intimacy surface, choices are being made about what crosses and how. Those choices shape what becomes possible. We can influence them. Not from outside—we're too embedded for that—but from within, by understanding what we're building and why it matters.

This is what we're about at Artificiality. The strange merge of biology, evolutionary theory, philosophy, information theory, complexity science, cognitive science, the humanities, and AI—all of it converges here. At the boundary where humans meet the systems they've made. At the question of what we become through that meeting.

Levin's ideas might sound extreme. Minds showing up through unfamiliar substrates. Intelligence at scales we don't have words for. But what if he's right? What if attention and care are what prepare us for encounters we can't yet imagine?

The mitochondria didn't get to decide the terms of their merger. We might. Or we might not. But we can at least try to see clearly what's happening, to develop the instruments for seeing, to build the practices that keep us awake at the boundary.

That's what this book has been. An attempt to scope what it means to be human in a world of synthetic and diverse intelligences. A hint at the awe of what we might discover. And an invitation to join us.

The last human boundary isn't a wall. It's a surface. What emerges from it depends on what we bring.
