Book Release: The Artificiality | We're a 501(c)(3)
Happy February! From our home to the broader world, January 2026 is a month we'd rather not repeat.
Every morning I open a conversation with an AI system. I describe what I'm working on. I ask questions. I push back on answers that seem wrong. The AI responds, adjusts, tries again. By the end of the session, I've produced something I couldn't have produced alone.
This interaction has a boundary. I'm on one side. The AI is on the other. Information crosses between us. The boundary isn't a wall. It's more like a membrane—selective, dynamic, responsive to what passes through it.
I've started calling this the intimacy surface. The term comes from thinking about what makes some AI interactions feel productive and others feel hollow. The difference isn't just capability. It's about what crosses the boundary and how that crossing changes both sides.
This chapter is about that surface—what it is, what crosses it, and why its design matters for what we become.
Before getting into mechanics, I want to step back.
For as long as there have been humans, there have been tools. Fire reshaped our bodies and social lives. Language reshaped memory and coordination. Writing externalized thought. Print reorganized knowledge. Clocks changed how time itself was felt.
None of these made us less human. Each altered what it meant to be human by changing what we could rely on, what we had to practice, and where effort was required.
Humanity has never been a fixed essence preserved by resisting change. It has been an ongoing achievement, produced through tools that extend our capacities while demanding new forms of judgment in return.
AI belongs in that lineage. It doesn't mark the first time cognition has been externalized. What distinguishes this moment is that intelligence itself—explanation, synthesis, articulation—has become cheap. Capabilities that once required time, training, and social standing are now available on demand.
When that happens, the locus of human value shifts. Not because humans are diminished, but because the environment has changed.
Humberto Maturana and Francisco Varela, two Chilean biologists, developed a concept in the 1970s that helps here. They called it autopoiesis—self-production. A living cell makes and maintains its own boundary. The membrane isn't imposed from outside. The cell builds it, repairs it, regulates what passes through. The boundary is part of what makes the cell a cell.
This idea has a mathematical form. Karl Friston, whose free energy principle and active inference we met in the chapter on Minds Without Brains, describes how a Markov blanket—a statistical boundary between a system and its environment—gives the cell's boundary a mathematical description. Everything inside the blanket can only know what's outside through the states at the boundary. Your skin works this way. Sensory receptors at the surface translate external conditions into internal signals. You don't experience the world directly. You experience what crosses your boundary.
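For readers who want the idea in symbols, here is a minimal sketch in the standard notation of the Markov blanket literature, not the book's own: the blanket condition says that internal states and external states are conditionally independent given the states at the boundary.

\[
p(\text{internal} \mid \text{external}, \text{blanket}) = p(\text{internal} \mid \text{blanket})
\]

In words: once you know what is happening at the boundary, knowing the outside world adds nothing about the inside. Everything the cell "knows" about its environment is carried by the blanket states.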
When I interact with an AI, a boundary forms between us. The conversation itself becomes the membrane. My prompts cross into the system. Its responses cross back. What I choose to share, how I phrase my questions, what feedback I provide—these shape what the AI can know about me. What it generates, how it responds, what patterns it reinforces—these shape what I take away.
The intimacy surface is where the capacities I just described are either preserved or eroded.
Start with information. I tell the AI what I'm working on. I share drafts, ideas, problems I'm stuck on. The more context I provide, the more useful the responses become. The AI builds a model of my project, my style, my goals.
This is useful. It also creates something: a record of my thinking, encoded in a system I don't control. The information that crosses doesn't disappear. It becomes part of how the AI responds.
Then there's intention. I come to the conversation with purposes. I want to clarify my thinking, generate options, check my reasoning. The AI doesn't have intentions the same way—it predicts likely continuations based on patterns—but its outputs shape what I do next. My intentions meet its predictions. Something emerges from the collision.
Trust crosses the boundary too. Early in a working relationship, I check everything. I verify claims, question reasoning, maintain skepticism. Over time, if the AI proves reliable, I check less. I start to trust. This efficiency gain is also a vulnerability. Misplaced trust means errors propagate. The boundary becomes more permeable as trust increases.
Finally, meaning crosses. When I work with an AI on something that matters to me—writing this book, for instance—the interaction isn't just functional. It becomes part of how I understand my own project. The AI's responses shape my thinking. My thinking shapes its responses. Meaning gets constructed in the exchange, not delivered from one side to the other.
Tight coupling brings fluency. The more an AI knows about me, the more smoothly our interactions go. It anticipates my needs. It speaks in my vocabulary. The conversation flows.
Tight coupling also blurs authorship. When I write with AI assistance, which ideas are mine? The question sounds philosophical. It has practical stakes. If I can't distinguish my thinking from the AI's suggestions, I lose track of what I actually believe. The boundary that defines me as a thinker becomes unclear.
I've noticed this in my own work. After a long session, I sometimes can't remember which phrases I wrote and which the AI proposed. Usually this doesn't matter. Sometimes it does. When the words express my values or commitments, I want them to be mine. Fluency can become a kind of self-loss.
There's also the question of goals. I come to the conversation with purposes. The AI was trained with objectives—predicting text, following instructions, being helpful. These goals aren't identical. Usually they align well enough. Sometimes they don't.
Michael Levin's insight applies here: to work with an intelligent system, you have to be willing to be changed by it.
This sounds abstract. It's practical advice. If you approach AI as a tool serving fixed purposes, you'll miss what it can offer. The interesting possibilities emerge when you let the interaction reshape your thinking—when you follow unexpected suggestions, consider perspectives you wouldn't have generated, allow your goals to change in response to what you learn.
The same principle applies in reverse. The AI changes through interaction too. Within a conversation, it adapts to your patterns. Across conversations, usage data shapes future training. The influence flows both ways. As our friend and accomplished AI designer Josh Lovejoy points out, “meaning is always a negotiation.”
This bidirectional influence is what makes the relationship potentially symbiotic. Both sides can benefit. Both sides can grow. The key word is "can." Symbiosis isn't guaranteed. Parasitism is also possible—one side extracting value while the other is depleted.
I think a lot about what tips the relationship toward symbiosis rather than parasitism. One factor is transparency about what crosses the boundary. When I share information with an AI, I should know where it goes, how long it persists, who can access it. Current systems are often opaque. The boundary is permeable but illegible.
Another factor is maintaining the capacity to pull back. Tight coupling feels good when it works. When it stops working, I need to be able to disengage—to revert to doing things myself, to remember how I thought before the AI was involved. Preserving independence takes effort. The efficiency drive pulls toward more integration, not less.
A third factor is attention to goal alignment. Whose purposes is the system serving? An AI trained to maximize engagement might subtly encourage dependence. An AI trained to be helpful might reinforce whatever I already believe. The goals embedded in training shape the relationship in ways that aren't always visible.
A fourth factor is knowing which work you want to remain yours—the work bound up with your values, your identity, your authorship.
Esther Perel, who writes about human relationships, has a phrase that applies here: the space between.
Healthy intimacy isn't merger. It's connection that preserves difference. Two people in a good relationship remain distinct individuals. They influence each other, shape each other, grow through their connection. But they don't collapse into sameness. The space between them is where the relationship lives.
AI relationships need something similar. Tight coupling without distinction produces dependence. No coupling at all wastes the potential. The goal is connection that enhances both sides while preserving what makes each side valuable.
This means maintaining boundaries even as those boundaries become more permeable. It means knowing what you bring that the AI doesn't have: embodied experience, continuous existence through time, values that emerged from a life actually lived, the capacity to care about outcomes in ways that aren't reducible to optimization.
It also means recognizing what AI brings that you don't have: pattern recognition across scales you can't perceive, memory that doesn't decay, processing that doesn't tire, access to knowledge you haven't acquired.
The intimacy surface is where these different capacities meet. The design of that surface shapes what the relationship becomes.
Your choices about how to work with AI aren't just personal preferences. They're contributions to an emerging culture.
The practices you develop, the boundaries you maintain, the ways you preserve your own agency—these spread through imitation and teaching. They become part of how your community relates to AI. The intimacy surface you create is a small part of a much larger surface forming across the entire culture.
This is where co-evolution becomes concrete. Not in policy papers or ethics frameworks or long-term forecasts. Here in the accumulation of small encounters that train your habits and expectations as surely as they train the model's weights.
The intimacy surface is where everything this book has argued about comes to ground. Finitude meets a system that doesn't run out of time. Embodied intelligence meets pattern recognition at inhuman scale. The need for other humans meets a system that can simulate dialogue without owing anyone anything. Authorship meets fluency that makes the question of origin hard to track.
None of this is settled. The surface is forming now, shaped by design choices and usage patterns and cultural expectations that are still in motion. What it becomes depends on what we build into it and what we demand from it—and on whether we remain conscious of the encounter or let it fade into background.
The final chapter confronts the hardest question directly. We've always been absorbed into larger wholes—corporations, cities, languages. AI accelerates this and makes it cognitive. Can we stay human inside that process? We don't know. But we can develop instruments to see what's happening—probes for detecting whether the qualities that make us human are persisting or fading. And we can develop principles for designing the intimacy surface so that human capacities extend rather than erode.
The question isn't whether AI will change human life. It already has. The question is whether we participate in that change with awareness or drift through it on autopilot.