Book Release: The Artificiality | We're a 501(c)(3)
Happy February! From our home to the broader world, January 2026 is a month we'd rather not repeat.
Before asking what to preserve, we need to be clear about what we are.
Humans are finite. We know life will end. We don't know when. This strange combination—certainty without schedule—gives weight to everything we do. Choices matter because time runs out. Commitments cost us alternatives we won't get back. Procrastination has consequences because the window will close.
Humans are vulnerable. We get sick. We weaken. We break down. Our self-maintenance against entropy is constant and imperfect. A virus, an accident, a mutation in a single cell—any of these can end everything. This vulnerability isn't a flaw in the design. It's the condition of being a living system that must keep rebuilding itself to persist.
Humans are becoming. We exist in continuous time, carrying history forward as experience, not as stored data. What happened to you at seven years old is still shaping how you respond to criticism at forty. The continuity is lived, felt, accumulated in ways that can't be separated from the body that lived it. And we're not static within that continuity. We develop, transform, grow into versions of ourselves that didn't exist before.
Humans feel. Not just emotions, though those matter. We have valence—a basic sense of things going well or badly, experienced from inside. This isn't computation that could run on any substrate. It's tied to having a body that can be hungry, cold, injured, satisfied. The felt sense of how things are going is the foundation on which more complex emotions and judgments are built.
Humans suffer, sometimes by choice. We do not minimize discomfort at all costs. We accept strain, effort, and pain when they serve development, meaning, or commitment. Relief only exists in contrast to difficulty. Mastery requires resistance. Value is shaped by what it costs to achieve. This willingness to endure—rather than eliminate—difficulty is not irrational. It is a feature of a system that learns through consequence and transformation, not optimization alone.
Humans are spatial reasoners. Before we explain the world, we move through it. We learn cause and effect through action—by pushing, reaching, falling, recovering—not by running internal simulations. These bodily encounters teach us relationships like near and far, effort and resistance, sequence and consequence. Abstract thought grows out of this substrate. Even our most conceptual ideas rely on spatial structure: foundations, paths, boundaries, balance. Language names these relationships, but it does not create them. We understand first by navigating, then by describing.
Humans are embodied intelligences. Thinking is distributed across a body that senses itself from the inside as well as the outside. Interoception carries information—tension, ease, hunger, fatigue—that does not pass through the same cognitive filters as conscious reasoning. We often know before we can articulate why. This is not bias-free knowledge, but it is grounded knowledge, shaped by a system that must regulate itself continuously to stay alive. We learn through exposure and consequence, not detached simulation. The body keeps score long before the mind explains.
Humans need other humans. This isn't preference. It's structure. We don't generate meaning in isolation. Meaning emerges between people who depend on each other, argue with each other, inherit problems from each other, leave unfinished work for each other. A brain raised without human contact doesn't develop into a human mind. The social world isn't an environment we happen to live in. It's a constitutive condition of becoming who we are.
Humans are irreducibly individual. Each of us faces our own death, sees from our own perspective, makes our own choices. No one else can occupy your position in the world. This singularity exists alongside our social nature without contradiction—we are formed by others, and yet each of us remains the only one who will live this particular life.
Humans are curious. We reach toward what we don't understand. Not because we're forced to, but because the unknown pulls at us. The mountain is there, so we climb it. The question is open, so we pursue it. This drive to know, to explore, to find out what happens next—it predates any practical benefit and often ignores practical cost.
Humans imagine. We hold in mind what doesn't exist. We project possible futures, construct alternatives to the present, envision what could be built or changed or tried. This capacity to think beyond current reality is how we plan, how we hope, how we create. It's also how we worry. The same faculty serves both.
Humans initiate. We act from internal motivation, not only in response to external pressure. We begin things. We take on projects that no one assigned. We persist when persistence serves no immediate reward. This capacity for self-starting—call it will, gumption, drive—is the engine that makes everything else move.
Humans reach beyond themselves. We have peak experiences. We feel connection to something larger—a cause, a community, a sense of meaning that exceeds individual survival. This isn't mysticism or wishful thinking. It's a documented feature of human experience that Maslow called transcendence. We seek more than continuation.
Humans sit with what isn't yet clear. We encounter the ambiguous, the unformed, the not-yet-sayable. We don't just process information that arrives already structured. We participate in the structuring. We make sense of situations that don't come with labels, and the sense we make becomes part of the situation.
Humans remain accountable. When someone asks why we did what we did, we can respond. We can give reasons, even bad ones. We can take ownership of choices, even mistaken ones. We ask what a good life is. We hold each other responsible. We expand our circles of concern—or we fail to, and that failure is itself a moral fact. This answerability is authorship. It's the difference between being an agent and being a process.
I list these not because they're unfamiliar but because they're easy to forget. When you spend enough time thinking about what AI can do, the distinctiveness of what humans are starts to blur. The blur is dangerous. It makes it harder to see what's at stake when these systems enter our lives.
For most of human history, intelligence was scarce. If you wanted an explanation, you found someone who knew. If you wanted a plan drafted, you learned to draft plans or hired someone who could. Cognitive labor took time, training, and social access. The capacities that made someone valuable were bundled together: knowledge, judgment, articulation, care.
AI unbundles them. Explanation is now cheap. So is articulation. So is the appearance of reasoning. What remains scarce is different: the capacity to know which explanation matters, to stand behind a judgment, to care about the outcome.
Consider what happens to agency when suggestions are everywhere. AI systems propose next steps, draft plans, surface options. The work shifts from generating possibilities to choosing among them—and choosing well requires knowing what you actually want, which requires having done the slow work of figuring that out. Agency moves upstream, toward the formation of goals rather than their execution.
Responsibility gets harder to locate. When outcomes emerge from models, prompts, defaults, and feedback loops, the temptation is to treat results as things that happened rather than things someone chose. But diffuse causation doesn't eliminate accountability. It just makes accountability require more effort to claim.
Meaning changes texture. AI systems generate plausible narratives. They articulate reasons fluently. But they have no stake in which narrative gets lived by, or in what it costs to live by it. When content is abundant, meaning has to come from somewhere else—from commitment that carries consequence, from choices that close off alternatives.
Connection resists the shift entirely. AI can participate in dialogue, but dialogue isn't the same as relationship. Relationships are built from mutual obligation. Someone can let you down. You can let them down. That structure of accountability and care doesn't scale through better language models. It requires people who owe each other something.
Humans are finite, vulnerable, embodied, social, curious, and accountable. These aren't incidental features. They're what billions of years of biological evolution—and millennia of social life—produced.
When intelligence becomes abundant, these characteristics don't become less important. They become more so. Agency moves upstream. Responsibility must be actively reclaimed. Meaning takes on a different texture when content is cheap. The capacities that matter most are precisely the ones that resist automation.
Of everything on this list, finitude is foundational. It sits underneath the others. Without the certainty that life will end, vulnerability would be inconvenience rather than condition. Choice would lack weight. Commitment would cost nothing. Curiosity would have no urgency. Meaning would have no stakes.
The next chapter takes finitude seriously—not as a limitation to overcome but as the structure that makes everything else possible.