I started writing about AI a decade ago with a sense that something important had changed. I couldn't name it precisely. The language available at the time—disruption, automation, existential risk—didn't capture what I was noticing. It was subtler than that. Intelligence seemed to be showing up in new places, distributing itself in ways that didn't match the categories I'd inherited.

Back then I was in a kind of romance with Silicon Valley. The people building these systems seemed to be asking the right questions, even when their answers made me uneasy. I wanted to believe that the same ingenuity that created the technology would figure out how to steer it well.

That romance is well and truly over. What replaced it isn't despair. It's something harder to name—a clearer view of the forces at play and a conviction that the default trajectory serves power more than it serves people. The breathless hype, the apocalyptic warnings, the promises of post-scarcity: these all share a common feature. They treat the future as something that will happen to us rather than something we are making together, choice by choice, interface by interface, conversation by conversation.

I didn't want to become an activist. The word still doesn't fit. But I've come to see that waiting for better answers from the people building these systems is its own form of passivity. What we're trying to build now is a community—people who want to make sense of this together, who reject both the hype and the doom, who believe that how we talk about AI shapes what AI becomes.

I wrote this book with AI assistance. I want to be direct about that because it taught me something important.

The process felt like partnership in a way that earlier versions of this technology did not. I could think out loud with Claude, push against ideas, test framings, ask for what I was missing. Sometimes I took the suggestions. Often I rejected them. What I learned is that frustration with AI is usually frustration with yourself. The machine reflects your own lack of clarity back at you with unnerving precision.

This might sound like a case for offloading more. It isn't.

Writing forces organization. When I couldn't explain what I meant to Claude, it was because I didn't know what I meant. That friction was useful. The temptation was to let the model smooth over what I hadn't figured out. Resisting that temptation was the work.

Staying author of your own mind turns out to be a practice, not a position. It requires noticing when you've stopped thinking and started accepting. It requires catching the moment when fluent output substitutes for genuine understanding. The technology makes this harder to notice, not easier.

The honest version of what AI taught me about writing is this: it can help you say what you already think more clearly. It cannot think for you. And when you let it try, you can feel something important slipping away.

What do I believe now that I didn't believe when I started?

I now see human intelligence as one kind among many. The idea of diverse intelligences changed how I think about everything from cells to ecosystems to machines. Intelligence isn't a single spectrum with humans near the top. It's a space of different solutions to different problems, operating at different scales, with different values embedded in the process.

I see the human need for other humans more clearly. We are social animals in a way that goes beyond preference. Meaning emerges between us, not inside us. This feels obvious when I write it, but it has real implications for how we should think about AI. No synthetic system, however fluent, can substitute for the relationships that make us who we are. The question isn't whether AI can replace human connection. The question is whether we'll design systems that support it or erode it.

I'm much more interested in morality now than when I started. How humans form their values, how they express them together, how they expand their circle of concern as their knowledge expands—this is the human project. What I've watched with alarm is how that circle might contract under current conditions. The power structures, the incentives, the culture of the technology industry—all of it pushes toward optimization rather than expansion, efficiency rather than care.

I'm not religious. Nature and natural phenomena are my sources of awe and wonder. But I now see that natural phenomena are stranger than I imagined. Minds ingressing through unfamiliar substrates. Coherence emerging at scales we don't have words for. The world is weirder than the mechanistic picture I inherited, and AI is part of what revealed that weirdness.

I constantly think about what Martin would make of this moment.

He would reject the science fiction framing immediately. The Terminator scenarios, the uploading fantasies, the robot servants—he'd wave all of it away as lazy thinking dressed up in cultural anxiety. He'd want to start from first principles. What do these systems actually know that we don't? How are they "telling" us? What happens in the countless interactions, and how can we see it?

Once, years before any of this, he asked me if I'd accept a pig kidney if I needed one. I said of course—it does the same thing. He agreed. But he pointed out that most patients he talked to, people who actually needed kidneys, were repulsed by the idea. Dirty animals. Religious prohibitions. Ethical objections of various kinds.

I realized then that I have a functionalist streak. If something works, if it does what it needs to do, the substrate matters less to me than it does to most people. That intuition runs through this book. I don't think consciousness requires carbon. I don't think intelligence requires a body. I don't think understanding requires lived experience.

But I also think these positions are less settled than either side of the debate admits. Martin would have wanted to sit with the uncertainty rather than resolve it prematurely. He would have been interested in machine consciousness not as a thing to prove or disprove but as a question that reveals how little we understand consciousness in the first place.

He understood something else instinctively. Evolution isn't a force that happens to organisms. It's a process organisms participate in, reshaping the conditions of their own development as they go. The loop closes on itself.

We are in that loop now with AI. We made these systems. They are remaking us. And the terms of that remaking are still open, still contested, still ours to influence if we choose to try.

When I look for hope, I don't find it in the promises of the industry. I find it in specific people doing specific work.

James Evans at the University of Chicago studies how scientists actually discover new things. Not how we mythologize it, but how ideas move, combine, stall, and occasionally break through. On our podcast, we asked him whether scientific progress is really slowing down or whether we're simply measuring the wrong things.

His research suggests that AI doesn't replace creativity. It changes the space of possibilities, making combinations visible that humans wouldn't arrive at on their own and helping them think in ways they wouldn't have thought alone.

What gives me hope is seeing this play out in real work. Scientists using AI to notice patterns they genuinely couldn’t see before. Not because the machine is smarter, but because it reshapes how attention and exploration are distributed. James doesn’t talk about lone geniuses or silver bullets. He talks about teams, incentives, diversity of thinking, and the conditions that allow new ideas to survive long enough to matter. 

Our summit gathered people who want to make sense of this together. Researchers, designers, artists, skeptics, believers—all of them grappling with the same questions from different angles. The conversations didn't resolve anything. But they demonstrated that people hunger for this kind of sensemaking. They want to think together. They want to belong to something larger than the default narrative.

The fact that we need each other is itself a source of hope. The problems we face are collective. No individual, however brilliant, will solve them alone. No technology, however powerful, will solve them without us. These are cooperation problems. Cooperation is hard. It's also the only thing that has ever worked at this scale.

I know what I don't want readers to take from this book.

I don't want you to think this is all inevitable. The AGI scenarios, the mass unemployment, the extinction risks—these are not physics. They are stories, told by people with particular interests, shaped by assumptions that deserve scrutiny. The future is not written.

I don't want you to think the technology is neutral. It amplifies whatever system you put it into. If the system is built on extraction and short-term optimization, the technology will accelerate that. If the system is built on care and long-term thinking, the technology could accelerate that instead. The machine doesn't decide. We do.

I don't want you to think you have to choose between wonder and worry. This is one of the most profound changes to humanity's status and self-understanding since the Enlightenment. It is worth taking seriously, which means holding both the awe and the concern without collapsing into either.

What I hope you take instead is a different way of seeing.

You are part of AI's development whether you notice it or not. Every time you use these systems, you contribute—not just technically, through feedback loops, but culturally, through the normalization of certain practices and the foreclosure of others.

The question is not whether you'll participate. The question is whether you'll participate with awareness.

I want you to have different conversations. With friends who dismiss this as hype. With colleagues who embrace it uncritically. With family members who are anxious and don't know why. The quality of these conversations matters. They are where culture gets made.

I want you to join a community of people who are trying to figure this out together. Not a community with answers—a community with better questions. We are building that at Artificiality. You're invited.

I want you to stay author of your own mind. Not in the sense of rejecting assistance. In the sense of noticing when you've stopped thinking. In the sense of taking responsibility for what you believe and why. In the sense of refusing to let fluency substitute for understanding.

We are at a boundary now. On one side, the world we inherited: human intelligence as the measure, human meaning as the ground, human finitude as the condition of everything that matters. On the other side, something we don't have good names for yet. A world where intelligence shows up in unfamiliar forms. Where meaning gets made in collaboration with systems that don't share our constraints. Where the boundaries of self and other become questions rather than givens.

I want to close by returning to what we are.

Finite. We know we will die. We don't know when. This gives weight to what we do.

Vulnerable. We break down. We get sick. A virus, an accident—any of these can end everything.

Becoming. History carried forward in bodies. We develop, transform, grow. Not static.

Feeling. Things go well or badly, experienced from inside, tied to bodies that can be hurt and healed.

Social. Meaning emerges between people. A brain raised without human contact doesn't develop into a human mind.

Individual. Each of us faces our own death, holds our own perspective, makes our own choices. Irreducibly singular despite being constitutively social.

Curious. Reach toward what we don't understand. The mountain is there, so we climb it.

Imaginative. Hold in mind what doesn't exist. Project possible futures.

Self-initiating. Act from internal motivation. Begin things. Persist without external reward.

Transcendent. Connection to meaning beyond individual survival. Peak experiences. Something larger.

Tolerant of ambiguity. Sit with the unformed. Participate in sense-making before structure exists.

Accountable. Give reasons. Take ownership. Hold each other responsible. Remain the authors of what we do.

None of this changes because AI exists. All of it becomes easier to forget.

The task is not to protect some imagined pre-technological humanity. Humans have always changed themselves through tools. The task is to remember what we are while we change, to carry forward what matters even as new capabilities arrive, to remain participants in our own transformation rather than subjects of someone else's optimization.

What comes next is up to us.

