The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution

2. The Clean Categories

Most people carry around a mental map of how minds work. The map isn't usually explicit. It operates in the background, shaping intuitions about what's possible and what isn't. But if you asked someone to draw it, it would probably look something like this:

Intelligence lives in brains. Human brains are the most intelligent, followed by other animals in rough order of how similar they are to us. Consciousness—the felt experience of being someone—happens inside skulls. It's private, interior, and tied to the biological machinery that produces it. Computers can do impressive things, but they don't actually understand anything. They process symbols according to rules. They compute, but they don't think. AI is a tool, like a very sophisticated calculator. Useful, sometimes impressive, but fundamentally different from minds.

This map has a certain tidiness. Everything has its place. Brains here, computers there. Biology in one box, technology in another. Mind as a phenomenon that emerged late in evolution, reached its peak in humans, and remains confined to the carbon-based systems that evolved to produce it.

The map isn't crazy. It captures something real about the differences between a laptop and a person. It reflects genuine intuitions about what it's like to be conscious versus what it's like to be a thermostat. It served well enough for building institutions, funding research, and organizing education. Neuroscience studies brains. Computer science builds machines. Cognitive science sits somewhere in between, borrowing from both but not quite belonging to either. Philosophy asks the questions nobody else wants to touch.

For most of the twentieth century, this arrangement worked. It produced knowledge. It built technologies. It answered some questions and generated better versions of others.

But the map contains assumptions that got treated as facts. And those assumptions are now breaking down.


The Hierarchy of Minds

Start with the idea that intelligence forms a ladder. At the top, humans. Below us, the great apes. Then other mammals, birds, fish, insects, descending into creatures so simple they barely count as having behavior at all. This hierarchy feels natural. It matches our intuitions about who matters and who can be safely ignored.

The ladder model has a long history. Aristotle arranged living things on a scala naturae, a graded scale from lower to higher. Medieval thinkers elaborated it into a great chain of being that placed humans just below the angels and God. Even after Darwin, something like the ladder persisted. Evolution became a story of progress, with humans at the current peak. Intelligence was the measure, and we held the measuring stick.

The problem is that evolution doesn't work this way. Darwin's insight was precisely that there's no ladder—only a branching tree, with every living species equally evolved, each adapted to its own circumstances. A bacterium isn't a failed attempt at being human. It's a spectacularly successful solution to the problem of being a bacterium.

But the ladder intuition proved hard to shake. It got built into how we study minds. For decades, comparative psychology focused on asking whether other animals could do what humans do: use tools, recognize themselves in mirrors, learn language, deceive others. The tests were designed around human capacities. Animals that passed got upgraded on the ladder. Animals that failed got left below.

This approach systematically missed what other animals actually do. Bees communicate the location of food through dance. Birds cache thousands of seeds and remember where they put them. Octopuses solve problems using a nervous system distributed across eight semi-autonomous arms. These aren't deficient versions of human intelligence. They're different kinds of intelligence, shaped by different evolutionary pressures, solving different problems.

The hierarchy assumption distorted AI research too. For decades, the goal was to replicate human cognition—to build machines that reasoned the way we (thought we) reasoned. This meant symbolic logic, explicit rules, knowledge represented in forms humans could inspect and understand. It meant intelligence as something that looked like a philosophy seminar: abstract, verbal, disembodied.

The machines that resulted could prove theorems and play chess. They couldn't recognize faces, walk across a room, or understand a simple story. The things humans find easy turned out to be hard. The things humans find hard turned out to be easy. This was called Moravec's paradox, after the roboticist Hans Moravec. But as Arvind Narayanan has recently argued, the paradox was never empirically tested. It may reflect which problems AI researchers chose to work on, not any deep truth about intelligence. Ignore the tasks that are easy for both humans and machines, ignore the tasks that are hard for both, and of course what remains looks like an inverse relationship.

This matters because the paradox shaped expectations. It suggested that reasoning and logic were the easy part—get those right and the rest would follow. But reasoning in open-ended domains requires exactly the capacities that were supposed to be hard: common sense, embodied knowledge, the accumulated context that biological systems carry without trying. The distinction between "easy" and "hard" problems may have been an artifact of the research program all along. The map was mistaken for the territory, again.


The Brain as Computer

A second assumption embedded in the map is that brains are biological computers. The mind is software; the brain is hardware. Consciousness is what computation feels like from the inside.

This idea has roots in the middle of the twentieth century. In 1936, the mathematician Alan Turing formalized what computation means; in 1943, the neurophysiologist Warren McCulloch, working with the logician Walter Pitts, showed that networks of simplified neurons could perform logical operations. If neurons compute, and if computation can be abstracted away from any particular physical implementation, then minds might be substrate-independent. The same mental software could run on carbon or silicon. Upload your brain to a computer, and you'd still be you.
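
To make the McCulloch-Pitts idea concrete, here is a minimal sketch in Python, assuming nothing beyond their basic insight: a unit that fires when a weighted sum of its inputs crosses a threshold can implement elementary logic. The weights and thresholds are illustrative choices, not values from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts-style threshold unit.
# Weights and thresholds below are illustrative, not historical.

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    return threshold_unit([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # Inhibition modeled as a negative weight.
    return threshold_unit([a], weights=[-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```

Chain enough of these units together and you can, in principle, build any logical circuit, which is why the result felt like a bridge between neurons and computation.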

The brain-as-computer metaphor proved extraordinarily productive. It gave cognitive science a vocabulary: input, output, processing, storage, retrieval. It made mental operations seem tractable. You could study memory as a kind of filing system, attention as a kind of filter, decision-making as a kind of calculation. The metaphor generated experiments, funded labs, filled journals.

It also built the AI industry. If minds are programs, then building a mind is a programming problem. Get the algorithm right and consciousness will follow. This assumption underlies every prediction that artificial general intelligence is just around the corner. It underlies the dream of mind uploading. It underlies the casual confidence that sufficiently advanced AI will inevitably be conscious.

But metaphors can mislead. The brain-as-computer image imports assumptions that may not hold.

Computers have a clean separation between hardware and software. The same program runs on different machines. The algorithm is independent of its physical implementation. This separation is a design choice, imposed by human engineers following Turing's principles. It makes computers useful—you can swap out the hardware without losing your files. But it's not a law of nature. It's an engineering decision.

Brains don't have this separation. You cannot extract the "program" running in a brain and move it to different biological tissue, let alone to silicon. The activity of the brain is inseparable from its material structure. Neurons do more than process signals: they maintain themselves, regenerate their membranes, and adjust their connections based on experience. The computation, if you want to call it that, is bound up with the metabolism, the chemistry, the physical substrate at every level.

Computers operate in sequence-time: first this step, then that step, then the next. The duration between steps doesn't matter. An algorithm is the same algorithm whether it runs in microseconds or millennia. What matters is the sequence, not the timing.

Brains operate in continuous physical time. They cannot step outside the flow. Every moment, they're resisting entropy, maintaining the conditions for their own continued existence. The timing isn't incidental—it's constitutive. Consciousness, whatever it is, seems to flow rather than stutter from state to state.

None of this proves that minds couldn't exist in silicon. But it suggests that the easy confidence of the brain-as-computer metaphor deserves scrutiny. Substrate independence is not a finding; it is an assumption. And, for some, it is the hope on which they pin the immortality of their individual consciousness.


The Location of Consciousness

The map puts consciousness inside skulls. This seems obvious. You're conscious. Your experience happens to you. It seems to be located roughly behind your eyes, or maybe diffused through your head. When the brain dies, consciousness stops. What more is there to say?

Quite a bit, it turns out.

The assumption that consciousness is brain-bound makes it hard to ask certain questions. If a patient is in a vegetative state, are they conscious? The brain is damaged but not dead. Some activity continues. Is anyone home? For years, the answer was assumed to be no. Then Adrian Owen and his colleagues showed that some vegetative patients could respond to commands through brain imaging. Asked to imagine playing tennis, their motor cortex lit up. Asked to imagine walking through their house, their spatial navigation regions activated. They couldn't move or speak, but they could answer yes-or-no questions by imagining different activities.

This didn't prove they were fully conscious. But it cracked the assumption that behavioral unresponsiveness means experiential absence. Consciousness might persist where we can't detect it through ordinary means.

The brain-bound assumption also makes it hard to think about consciousness in other animals. Do dogs have conscious experiences? Most pet owners would say obviously yes. But the scientific establishment was cautious for decades, warning against anthropomorphism, demanding behaviorist rigor. Only recently have researchers felt comfortable attributing rich inner lives to other mammals, to birds, to octopuses. The evidence was always there, but our models made it hard to see.

And the brain-bound assumption makes it nearly impossible to think about consciousness at other scales. Could a cell be conscious? The question sounds absurd. Cells don't have brains. But they do respond to their environment, process information, make decisions about what to do next. If consciousness is about information processing, why would it require a brain? If it's about something else, what is that something else?

These questions don't have easy answers. But the old map discouraged even asking them. It drew a boundary around skulls and declared the interior the only place consciousness could be.


AI as Tool

The final clean category: AI is a tool. We build it, we control it, we use it for our purposes. It doesn't have purposes of its own. It doesn't understand what it's doing. It manipulates symbols according to rules, but the symbols don't mean anything to it. The meaning is in our heads, not in the machine.

This view has a respectable philosophical pedigree. In 1980, the philosopher John Searle proposed a thought experiment called the Chinese Room. Imagine a person who doesn't speak Chinese, locked in a room with a rulebook. Chinese characters come in through a slot. The person looks up the appropriate response in the rulebook and passes Chinese characters back out. To an outside observer, the room appears to understand Chinese. But the person inside doesn't understand anything. They're just following rules.

Searle's point was that computation—rule-following symbol manipulation—isn't sufficient for understanding. A computer running a language program is like the person in the room. It processes symbols but doesn't grasp their meaning. It simulates understanding without having any.
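
As a toy illustration (mine, not Searle's), the rulebook can be nothing more than a lookup table. The sketch below relates symbols to symbols with no representation of what either side means; the phrases in it are invented for the example.

```python
# A toy "Chinese Room": every response comes from a fixed rulebook.
# The entries are invented for illustration; the program relates
# symbols to symbols without any notion of what they mean.

RULEBOOK = {
    "你好": "你好，有什么可以帮你？",   # "Hello" -> "Hello, how can I help?"
    "谢谢": "不客气。",                 # "Thank you" -> "You're welcome."
}

def room(incoming: str) -> str:
    """Look up a response; fall back to a stock phrase ("please say that again")."""
    return RULEBOOK.get(incoming, "请再说一遍。")

print(room("你好"))  # from outside the room, this can look like understanding
```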

The Chinese Room argument has been debated for more than four decades. Critics argue that the person doesn't understand, but the whole system—person plus rulebook plus room—might. Defenders respond that adding more components doesn't add understanding. The argument continues.

But whatever its philosophical merits, the Chinese Room captured an intuition that shaped how people thought about AI. The machines aren't really thinking. They're just very fast rule-followers. They're tools, not minds.

This intuition became harder to maintain as AI systems began doing things that didn't look like rule-following. Deep learning systems don't have explicit rulebooks. They learn from examples, adjusting millions or billions of parameters until they can recognize faces, translate languages, or generate plausible text. Nobody programmed the rules. The rules, if you can call them that, emerged from training.
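
To see the contrast with the rulebook picture, here is a deliberately tiny sketch, nothing like a real deep learning system in scale, in which a single parameter is adjusted to fit examples and no rule is ever written down. The data and learning rate are invented for illustration.

```python
# "Rules emerging from training": one parameter is nudged to fit examples.
# The examples and learning rate are invented for illustration.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with targets

w = 0.0              # the "rule" starts as nothing in particular
learning_rate = 0.05

for _ in range(200):
    for x, target in examples:
        prediction = w * x
        error = prediction - target
        w -= learning_rate * error * x   # adjust the parameter to reduce error

print(round(w, 3))   # ends up near 2.0: a pattern learned, never programmed
```

Scale this up to billions of parameters and layers of nonlinear functions and you have the rough shape of modern deep learning: behavior shaped by data rather than by hand-written rules.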

When a large language model produces a coherent essay on a topic it was never explicitly trained on, something is happening that the Chinese Room doesn't quite capture. It's not looking up responses in a rulebook. It's generating them based on patterns learned from vast amounts of text. Whether that constitutes "understanding" remains contested. But the clean distinction between tools that follow rules and minds that grasp meaning has gotten less clear.


The Productive Paradigm

I've been describing the old map in terms of its limitations. But I want to be fair to it. The clean categories were useful simplifications that enabled progress.

If you want to study the brain, it helps to bracket questions about consciousness and focus on neurons, circuits, and behavior. That's what neuroscience did, and it learned an enormous amount. If you want to build AI systems, it helps to bracket questions about whether they really understand and focus on getting them to perform useful tasks. That's what the AI industry did, and it produced systems of genuine capability.

Disciplines advance by narrowing their focus. You can't ask every question at once. The clean categories created manageable research programs. They allowed specialization, funding, careers, progress.

The problem is that simplifications, if left unexamined, harden into assumptions. The map starts to feel like the territory. The boundaries between disciplines, drawn for practical convenience, start to seem like boundaries in nature. The questions that don't fit any discipline fall through the cracks.

What happens when intelligence doesn't live only in brains? When consciousness isn't cleanly located? When AI systems stop looking like tools and start looking like participants?

The old map doesn't have room for these questions. It doesn't have a place for cells that remember, tissues that have goals, machines that learn from biological patterns. It doesn't have a framework for minds that exist at multiple scales, or for interfaces between humans and AI that reshape both.

That's why the map is changing. Not because someone decided to redraw it, but because the phenomena stopped fitting the boxes.


Where We Are and What Comes Next

So far we've done one thing: looked closely at the map most of us carry without realizing it.

That map treats intelligence as a hierarchy with humans at the top. It treats brains as biological computers running mental software. It treats consciousness as something that happens inside skulls and nowhere else. It treats AI as a tool that manipulates symbols without understanding them.

None of these ideas are crazy. They captured something real. They organized research, built industries, answered questions. But they were always simplifications, and simplifications have a shelf life.

The next chapter examines the deepest assumption underlying that map: that brains are computers and computation is substrate-independent.

Alan Turing's model of computation shaped how we think about minds for decades. It promised that algorithms—abstract sequences of operations—could run on any physical system that reliably distinguishes states. If minds are algorithms, then minds could run on silicon as easily as carbon.

But living systems compute differently than machines do. They maintain themselves. They exist in continuous time. They carry developmental history that can't be separated from their function. Understanding that difference matters for everything that follows—including whether AI systems, trained on biological data, actually learned what life knows or just absorbed its shadows.

