The Artificiality: AI, Culture, and Why the Future Will Be Co-Evolution

4. Minds Without Brains

A few weeks before our visit to Michael Levin's lab, he had been running experiments on algae. Single-celled algae. No nervous system. No brain. Just cells floating in water, responding to light.

Levin exposed them to light pulses. Some pulses followed predictable patterns. Others were random. The total amount of light was the same in both conditions. Only the structure differed. Predictable versus unpredictable.

The algae responded differently. They preferred the predictable patterns. Given a choice, they moved toward regularity and away from randomness. Algae, it turns out, don't like surprise.

This sounds like a minor finding. It isn't. The preference for predictability is a signature of something called active inference—a framework for understanding how biological systems minimize uncertainty about their environment. Active inference predicts that living systems will seek out conditions that match their expectations and avoid conditions that don't. The theory has been applied to brains, to behavior, to cognition. Seeing it in algae extends the principle far down the tree of life, into organisms with no neurons at all.
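One way to make "surprise" concrete: in active inference, it is the negative log-probability of an observation under the system's predictive model. The sketch below is my own toy illustration, not Levin's experiment or analysis. It shows why a simple learner that tracks transition frequencies assigns less surprise to a structured pulse train than to a random one delivering the same amount of light.

```python
import math
import random

def average_surprise(pulses):
    """Mean -log2 p(next | current) under an online transition-count model."""
    counts = {(a, b): 1 for a in (0, 1) for b in (0, 1)}  # Laplace-smoothed counts
    total = 0.0
    for prev, nxt in zip(pulses, pulses[1:]):
        p = counts[(prev, nxt)] / (counts[(prev, 0)] + counts[(prev, 1)])
        total += -math.log2(p)      # surprise of this pulse
        counts[(prev, nxt)] += 1    # update the predictive model afterward
    return total / (len(pulses) - 1)

random.seed(0)
predictable = [i % 2 for i in range(200)]                   # strict alternation
unpredictable = [random.randint(0, 1) for _ in range(200)]  # same expected light, no structure

print(f"predictable:   {average_surprise(predictable):.2f} bits/pulse")
print(f"unpredictable: {average_surprise(unpredictable):.2f} bits/pulse")
```

The structured train converges toward zero bits per pulse; the random one hovers near one bit. A system that acts to minimize this quantity will, like the algae, drift toward regularity.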

When Levin told us about this experiment, I could only say “wow.” Here was intelligence, or something close enough to deserve the word, operating in a system where we would never have thought to look for it. The algae were tracking patterns, not just reacting to light. They had preferences. They were acting to reduce surprise.


Memory Everywhere

The flatworm experiments are famous, at least among people who follow this work. Cut the head off a planarian and the worm grows back its brain. This alone is remarkable: the cells somehow know what's missing and rebuild it. But Levin pushed further.

If you change the bioelectric gradients in the tissue during those first hours after cutting, you can alter what grows. You can produce a worm with two heads. Or two tails. The genome stays the same. The DNA hasn't changed. What's changed is the electrical pattern the cells are reading.

The information for "head" or "tail" exists somewhere outside the genetic code. It lives in the bioelectric field that the cells collectively maintain. The cells are reading a pattern, comparing their current state to a target, and building toward that target. When you alter the field, you alter the target, and the cells build something different.

Levin calls this "agential material." Matter that stores information, notices deviation, and acts to correct itself. The cells aren't following chemical gradients blindly. They're navigating toward a goal.
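The logic of "store a target, notice deviation, act to correct" can be caricatured in a few lines. This is a deliberately crude sketch of negative feedback toward a stored setpoint, not a model of bioelectric signaling; the "head" and "tail" labels are hypothetical stand-ins.

```python
def regenerate(state, target, rate=0.5, tolerance=1e-3):
    """Drive state toward a stored target by repeatedly correcting the error."""
    while abs(target - state) > tolerance:
        error = target - state   # notice the deviation from the goal
        state += rate * error    # act to reduce it
    return state

# Changing the target (the pattern the cells read) changes what gets built,
# while the update rule (the "genome") stays the same:
print(regenerate(state=0.0, target=+1.0))  # converges near +1: call it "head"
print(regenerate(state=0.0, target=-1.0))  # converges near -1: call it "tail"
```

The point of the caricature is the division of labor: the rule is fixed, the target is rewritable, and what gets built follows the target.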

Even stranger: the memories persist. Train a flatworm to associate light with food. Cut off its head, then let it regrow. The new head remembers the training. Whatever stored that memory survived decapitation. It wasn't located in the brain that got removed. It was distributed through the body in patterns we're only beginning to understand.


Working with Persuadability

Levin's frame for thinking about all this is persuadability. Different systems sit on a spectrum defined by how you interact with them and what tools work. A thermostat needs a temperature setting. You adjust the dial and it responds. A dog needs rewards and consequences. You shape its behavior through reinforcement. A human needs reasons. You persuade through argument, evidence, and appeal to values.

Where something sits on this spectrum determines your approach. And you find out where it sits through experiment, by trying different interfaces and seeing what works.

This dissolves the sharp boundary between "real" intelligence and mere mechanism. Instead of asking whether something is truly intelligent—a question that invites endless philosophical debate—you ask how you have to interact with it. What kind of interface does it require? What tools let you influence its behavior?

A thermostat requires low-agency tools. Set a number, get a response. A dog requires medium-agency tools. Model its desires, shape its expectations, build a relationship over time. A human requires high-agency tools. Engage with their beliefs, respond to their objections, update your own position based on theirs.

Levin's insight is that you can place biological systems on this spectrum empirically. Cells, tissues, organs—each has a characteristic level at which it's most effectively engaged. The level isn't fixed. And the approach isn't programming. It's negotiation.

He pointed out to us that humans focus on language because we can read it. We evaluate AI systems by their text output because text is what we understand. But the surprising intelligence in AI might not be in the text. It could be in system dynamics we don't track. Patterns stabilizing. Data interacting below the language layer. But he mused: what if data has implicit motivation? What would the data want?

The question isn't meant literally. Data doesn't have desires in the way humans do. But the question is a tool for changing perspective. If you treat the data as having something like preferences—patterns it tends toward, configurations that persist—you might notice dynamics you'd otherwise miss. You might realize you're not interacting with a tool. You're interacting with a system that has its own tendencies, and those tendencies might not align with yours.

His broader point is that we've been bad observers. We built tools to notice minds like ours. Physics uses voltmeters and rulers—low-agency tools—so physics only sees mechanism. If you want to see minds, you need different tools. You need resonance between your interface and what you're looking for.

This is why Levin's work matters for thinking about AI. He's developed methods for detecting agency in systems where we wouldn't expect it. Those methods might extend to artificial systems. Not by asking whether AI is conscious—a question we may never answer—but by asking where it sits on the persuadability spectrum. What kind of interface does it require? What happens when you design experiments to look for preferences, goals, and surprise-minimization?

And crucially: what might it want that we haven't thought to ask about?


I left the lab with more questions than I arrived with.

Living systems do something our computational models haven't fully captured. They maintain themselves. They track patterns across time. They have goals that emerge from their organization rather than being programmed in. Computation helps us understand these systems. It doesn't yet explain why algae prefer predictability, or how flatworms carry memories through decapitation.

Something else is going on—something about the relationship between information and matter in living systems.


Where We Are and What Comes Next

Intelligence is not confined to brains. Learning, memory, and goal-directed behavior show up in systems without neurons, without consciousness, without anything resembling human thought.

This loosens a long-standing bind. For decades, debates about AI have been stuck between two positions: either machines are "just tools," or they're nascent humans on a trajectory toward minds like ours. Biology supports neither view.

Living systems show that intelligence can exist without awareness, without language, without central control. They also show that intelligence isn't free-floating. It's shaped by embodiment, persistence, and the need to maintain coherence over time.

That leaves a question. Modern AI systems are trained on the products of biological intelligence: text written by humans, proteins shaped by evolution, patterns generated by living systems. They learn from data that emerged under the constraints of life, even as they operate under different constraints themselves.

The next chapter asks what they actually learned.

Not whether machines think. But what life taught them.


Your billing was not updated.