Artificiality Summit 2026: What Can't We Know About AI? The first in a series about our speakers.
The theme of the Artificiality Summit 2026 is Unknowing. We start with five speakers—David Wolpert, Caleb Scharf, Wakanyi Macharia-Hoffman, Gašper Beguš, and Nina Beguš—whose work converges on a single finding: the uncertainty surrounding AI is permanent, not a phase. Wolpert proved it mathematically. Scharf shows life has always explored without knowing where it's headed. Wakanyi brings the philosophical tradition that already understood embeddedness. The Begušes are dissolving the boundaries around human intelligence from both the scientific and humanistic sides. The standard AI vocabulary—alignment, control, safety—assumes a future where we understand enough. These five speakers say that future doesn't arrive.
Most AI conferences are organized around capability. What can AI do now? What will it do next? How do we deploy it, regulate it, profit from it?
We think the more important question is what we can't know about AI—and whether we're honest enough to build a future around that admission.
The theme of the Artificiality Summit 2026, this October in Bend, Oregon, is Unknowing. We’re starting with five speakers whose work, taken together, suggests that the uncertainty surrounding AI isn't a temporary condition we'll engineer our way past. It may be permanent. And the quality of the future we build depends on whether we can accept that and work with it, rather than pretending it away.
Each of these five people—an astrobiologist, an Ubuntu philosopher, a computational linguist, a literary scholar, and a physicist—has arrived at a version of the same finding from a completely different direction. The boundary between knower and known is less stable than we thought. Between human intelligence and animal intelligence. Between biological information processing and artificial information processing. Between the observer and the system being observed. Between prediction and reality.
If those boundaries are genuinely unstable, then the dominant framing of AI—humans on one side, machines on the other, with a clear line between them—is not adequate. These are the people who can tell us why, and what to do about it.
Here's who they are and what I want to ask them.
I want to start with David because his work provides the mathematical floor underneath everything else we'll discuss in October.
David is a professor at the Santa Fe Institute, an IEEE Fellow, and the person who proved the No Free Lunch theorems—the formal demonstration that no optimization algorithm is universally best. If an algorithm excels at one class of problems, it necessarily performs worse on others. Every AI system embodies assumptions that give it strengths and simultaneously guarantee blind spots. This is a mathematical law, not an engineering limitation waiting to be solved.
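For readers who want the formal shape of the claim, here is the core statement from Wolpert and Macready's 1997 paper, lightly compressed (the notation is theirs; the gloss is mine):

$$\sum_{f} P\big(d^{y}_{m} \mid f, m, a_1\big) \;=\; \sum_{f} P\big(d^{y}_{m} \mid f, m, a_2\big)$$

Summed over all possible objective functions $f$, the probability of observing any particular sequence of cost values $d^{y}_{m}$ after $m$ samples is identical for any two search algorithms $a_1$ and $a_2$. Averaged over every problem, nothing beats anything. An algorithm only wins by matching its built-in assumptions to a particular class of problems, and that match is precisely what guarantees it loses elsewhere.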
He also proved, in a 2008 paper, that no inference device within a universe can fully predict that universe. This is a formal disproof of Laplace's demon—the idea that a sufficiently powerful intelligence could know everything. Wolpert showed that even in a classical, finite, non-chaotic universe, complete self-prediction is logically impossible. The result leaves determinism itself intact but slams the door on Laplace-style prediction: even a fully deterministic universe cannot be completely predicted from within.
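The full proofs are technical, but their engine is self-reference. Here is a simplified sketch in their spirit (the simplification is mine; Wolpert's theorems are more general and don't depend on this exact construction):

$$b_{t+1} \;=\; \neg\, D_t\big(\text{"will } b_{t+1} = 1\text{?"}\big)$$

Suppose a device $D$ inside universe $U$ could correctly answer any question about $U$'s future. Define a bit $b$ whose next value is the negation of $D$'s own prediction about it. Whatever $D$ outputs, $b$ does the opposite, so $D$ is guaranteed to be wrong about at least one fact in its own universe. Because $D$ is part of $U$, it can never be excluded from the set of things it must predict.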
Let me say this plainly, because I think its implications are underappreciated. Wolpert proved mathematically that complete knowledge is impossible for any intelligence that exists inside its own universe. That includes us. And it includes any AI we build. No amount of compute, no architectural breakthrough, no scaling law gets around it. The unknowing is structural.
And yet. The AI industry's dominant narrative is convergence toward omniscience—models getting bigger, benchmarks going up, capabilities expanding toward some horizon called AGI. Wolpert's work says that horizon doesn't exist. Every intelligence, biological or artificial, has blind spots baked into the same structure that gives it capabilities.
That's why I start with him. Because if unknowing is permanent, then everything else we discuss—what AI can be, what it means for humans, what kind of world we're building—has to be built on that foundation.
So here's the question I want to open the summit with. The AI industry is spending hundreds of billions of dollars building toward something it calls artificial general intelligence—a system with no blind spots, no domain limitations, universal capability. You’ve proven mathematically that such a system is impossible. Every intelligence that gains capability in one area necessarily loses it in another. So what exactly is the industry building toward?
If Wolpert establishes that unknowing is permanent, Caleb Scharf raises the obvious next question: why does life keep exploring territory it cannot fully understand?
Caleb is an astrobiologist at NASA Ames and a Carl Sagan Medal winner. His book The Ascent of Information introduced the concept of the dataome—the sum total of humanity's externalized information, from Sumerian clay tablets to server farms. His argument is that the dataome is a living system in symbiotic relationship with us, with its own evolutionary imperatives. We think we produce data. He thinks data has been producing us for two hundred thousand years, and AI is its latest move.
His new book, The Giant Leap, makes the case that exploration is a biological imperative. Life disperses. It has been extending itself into unknown territory for four and a half billion years—from ocean to land, from single cells to complex organisms, from Earth's surface toward space. The uncertainty of what lies ahead has never stopped it. In fact, the uncertainty is part of the mechanism. Evolution doesn't work by predicting outcomes. It works by generating variation and letting selection sort it out. The destination is unknown by design.
I find the combination of these two ideas—the living dataome and the biological imperative to explore—genuinely provocative when applied to AI. Going back to The Ascent of Information after having Benjamin Bratton and Blaise Agüera y Arcas at last year's summit is a different experience, because now we can see the dataome's evolutionary momentum in real time. AI-generated text, code, and images are the informational organism expanding into new substrate.
Here's what I want to ask Caleb. If the dataome has its own evolutionary drives, then who is actually building AI—us, or the informational system we're embedded in? If exploration of the unknown is what life does, regardless of whether it can predict the outcome, then the demand to "know where AI is heading before we proceed" may be asking life to do something it has never done and cannot do. And if that's the case, the question shifts from "can we predict what AI will become?" to "can we develop the capacity to navigate without prediction?"
Caleb arrives at embeddedness through biology—humans are inside the dataome, not above it. David arrives at embeddedness through mathematics—no intelligence can fully model the system it's part of. Both are breakthroughs within the Western scientific tradition. Wakanyi Macharia-Hoffman brings the knowledge tradition that never assumed otherwise.
Wakanyi is a PhD researcher at Utrecht University's Inclusive AI Lab, founder of the African Folktales Project, and a global spokesperson for Ubuntu philosophy. Ubuntu—often translated as "I am because we are"—is a relational ontology. It starts from the premise that identity is constituted through relationship. You don't exist first and then connect to others. You exist through connection. The self is not a bounded unit that interacts with an external world. The self is a product of those interactions.
This matters for AI in ways that the current Western discourse mostly hasn't caught up to. The dominant frameworks for thinking about AI—alignment, control, safety—all assume a separation between the human and the system. A human with clear preferences, goals, and values on one side. An AI system that needs to be aligned to those preferences on the other. Ubuntu says that separation is the wrong starting point. If intelligence is relational and identity is constituted through interaction, then the relevant question isn't "how do we align AI to human values?" It's "what kind of relationships are we creating, and what kind of humans and machines will emerge from them?"
Wakanyi has moved past even the concept of community—arguing that community implies a circle with insiders and outsiders, and she prefers the concept of habitat. The inhabitant is constantly shaped by where they are and what they do. This is niche construction described in a different vocabulary. It's also, notably, where Caleb's dataome thesis ends up—humans and their information shaping each other in recursive loops. And it's where Wolpert's recent multi-agent work might point—agents and environments co-constituting each other in ways that exceed individual prediction.
The Western AI discourse is currently trying to reinvent this wheel using complexity science and active inference. Ubuntu has been operating from it for centuries. That doesn't make Ubuntu automatically right about AI, but it does mean there's a mature epistemological tradition available that already assumes embeddedness, already treats identity as relational, and already understands that the knower cannot stand outside the known. The fact that this tradition has been largely excluded from the rooms where AI is being designed is exactly the kind of power problem that another of our five speakers, Nina Beguš, studies.
What I want to ask Wakanyi: if Ubuntu already understood that intelligence is relational and identity is co-constituted, what does that tradition see about AI that the Western frameworks keep missing? And what does it mean that the knowledge system best equipped to think about embeddedness is the one least represented in AI development?
If Caleb opens up the question of why life explores despite permanent unknowing, Gašper Beguš opens up what we might find when we get there.
Gašper runs the Berkeley Biological and Artificial Language Lab and is the linguistics lead for Project CETI, which studies sperm whale communication. He trains generative adversarial networks on raw, unlabeled audio—no grammar, no text, no teacher. Just sound waves. The network has to learn speech representations from scratch, the way an infant does.
Two results from his lab matter here. First, when he compared the intermediate signals inside his neural networks with brainstem responses in human listeners, the patterns were remarkably similar. Nobody designed them to converge. The physics of encoding sound apparently produces similar structures regardless of whether the substrate is biological or silicon. Second, he turned the same interpretability tools on sperm whale codas and found vowel-like spectral patterns that humans hadn't described. The AI technique predicted the structure before biologists had fully characterized it.
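To make the setup concrete, here is a minimal sketch of a GAN that sees nothing but waveforms. The architecture, layer sizes, and training loop are illustrative assumptions, loosely in the spirit of WaveGAN-style models rather than the lab's actual code, and the stand-in data is random noise where the real work uses raw speech recordings:

```python
# Minimal sketch: a GAN learning from raw audio alone -- no grammar,
# no text, no labels. Architecture and hyperparameters are illustrative
# assumptions (loosely WaveGAN-flavored), not the Berkeley lab's code.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Latent vector -> raw waveform, upsampled 4x per layer (16 -> 1024 samples)."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256 * 16),
            nn.Unflatten(1, (256, 16)),
            nn.ConvTranspose1d(256, 128, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(128, 64, 25, stride=4, padding=11, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 1, 25, stride=4, padding=11, output_padding=1),
            nn.Tanh(),  # audio samples constrained to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Raw waveform -> real/fake logit; any speech structure is learned implicitly."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, 25, stride=4, padding=11), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 128, 25, stride=4, padding=11), nn.LeakyReLU(0.2),
            nn.Conv1d(128, 256, 25, stride=4, padding=11), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(256 * 16, 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

# Stand-in batch of "recordings": real work would load raw speech audio here.
real = torch.tanh(torch.randn(8, 1, 1024))

for step in range(3):  # a few illustrative training steps
    fake = G(torch.randn(8, 100))

    # Discriminator learns to tell real waveforms from generated ones.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
    d_loss.backward()
    opt_d.step()

    # Generator learns to produce waveforms the discriminator accepts as real.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(8, 1))
    g_loss.backward()
    opt_g.step()
```

What matters in the sketch is what's absent: no phoneme labels, no transcripts, no rules. Whatever speech representations emerge, they emerge from sound pressure alone, which is what makes the comparison to infant learning, and to brainstem responses, meaningful.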
Together, these findings do something important to the question of what counts as language and what counts as intelligence. If artificial neural networks develop human-like language representations without being instructed to, and if AI tools reveal linguistic structure in whale communication that we missed entirely, then the category "language" looks more like a continuum than a binary. Humans at one point, whales at another, LLMs at another—with structural similarities running through all three.
This matters for the unknowing theme because the claim that AI can't "really" understand language depends on a sharp boundary between human language and everything else. Gašper's work is dissolving that boundary from both directions. AI from one side, animal cognition from the other, and the human in the middle losing the claim to uniqueness not through defeat but through company.
What I want to ask Gašper: if we confirm that animal communication has far more structure than we assumed, how does that reshape how we evaluate what LLMs are doing? And if the same structural patterns keep showing up across biological and artificial substrates—patterns nobody designed—what does that tell us about what we don't yet understand about the nature of language itself?
Nina Beguš—Gašper's spouse and collaborator, also at UC Berkeley—coined the term artificial humanities. Where digital humanities applies computational methods to humanistic materials, Nina inverts that. She uses interpretive, humanistic methods—literary analysis, history, philosophy—to examine AI itself.
Her book Artificial Humanities (which we noted in last year's Institute book award) traces the line from the Pygmalion myth through Eliza Doolittle through the ELIZA chatbot through modern LLMs. Fiction has been rehearsing the encounter with artificial minds for centuries. Each iteration is a story about the moment the created thing exceeds the creator's frame. And each iteration is a story about power—who determines what counts as language, understanding, personhood.
We talked with Nina on our podcast, and her work on power and AI caught my attention because she makes a tired conversation interesting again. I'll be direct—I find most of the AI bias discourse repetitive and shallow. Nina goes somewhere else with it. Her question is: when science starts collapsing the boundary between human and non-human intelligence—which is exactly what Gašper's lab is doing—who controls how that collapse gets narrated? Which humans get to decide what counts as "real" intelligence? The history of drawing those boundaries has always served specific interests. Drawing the line between genuine understanding and mere imitation is a political act that has been dressed up as a scientific one for a very long time.
This connects directly to Wakanyi's point. Ubuntu philosophy has a sophisticated account of relational intelligence and co-constituted identity, and it has been systematically excluded from the discourse on AI. That exclusion isn't accidental. It follows the same pattern Nina traces through centuries of literature—the people who draw the line between real intelligence and everything else are the people who benefit from where that line gets drawn.
I want to ask Nina something that comes out of sitting next to Gašper's work. Every story in her Pygmalion lineage—the myth, Eliza Doolittle, ELIZA, modern LLMs—is a story about the creator being transformed by the creation. Gašper's lab is showing that information physically reshapes whatever substrate processes it. Our own Chronicle research finds that people's cognitive patterns shift measurably the longer they work with AI. So here's what I want to ask her. If the interpreter is being cognitively reshaped by the thing she's interpreting, in real time, then she can't fully see what's happening to her own thinking while it's happening. That's unknowing at the most intimate scale—not cosmological, not mathematical, but personal. Is that what fiction has been trying to warn us about for four hundred years?
There's a reason I started with David and want to end with him. His work on the limits of knowledge is foundational, but it's his current work that has the most direct implications for what AI is about to become.
Two threads matter. The first is his shift from syntactic to semantic information. Claude Shannon's information theory—bits, signal, noise—treats all information as equivalent. A bit is a bit regardless of meaning. Wolpert has developed a theory where meaning emerges naturally from an agent's thermodynamic drive to persist into the future. Information becomes meaningful relative to what an agent needs to survive.
This matters for AI agents, because it suggests that as AI systems develop persistence goals—even rudimentary ones—they will develop something functionally equivalent to meaning. Their own sense of what matters.
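Wolpert's semantic-information work, developed with Artemy Kolchinsky, gives this a formal backbone. A compressed gloss of their definition (the compression and notation here are mine):

$$\Delta V \;=\; V_{\text{actual}}(\tau) \;-\; V_{\text{scrambled}}(\tau)$$

Run the agent forward to a future time $\tau$ twice: once as-is, and once with the correlations between agent and environment scrambled while each is left intact in isolation. $V$ is a viability function, a measure of how ordered and intact the agent remains (in their framework, the negative entropy of its state distribution). Information is semantic exactly to the degree that destroying it costs future viability, that is, $\Delta V > 0$. Meaning, on this account, is not added to information from outside; it is what the pressure to persist extracts from it.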
The second is multi-agent emergence. On our podcast, David argued that the real transformation won't come from a single superintelligent AI. It will come from networks of AI agents interacting through smart contracts—Turing-complete, self-executing agreements—creating emergent behaviors that exceed our mathematical ability to predict. He calls this a "distributed singularity" and says the fixation on individual superintelligence is "retro, so twentieth century." The real risk is emergent computation at a level we provably cannot model, arising from interactions among systems that are individually comprehensible but collectively opaque.
This is niche construction at computational scale. Agents building environments that reshape the agents that built them, in loops we cannot fully predict—because Wolpert's own theorems say we can't. And it could emerge within five years.
Right now the AI world is captivated by OpenClaw—a single agent running on a single laptop, canceling your subscriptions and sending your emails. People are calling it the future. It is a toy. A useful, entertaining, occasionally alarming toy, but a toy. One agent, one computer, one human in the loop who can pull the plug. That's a solvable problem.
What Wolpert is describing is what happens when millions of autonomous agents start transacting with each other through self-executing contracts, modifying each other's environments, generating collective behaviors that no individual agent was designed to produce and no human operator can trace back to a cause. OpenClaw is an agent doing things on your behalf. The distributed singularity is agents doing things on behalf of other agents, at speed, at scale, with emergent properties that exceed the mathematical tools we have available to model them. The gap between those two scenarios is the gap between a campfire and a wildfire. One you control. The other has its own physics.
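No toy can capture what Wolpert is describing, but a sketch can show the shape of the worry: agents whose individual rules are trivial and whose pairwise transactions are fully legible can still generate cascades nobody scheduled. Everything below is my own illustrative toy, with a threshold-and-payout rule standing in for "self-executing contract"; it is not Wolpert's mathematics:

```python
# Toy model: individually comprehensible agents, collectively opaque dynamics.
# An illustrative stand-in for "self-executing contracts," not Wolpert's framework.
import random

random.seed(0)
N = 200
balances = [0.0] * N
# Each agent's "contract": when my balance crosses my threshold,
# split it evenly among three random counterparties.
thresholds = [random.uniform(3, 6) for _ in range(N)]

def fire(i, queue):
    """Execute agent i's contract: pay out the full balance to random peers."""
    payout, balances[i] = balances[i], 0.0
    for _ in range(3):
        j = random.randrange(N)
        balances[j] += payout / 3
        queue.append(j)  # j's contract gets checked in turn

cascade_sizes = []
for step in range(2000):
    # One external transaction enters the system...
    i = random.randrange(N)
    balances[i] += 1.0
    queue, fired = [i], 0
    # ...and contracts trigger contracts until the cascade dies out.
    while queue and fired < 10_000:  # safety cap for the demo
        j = queue.pop()
        if balances[j] >= thresholds[j]:
            fire(j, queue)
            fired += 1
    cascade_sizes.append(fired)

# Most inputs do nothing; a few set off avalanches out of all proportion
# to their size. No single agent's rule predicts which, or when.
print("max cascade:", max(cascade_sizes),
      "mean:", sum(cascade_sizes) / len(cascade_sizes))
```

Every rule here fits in a sentence and every transaction is logged, yet the distribution of cascade sizes is exactly the kind of emergent property you can only discover by running the system. Scale the toy up to millions of heterogeneous agents writing their own contracts and you have the intuition behind the distributed singularity.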
Here's the thread that runs through all five.
Wolpert proves that unknowing is permanent and structural. No intelligence can fully know itself or the system it operates within.
Caleb shows that life has always operated under exactly this condition. It explores anyway. The dataome—our externalized information—has its own evolutionary momentum, and AI is its latest expression. We are inside a co-evolutionary process whose destination we cannot see.
Wakanyi brings the philosophical tradition that already understood embeddedness and relational identity—and asks why the knowledge systems best equipped to think about what's happening are the ones least represented in the rooms where AI gets built.
Gašper reveals that the boundaries we drew around human intelligence—particularly through language—are dissolving. The structural similarities across biological and artificial substrates keep showing up uninvited. The continuum of minds is wider than we assumed, and AI is helping us see parts of it we missed.
Nina insists that we are always inside the system we're interpreting—and that the act of interpretation is shaped by power, culture, and four hundred years of stories we haven't listened to carefully enough.
So here's the question to start with in October. We are building AI systems whose interactions will generate behaviors we cannot fully model. We are discovering that cognition and language exist on continuums that cut across biological and artificial substrates. We know that interpreting these discoveries is shaped by power, culture, and narrative. We have philosophical traditions that already understood all of this, and we've mostly ignored them. And we have mathematical proof that complete self-knowledge is impossible for any intelligence operating inside its own universe.
Given all of that: what kind of world are we trying to build? And do we have the honesty to admit that the most important parts of that question have no answer yet?
We've spent decades talking about AI in terms that assume we'll eventually get the full picture—alignment, control, safety, explainability. Every one of those words assumes a future where we understand enough to manage the outcome. Everything from this group says that future doesn't arrive. So what vocabulary should we actually be using?
We don't have that vocabulary yet. That's what October is for.
The Artificiality Summit 2026: Unknowing takes place October 22–24 in Bend, Oregon. Learn more and register →