Ellie Pavlick: The AI Paradigm Shift
A conversation with Ellie Pavlick, Assistant Professor of Computer Science at Brown University
In this conversation, we explore the foundations of artificial intelligence with Ellie Pavlick, Assistant Professor of Computer Science at Brown University, Research Scientist at Google DeepMind, and Director of ARIA, an NSF-funded institute examining AI's role in mental health support. Her trajectory, from undergraduate degrees in economics and saxophone performance to pioneering research at the intersection of AI and cognitive science, reflects the kind of interdisciplinary thinking increasingly essential for understanding what these systems are and what they mean for us.
Watch/Listen: YouTube, Spotify, Apple
Ellie represents a generation of researchers grappling with what she calls a "paradigm shift" in how we understand both artificial and human intelligence. Her work challenges long-held assumptions in cognitive science while refusing to accept easy answers about what AI systems can or cannot do. As she observes, we're witnessing concepts like "intelligence," "meaning," and "understanding" undergo the kind of radical redefinition that historically accompanies major scientific revolutions—where old terms become relics of earlier theories or get repurposed to mean something fundamentally different.
Key themes we explore:
- The Grounding Question: How Ellie's thinking evolved from believing AI fundamentally lacked meaning without embodied sensory experience to recognizing that grounding itself is a more complex and empirically testable question than either side of the debate typically acknowledges
- Symbols Without Symbolism: Her recent collaborative work with Tom Griffiths, Brenden Lake, and others demonstrating that large language models exhibit capabilities previously thought to require explicit symbolic architectures—challenging decades of cognitive science orthodoxy about human cognition
- The Measurability Problem: Why AI's apparent success on standardized tests reveals more about the inadequacy of our metrics than the adequacy of the systems, and how education, hiring, and relationships have always resisted quantification in ways we conveniently forget when evaluating AI
- Intelligence as Moving Target: Ellie's argument that "intelligence" functions as a placeholder term for "the thing we don't yet understand," always retreating as scientific progress advances and perhaps destined, like ether, to become obsolete once the underlying mechanisms are understood
- Computation vs. Algorithm: Breaking apart these previously unified aspects of intelligence to understand what foundation models actually accomplish and where the meaningful differences between human and artificial cognition might lie
- The Value Frontier: Why the aspects of human experience that resist quantification may be definitionally human—not because they're inherently unmeasurable, but because they represent whatever currently sits beyond our measurement capabilities
- Mental Health as Hard Problem: Why her new institute focuses on arguably the most challenging application domain for AI, where getting memory, co-adaptation, transparency, and long-term human impact right isn't optional but essential
Ellie consistently pushes back against premature conclusions—whether it's claims that AI definitively lacks meaning or assertions that passing standardized tests proves human-level capability. Her approach emphasizes asking "are these processes similar or different?" rather than making sweeping judgments about whether systems "really" understand or "truly" have intelligence.
What emerges is a more nuanced picture of the scientific challenge ahead: not racing to declare AI either triumphant or fundamentally limited, but methodically understanding which aspects of human and artificial cognition align, where they diverge, and which differences actually matter for the questions we're asking. As Ellie notes, we're at the "tip of the iceberg" in understanding these systems—we haven't yet pushed them to their breaking point or discovered their full potential.
ARIA puts this philosophy into practice. Rather than avoiding mental health applications because they're ethically fraught, Ellie is leaning into the difficulty precisely because it forces confrontation with all the hard questions, from how memory works to how repeated human-AI interaction fundamentally changes both parties over time. It's research that refuses to wait a generation to find out whether we've "screwed up a whole generation."