Stay Human: Chapter 2 | Steve Sloman
Helen just published Chapter 2 of Stay Human: "The Right to a Future Tense."
It's only two months until...
The Artificiality Summit 2025!
Join us to imagine a meaningful life with synthetic intelligence—for me, we, and us. In this time of mass confusion, over/under hype, and polarizing optimism/pessimism, the Artificiality Summit will be a place to gather, consider, dream, and design a pro-human future.
And don't just join us. Join our spectacular line-up of speakers, catalysts, performers, and firebrands: Blaise Agüera y Arcas (Google), Benjamin Bratton (UCSD, Antikythera/Berggruen), Adam Cutler (IBM), Alan Eyzaguirre (Mari-OS), Jonathan Feinstein (Yale University), Jenna Fizel (IDEO), John C. Havens (IEEE), Jamer Hunt (Parsons School of Design), Maggie Jackson (author), Michael Levin (Tufts University, remote), Josh Lovejoy (Amazon), Sir Geoff Mulgan (University College London), John Pasmore (Latimer.ai), Ellie Pavlick (Brown University & Google DeepMind), Tess Posner (AI4ALL), Charan Ranganath (University of California, Davis), Tobias Rees (limn), Beth Rudden (Bast AI), Eric Schwitzgebel (University of California, Riverside), and Aekta Shah (Salesforce).
Space is limited—so don't delay!
Three economists we've long admired—Ajay Agrawal, Joshua Gans, and Avi Goldfarb—have given economic form to Steve Jobs' "bicycle for the mind" metaphor, showing how AI changes not just what we can do, but how expertise itself is valued. Their latest work reveals that judgment divides into two kinds: opportunity judgment (seeing where improvement is possible) and payoff judgment (deciding what's worth pursuing once options are on the table).
This distinction matters because AI excels at the first but struggles with the second. AlphaFold predicts millions of protein structures, but humans must decide which few are worth synthesizing in labs. Drug discovery systems propose thousands of molecules in hours, but the constraint becomes choosing which merit clinical trials with limited budgets and regulatory pathways.
AI overproduces opportunity. Humans carry the burden of payoff.
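The asymmetry above can be made concrete with a toy numerical sketch. This is our own illustration, not a model from the economists' work: AI generation is simulated as thousands of cheap candidate options with hidden values, and payoff judgment as the ability to pick the few worth pursuing under a hard budget.

```python
import random

random.seed(0)

# Toy model (illustrative assumption, not from the source): AI "opportunity"
# generation is cheap, so there are many candidates, most of them mediocre.
candidates = [random.gauss(0, 1) for _ in range(10_000)]

# Payoff judgment is the bottleneck: only k options can actually be pursued
# (lab synthesis, clinical trials), so realized value depends on selection.
k = 5

# Perfect payoff judgment: pursue the best k candidates.
best = sum(sorted(candidates, reverse=True)[:k])

# No payoff judgment: pursue k candidates at random.
rand = sum(random.sample(candidates, k))

print(f"top-{k} selection value:  {best:.2f}")
print(f"random selection value: {rand:.2f}")
```

Under this toy model, generating ten times more candidates barely moves the random-selection payoff, while better selection moves it a lot: the value lives in the choosing, not the generating.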
When we read their work alongside our research on lived experience, the picture becomes richer. The economists model these as economic categories, but we observe them as psychological orientations people inhabit when working with AI. Cognitive Permeability shapes opportunity judgment—whether professionals let AI suggestions seep into their reasoning or filter tightly through their own instincts. Identity Coupling emerges most visibly in payoff judgment—can I stand behind this decision as mine? Symbolic Plasticity makes payoff judgment meaningful by reframing outputs into significance for specific contexts and communities.
The deeper insight is organizational. When implementation becomes cheap through AI, productivity gains depend on how judgment is distributed. Many firms channel every AI-generated option upward to senior executives, creating paralysis despite abundance. But organizations that shift decision rights closer to teams—where context, meaning, and accountability align—unlock AI's value through situated judgment rather than drowning in noise.
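The organizational point reduces to simple throughput arithmetic. The numbers below are invented for illustration: if every AI-generated option routes upward to one senior reviewer, review capacity, not generation capacity, caps how many decisions get made.

```python
# Toy throughput sketch (all figures are hypothetical assumptions).
teams = 20
options_per_team_per_week = 50   # cheap AI generation per team
central_review_capacity = 100    # one executive's weekly bandwidth
team_review_capacity = 40        # each team's weekly bandwidth

generated = teams * options_per_team_per_week  # 1,000 options per week

# Centralized decision rights: everything queues for the executive.
decided_centralized = min(generated, central_review_capacity)

# Distributed decision rights: each team judges its own options.
decided_distributed = min(generated, teams * team_review_capacity)

print(f"decided centrally:    {decided_centralized}")
print(f"decided by the teams: {decided_distributed}")
```

In this sketch the centralized firm decides on 100 of its 1,000 weekly options and the rest pile up as "paralysis despite abundance," while distributed judgment clears 800.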
Perhaps the real premium in an AI-abundant world lies not in weighing options, but in the deeper human capacity to decide what should matter at all. This is where judgment becomes inseparable from wisdom, and where human-AI partnership finds its most essential boundary.
Upcoming community opportunities to engage with the human experience of AI.
Foundational explorations from our research into life with synthetic intelligence.
AI is changing how you think. Get the ideas and research to keep you the author of your own mind.