Stay Human Chapter 2 | Steve Sloman
Stay Human: Chapter 2
Helen just published Chapter 2 of Stay Human: "The Right to a Future Tense." The chapter borrows a framework from EU digital rights work—the idea that predictive technologies shouldn't narrow your choices, that you remain the author of who you become. Helen takes that and asks what it actually means when the technology isn't just showing you content but thinking alongside you. She also writes about our kids—continuing the personal journey in this book. There's a lot more of that to come!
Join our free community on Circle to follow the conversation as each chapter is published.
Read it here: The Right to a Future Tense
Don't miss the Artificiality Summit 2026!
October 22-24, 2026 in Bend, Oregon
Our theme will be Unknowing. Why? For centuries, humans believed we were the only species with reason, agency, and the capacity for self-improvement. Then came AI. We are no longer the only system that learns, adapts, or acts with agency. And when the boundary of intelligence moves, the boundary of humanity moves with it.
Something is happening to our thinking, our being, our becoming. If AI changes how we think, and how we think shapes who we become, then how might AI change what it means to be human?
Unknowing is how we stay conscious and make space for emergence.
Becoming is what happens when we do.
For the third time, we welcome Steve Sloman to the podcast—this time to talk about his new book, The Cost of Conviction. Steve's work challenges the dominant assumption in decision research that people primarily act as consequentialists, calculating costs and benefits to maximize utility. Instead, he reveals how many of our most important decisions bypass consequence entirely, guided by sacred values—rules about appropriate action handed down through families and communities that define who we are and signal membership in our social groups.
Steve's work has been important to us since the beginning of our Artificiality journey and we are privileged to have him as an advisor.
And check out our previous conversations with Steve and his lecture at the 2024 Summit:
Subscribe on your favorite podcast platform to catch our upcoming episodes with Blaise Agüera y Arcas, Christopher Summerfield, and Nina Beguš.
Yes, the Artificiality Institute is now recognized as tax-exempt under Section 501(c)(3) of the Internal Revenue Code. And, yes, that means contributions are tax-deductible to the extent permitted by law.
Please consider supporting our work. Our research, publishing, and community depend on donor support.
You can learn more and donate—however small or large—here.
We are building our speaking and events offerings to help spread the word and to sustain the rest of our work, and we could use your support and introductions.
As you know, for the past decade we've researched what happens to people when AI enters their work: not the technology side, but the human side, and how it changes thinking, identity, and judgment. What we see right now: people are working with AI and figuring out the boundaries on their own (when to use it, when not to, what to tell colleagues, what to keep hidden). Meanwhile, leadership is making decisions without real visibility into what's actually happening.
We've turned this into three kinds of engagements:
We're actively growing this work. If anyone comes to mind—someone planning a conference, leading an organization through AI transition, or trying to understand what's really shifting in their culture—we would be grateful for an introduction.
Learn more here.
We've ramped up our video production, especially short videos. Follow us on your favorite channels, and please like, share, and repost to help us spread the word.
AI is changing how you think. Get the ideas and research to keep you the author of your own mind.