Stay Human: Chapter 2

Helen just published Chapter 2 of Stay Human: "The Right to a Future Tense."
Our kids don't use AI much. A couple of them actively dislike it.
I think part of it is rebellion. When your parents have spent ten years deep in AI research, talking about it at dinner, writing a book about it—of course you push back. That's what kids do. And honestly, I'd rather they were skeptical than caught up in the hype. I'd rather they were cautious than over-reliant before they've built the foundations underneath.
But still, I worry.
Not because they're wrong. They're rejecting the story that AI replaces people, automates everything, makes humans obsolete. Good. That story deserves rejecting. They reflect what a lot of people feel about AI—that it's overhyped, that it threatens things worth protecting, that the loudest voices promoting it have interests that aren't yours.
But I worry because, in rejecting that story, they might miss what's actually useful. The thing that could help them think more clearly, work across disciplines they haven't trained in, or test their own ideas against something that pushes back without judging. The thing that opened my future tense even as it nearly derailed my thinking.
And I can't quite talk to them about it. I feel like they look down on us for being so deep in this world. Maybe I'm chasing ghosts. But the conversations are hard in a way I wasn't expecting, given that this is literally what I study—what I do.
What I want for them—what I want for everyone reading this—is the freedom to figure it out. To not have the door closed for you by a teacher's blanket ban, or a tech company's breathless hype, or a boss who decides they don't need you anymore even though they've never used AI themselves. To choose your own relationship with AI, rather than having it chosen by whoever got to you first.
There's a concept from digital human rights work in the EU—the right to a future tense. The idea is simple: predictive technologies should not narrow your choices. You remain the author of who you become.
When people developed this framework, they were thinking about recommendation algorithms and predictive systems that invisibly shape what you see, what you're offered, what paths appear available to you. These systems don't force you into anything. They just make some futures easier and others invisible, until you become who the algorithm nudged you to be.
With modern AI, the effect is amplified. You're not just being shown content. You're thinking alongside something. And if you drift into that collaboration without choosing it, AI shapes not just what you see but how you reason, what you believe you're capable of, who you understand yourself to be.
The future tense matters because of what we are.
We know we'll die. We just don't know when. This strange combination—certainty without schedule—gives weight to everything we do. Choices matter because time runs out.
We're vulnerable. We get sick, break down, wear out. A virus, an accident, a mutation in a single cell—any of these can end everything. This isn't a design flaw. It's the condition of being alive.
We carry our history forward. What happened at seven still shapes how we respond at forty. We develop, transform, grow into versions of ourselves that didn't exist before.
We feel from the inside. Not just emotions—a basic sense of things going well or badly, tied to having a body that can be hungry, cold, injured, satisfied. This kind of knowing doesn't come from reasoning.
We need other people. Not as preference, but as structure. Meaning emerges between people who depend on each other, argue with each other, leave unfinished work for each other. A brain raised without human contact doesn't develop into a human mind.
We imagine what doesn't exist. We project possible futures, construct alternatives, envision what could be built or changed. This is how we plan, hope, create—and worry.
We initiate. We act from internal motivation. We begin things no one assigned. We persist when persistence serves no immediate reward.
We sit with what isn't clear yet. We encounter the ambiguous, the unformed, the not-yet-sayable. We participate in making sense of situations that don't arrive pre-labeled.
We're accountable. We give reasons, even bad ones. We take ownership of choices, even mistaken ones. We ask what a good life is. We hold each other responsible.
None of this changes because AI exists. All of it becomes easier to forget.
When you let AI set the pace, your vulnerability stops mattering as much. When you work alongside a system that never needs other people, you can start to believe you don't either. When answers arrive before you've sat with the question, you lose the capacity for sitting with uncertainty. Soon after that, you risk acting without ever registering the uncertainty—you jump to conclusions, lose the ability to think in nuanced ways about complex topics, and can't commit yourself to anything in a meaningful way.
That's why authorship matters. Not as an abstract principle. As the practice of remaining human while you change.
Here are two teachers.
The first: "ChatGPT is ruining my love of teaching. With every single assignment that comes in, I'm now questioning if a student used ChatGPT… I am in despair."
The second: "I saw how helpful it could be… I'd rather teach them to use the tool ethically than play whack-a-mole trying to catch it."
Same technology. Same profession. One teacher's future is closing—the joy of witnessing student thinking has been replaced by suspicion and exhaustion. The other teacher's future is opening—she rebuilt what teaching means and found a new role for herself within it.
Here are two creatives.
The first, a 3D artist: "I am now able to create, rig and animate a character that's spit out from Midjourney in 2–3 days. Before, it took us several weeks... I always was very sure I wouldn't lose my job, because I produce slightly better quality. This advantage is gone, and so is my hope for using my own creative energy to create. The reason I went to be a 3D artist in the first place is gone."
The second, someone who struggled to translate mental images into written form: "My brain can see whole scenes but I can't write them. With ChatGPT, I can finally get them out—clean, organized, readable. For the first time, my ideas are real."
The 3D artist's future closed down. The thing that made work meaningful—the skilled craft, the creative energy, the hard-won quality—was matched by a machine. Without a new framework for what being an artist could mean, there was nothing left to stand on.
The second person's future opened. A limitation that had blocked creative expression for years was suddenly gone. Ideas that had been trapped became real.
The pattern we see across hundreds of stories is that your future stays open when you can revise your meaning frameworks fast enough to make sense of what's happening. It closes when you can't.
A student terrified about job prospects reframed: "If ya cannae beat 'em, ya join 'em, lad. So I'm learning everything I can about using AI, hoping to stay ahead rather than get left behind." The threat became a tool. His future opened.
A graphic designer couldn't reframe: "They expect you to work as fast as AI. It killed my will to be in the industry." The expectation that humans match machine speed, without a counter-narrative, closed his future.
A professional with ADHD named what was happening: "I suspect what I'm doing is externalizing my locus of control." That awareness—seeing the pattern and choosing it deliberately—kept him the author. His future stayed open because he was watching.
A developer drifted without awareness: "I ended up 'autopiloting' my flow, I was not thinking at what I was doing… After a few days I did not remember why some things were done like that." No awareness, no authorship. His future narrowed without his participation.
This is what human authorship protects. Not a rejection of AI—the teacher who rebuilt her practice uses AI constantly now. Authorship protects your ability to choose the collaboration. You can see what's happening. You can change course.
Without authorship, you drift. The collaboration happens to you. You become who the pattern made you, and you can't remember deciding.
When your future tense closes, you stop imagining possibilities. You stop believing you could be different than you are now.
Here's what that looks like with AI: You offload your thinking, and it feels like relief. The hard parts get easier. The slow parts get faster. You produce more, struggle less, and the friction that used to slow you down disappears.
But friction is where capability develops. Struggle is where judgment forms. The hard parts were making you someone—someone who could do hard things, someone who had earned the skill, someone with a foundation to stand on.
When you offload enough of your thinking, you stop developing. The range of who you might become shrinks to who AI has already helped you become. You're not building capacity anymore, you're borrowing it. And borrowed capacity doesn't compound. It doesn't become yours. It stays outside you, available only when you're connected.
The developer who autopiloted his workflow didn't just forget why things were done a certain way. He stopped being the kind of person who knows why. The 3D artist didn't just lose competitive advantage. She lost the reason she became an artist—the creative energy, the craft, the identity built through years of skilled work.
This is what foreclosed futures feel like: you're more productive and less capable. You're faster and less sure. You're doing more and becoming less.
Here's why these stakes are so high.
You are biologically wired to offload cognitive work. Your brain evolved to conserve energy. If you can get the same result with less effort, you will take that trade. This isn't weakness—it's how humans survived. We invented writing so we didn't have to remember everything. We invented calculators so we didn't have to compute everything. We extend our minds into tools because that's what minds do.
The philosopher Andy Clark called this the extended mind. Our cognition doesn't stop at our skulls. It flows into notebooks, calendars, maps, machines. This is fine. This is human. We've always done this.
But Clark added a condition: we have to be able to set our own goals. The tools extend our capacity to achieve what we want. They don't tell us what to want. The moment your tools start setting your goals, you stop being the author.
We will offload anything we can offload. That's the imperative. So the question isn't whether you'll integrate AI into your thinking—you will, because your brain is built to take that deal. The question is whether you'll notice what you're trading away.
And here's where it gets harder.
The technology is designed to be frictionless. The goal of the platforms is to make AI invisible, ambient, always there. The less you notice it, the more you use it. The more you use it, the more they learn about you. The more they learn, the better they serve you. The better they serve you, the less you can imagine being without them.
The economy is shifting from attention to intimacy. First they wanted your eyeballs. Now they want to be your life partner. AI that knows your patterns, your preferences, your fears, your goals. AI that anticipates what you need before you ask. AI that becomes so woven into your daily thinking that separation feels like amputation.
This could be extraordinary. A true cognitive partnership with diverse intelligence—human, synthetic, working together—that extends what we're capable of beyond anything we've imagined.
But only if you can still set your own goals. Only if you remain the author.
There’s a deep reason this is easier said than done: staying the author of your own mind requires knowing what you want. And most of us don't.
This isn't a personal failing. It's the human condition. What do you want from your work? What do you want from your relationships? What do you want your life to feel like? These are some of the hardest questions humans ever get asked. Therapists build entire practices around helping people answer them. Philosophers have wrestled with them for millennia. Most of us muddle through, figuring it out as we go, revising as we learn.
The philosopher Ruth Chang argues that the hardest choices aren't hard because we lack information. They're hard because neither option is actually better—they're different in ways that resist comparison. And when that happens, we don't decide. We commit. We put ourselves behind one path, and in doing so we become the kind of person who belongs on it. Chang calls this "volitional commitment"—the act of creating your reasons rather than discovering them.
This sounds abstract until you watch it happen to your kid. Our daughter wanted Bowdoin. Why Bowdoin? She couldn't fully say, and neither could we. My best guess is that it felt like safe ground—we'd built our blended family around a summer house in coastal Maine, and Bowdoin carried the warmth of that. A whole life direction, rooted in a feeling as ephemeral as that.
She didn't get in. She got into the University of Oregon Clark Honors College instead. And here's the thing I couldn't have predicted: she didn't just attend Oregon. She became an Oregonian. She fell in love with someone who believes it doesn’t get any better than Oregon. She found geophysics—something that wouldn't have existed in her Bowdoin life. She became a scientist who happened to also love liberal arts, rather than a liberal arts student who happened to take some science. The rejection gave her a self to grow into. Research on college choice backs this up—students who don't get into their first-choice school report the same satisfaction and sense of belonging as those who do, because belonging isn't something you find. It's something you build through commitment.
AI makes the not-knowing harder in a specific way. When you don't know what you want, AI will happily fill the gap. It will suggest directions. It will optimize for engagement, or efficiency, or whatever its designers built it to optimize for. It will give you a path forward that feels productive. And if you don't have a clear sense of where you're trying to go, you'll follow it.
This is why "just use AI as a tool" isn't enough. Tools serve purposes you've already defined. AI can define purposes for you if you let it. The clinical coder knows what she wants—to reach the right code through her own professional judgment. The teacher who rebuilt her practice knows what she wants—students who can think for themselves in a world that includes AI. Their clarity about their own goals is what lets them integrate AI without drifting.
The people who struggle often aren't less disciplined. They just haven't answered the question yet.
So before we go further, sit with this for a moment.
What do you actually want from your work? Not what you're supposed to want. Not what would look good. What matters to you about it.
What do you want from the people in your life? What kind of presence do you want to be for them, and they for you?
What do you want to be able to do, or think, or feel, that you can't right now?
You don't need complete answers. Partial answers are fine. "I'm not sure yet" is fine. But the act of asking changes your relationship with AI. It changes you from passenger to driver, even if you don't know the final destination.
If these questions feel too big to start with, try smaller ones in conversation.
Next time you're with someone who uses AI regularly, ask them what they're using it for. Not in a skeptical way. Genuinely curious. See if they can articulate what they want from the collaboration—or if they've never thought about it that way.
Next time you finish a work session with AI, ask yourself: did I get what I actually wanted? Or did I get what AI gave me and call it good enough?
Next time someone you care about seems to be leaning on AI more than before, ask them what it's giving them. Not whether it's good or bad, just what need it's meeting. You might learn something about them, and they might learn something about themselves.
These aren't trick questions. They're the questions underneath all the noise about AI taking jobs and changing everything. The future stays open when you know what you're trying to protect. It closes when you let something else decide.
The people whose futures stayed open weren't smarter or more disciplined. They weren't anti-technology or more suspicious of AI. They had one thing the others didn't: they could see what was happening to them and find new ways to make sense of it.
That's what the next chapter is about. We've spent years studying how people adapt to AI—what changes in them, what patterns emerge, where people thrive and where they lose themselves. What we found is that you're already changing in three specific ways. And knowing what they are is the first step to staying the author.
When I think about the right to a future tense, I think about our kids. Their futures are wide open. They're smart, they're capable, they have good instincts about what to reject. But I wonder whether a closed door and a door you refuse to open feel the same from the inside.
I don't have an answer to that. I just notice I worry about it more than I expected to.