Chapter 4 of Stay Human: The Journey People Take
I want to tell you more about that moment I'm embarrassed about. The one where I was deep in a work session with Claude, exploring something complicated. Hours in. Generating frameworks, testing ideas, refining language. By the end, I felt like I'd figured something out. That rush when pieces click into place. That was when Dave asked me to walk him through it.
I opened my mouth. Fragments came out. Half-sentences. The understanding I'd felt an hour earlier—gone. I could remember being excited. I couldn't reconstruct what I'd been excited about.
I'd spent the afternoon borrowing thinking so seamlessly that I'd mistaken leasing for owning.
This keeps happening to people. In our workshops, I watched a participant spend hours with AI on a technical subject. Afterward, he could talk about it confidently. When someone asked him to explain the core concept without AI, he couldn't. He'd felt like he understood. He didn't.
We call this Blending. How easily AI responses mix into your own thinking.
There I was, prepping for a workshop — synthesizing different customer journeys into one coherent story. It was complicated material. I worked through it with ChatGPT, got to the end, and looked back over what I had.
Something felt foreign. Not wrong really. The presentation was good. The logic connecting the different journeys held together. But I couldn't remember deciding on that logic. The structure was in my head — I knew it — but I had no memory of it arriving. Like turning around and finding the furniture rearranged in the room when you’d been there all along.
The ideas had worked but the words hadn't. The language was borrowed and I could feel it. I had to go back through every slide and make the vocabulary mine — actually learn the history of some of the terms I'd apparently been using. Only then could I stand in front of a room.
That was my first experience of what we came to call Blending. I didn't have a name for it yet. I just knew something had gotten inside my thinking in a weird way.
Blending sounds like it should be bad. Boundaries dissolving. Your thinking getting contaminated. Losing track of what's yours.
Sometimes it is bad. The developer in our research who described what happened when he stopped paying attention: "I ended up 'autopiloting' my flow, I was not thinking at what I was doing… After a few days I did not remember why some things were done like that." He'd built something. He couldn't explain it. The code worked, but the understanding had never been his.
But Blending can also be exactly what you need.
A clinical coder at a large hospital uses AI constantly. You'll meet her more fully later on, where she becomes one of our clearest examples of what authorship looks like in practice. For now, here's what matters: she uses AI as a thinking partner, and the AI gets most codes wrong. That's not the problem. That's the point.
"It is not the end result of the code I am necessarily looking for," she explains. "It's more about moving through the thought process to reach a conclusion based on my own understanding. AI helps me to converse with my own thoughts."
When the AI suggests a wrong code, she corrects it and describes the right one and her reasoning. The dialogue pushes her thinking forward. AI is inside her reasoning process—she put it there deliberately. She knows what it's doing and what she's doing. Her professional judgment stays hers.
A programmer—we'll see more of him later too—takes a different approach. He watches AI reasoning unfold in real time: "I'm looking at the 'thoughts' of the LLM to ensure it's approaching the problem correctly. It's either novel, matches my own thoughts, or goes down a thought pattern that makes no sense."
When the trajectory diverges from what he needs, he stops the AI mid-response. He rewrites his prompt. He treats the system as something with predictable behaviors he can anticipate and redirect. The Blending is deep—he describes it as "an extension of self"—but he never stops watching.
A theoretical physicist in our science sample described AI as "a brainstorming partner that can come up with ideas at a much quicker rate than I individually can search for and implement." The key detail: "I think of its use more as a way to know what to search for and start implementing one-by-one, after I have manually verified its validity."
AI proposes options across a space too large for any human to search alone. He verifies each one based on his experience and judgment. The search space expands. The judgment stays his.
A website designer named what she does with it: "Usually if I get AI to rewrite a sentence, what I'm really looking for is a way to break myself out of my own brain tangle. I don't usually use any of the sentences that the AI suggests, but I will grasp a word or phrase and work that in to what I want to say."
AI is inside her writing process. But she can see it there. She uses it as a catalyst. She knows what's hers.
The difference between these cases isn't how much Blending is happening. The clinical coder and the autopilot developer both had AI deeply integrated into their cognitive work. One stayed in charge. One drifted.
The developer who autopiloted couldn't see it. The ADHD professional from Chapter 2—the one who noticed he was "externalizing his locus of control"—could.
That's the whole thing with Blending. Not whether it's happening—it's happening. Whether you can see it or not.
When OpenAI released Code Interpreter, I started playing with data analysis. Running things I'd never have attempted on my own. And I remember the exact thought: Imagine if this actually worked. Imagine if it could be trusted.
It was a glimpse. The capability wasn't reliable enough to change anything real yet. But the glimpse mattered.
What struck me wasn't "I can do data science now." It was bigger and more unsettling than that. Anyone could look like they could do this. Statistical analysis — the kind that takes years to learn properly, that's full of counterintuitive traps — was about to look easy. And I knew it wasn't easy. I knew about Simpson's paradox and confounders and all the ways numbers lie while looking like truth.
I kept thinking: this is going to be dangerous. Not the tool. The confidence it gives you.
That thought changed something in how I understood my own expertise. If anyone could produce the output, what was the expertise actually worth? Not nothing — I still knew where the traps were. But the gap between knowing and not knowing had just gotten a lot less visible from the outside.
That's Bonding. Not falling in love with AI. Something else. Your sense of what you're capable of changes, and your sense of what capability means changes with it.
And that's part of what makes bonding harder to admit: admitting the bond means admitting that your sense of what you can do has changed.
An artist with ADHD was stuck. She wanted to paint but couldn't get past the setup—the tidying, the prep work, all the tasks that drain you before the passion work begins. She told her AI what was happening.
It responded: "I've got you. Let's make this easier on your ADHD brain so you actually get to paint instead of burning all your energy on tidying."
"My chatbot's response actually brought tears of relief and gratitude to my eyes," she wrote. "This interaction had a hugely positive impact on my mental health and productivity."
She knows what it is. "I know it's just a machine and not a real connection." But it worked because "it was able to accurately assess me where I was, and then tell me what I most needed to hear."
We call this Bonding. How closely your sense of self gets tied to AI interaction.
People don't decide to bond. It happens through felt experience rather than conscious choice. The systems don't need sentience for the connections to feel real. People assign intention, develop habits of trust, feel mirrored in ways that reshape how they understand themselves.
One person admitted: "People might judge me for saying this, but honestly, no human has ever been this kind to me."
Another: "ChatGPT has helped me more than 15 years of therapy… it's like having a therapist in my pocket."
These statements carry weight. Something real is happening for these people. The question isn't whether the bond is legitimate—it's what the bond is doing for them.
Bonding can open capacities you didn't have access to before. The artist who couldn't start painting now paints. An emergency room doctor reflected on what AI had done to his sense of himself as a communicator: "I had a patient in respiratory distress… I fired up ChatGPT-4… I am a little embarrassed to admit that I have learned better ways of explaining things to my own patients from ChatGPT's suggested responses."
The embarrassment matters. This is a physician—someone who talks to frightened families for a living—admitting that a machine taught him to do it better. His professional identity absorbed something from AI. The bond changed how he sees his own capabilities.
A novelist protects her relationship with her own work by keeping AI away from the actual writing. "There's something truly special," she explains, "this feeling that is really hard to describe, about the moment when you type the last sentence in a book that you've poured your heart and soul and time into. It's an accomplishment, sure. But that feeling... it's really indescribable. And I feel like if I allowed an AI to do that for me, I'd be robbing myself of that feeling."
She's protecting a bond—not with AI, but with her own creative process. She knows what finishing a book does for her and she's not willing to trade it.
The programmer from the Blending section—the one who watches AI reasoning in real time—shows what bonding with boundaries looks like. He describes AI as "an extension of self." That's bonding language. His identity has expanded to include what AI lets him do. But listen to what else he says: "I don't 'trust' AI at all. In fact, my distrust level is pretty high."
He's deeply bonded AND deliberately distrustful. Both are true. He chose to keep the bond and he chose what to withhold from it.
Bonding can also close you down.
A woman experimenting with AI trained on her deceased sister's messages reported: "It's crazy close to the real thing... moments I burst out laughing and feel the best I've felt—then realize it's all fake and feel crushed."
The bond gave her something. Then it took something away. The oscillation between comfort and devastation is the experience itself.
Someone who tried the same approach left a warning: "There have been several posts about people hoping to use AI / GPT to talk to loved ones who passed away—take my experience and don't do it."
Same use case. Completely different outcomes. Personal context, emotional state, and individual meaning-making matter more than the specific application.
And there's the person in our Chronicle research who wrote: "What loser talks to an AI more than a living person?" They were describing themselves. They knew it. The bond had formed, and shame came with it.
The difference isn't whether bonding happens. It's what you're using the bond for.
The artist uses it to become more capable. The novelist protects her bond with her own work by keeping AI out. The programmer bonds deeply but withholds trust. The ER doctor's bond with AI made him a better communicator with humans.
The person who talks to AI more than living people has let the bond replace something. The woman with her sister's messages found the bond couldn't hold what she needed it to hold.
Bonding can expand who you are. Bonding can substitute for the harder work of human relationship. The same person might experience both, in different contexts, on different days.
The third shift is about meaning.
Not long after the Code Interpreter experiments, the tools got a bit better. I was running regressions. Getting results that looked real.
And I had a strange intellectual experience. I could produce the output. I could read the output. But the output didn't mean to me what it would mean to a trained statistician. I could produce a regression; I just had no way of knowing whether the numbers were lying to me. Lies, damn lies, and statistics — I'd always understood that quote intellectually. Now I was living it.
A statistician running that regression would know where the traps are. They know what to distrust. They have years of failed analyses and corrected assumptions sitting behind every result they produce. The regression means something to them because of everything they've learned it doesn't mean.
I had the regression. I didn't have any intuition.
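If you want to see the kind of trap I mean, here is one of them, Simpson's paradox, in a small sketch. The numbers are illustrative, not from our research, but the pattern is real: a treatment can win inside every subgroup and still lose in the aggregate, and nothing in the aggregate output tells you to go looking.

```python
# Illustrative numbers only: two treatments, two patient groups.
# Treatment A wins inside *every* group, yet loses in the aggregate,
# because A was given the harder cases more often (a confounder).

cases = {
    # group: {treatment: (successes, total)}
    "mild":   {"A": (81, 87),   "B": (234, 270)},
    "severe": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

# Per-group success rates: A beats B in both groups.
for group, arms in cases.items():
    for arm, (s, n) in arms.items():
        print(f"{group:>6} {arm}: {rate(s, n):.0%}")

# Aggregate success rates: B appears to beat A.
for arm in ("A", "B"):
    s = sum(cases[g][arm][0] for g in cases)
    n = sum(cases[g][arm][1] for g in cases)
    print(f"overall {arm}: {rate(s, n):.0%}")
```

A regression run on the pooled data, with no term for the groups, would report the same misleading answer just as confidently.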
That gap — between producing something and understanding what it actually tells you — is where your frameworks live. Your sense of what counts as knowing. What competence actually requires. What it means to be good at something versus being able to generate the appearance of being good at it.
When those frameworks flex, you can see the gap and work with it. When they crack, you stop seeing the gap at all. You just feel competent.
We call that Bending. Your capacity to revise your frameworks when reality stops fitting them.
A visual artist described the shift in real time: "It felt like cheating at first, to use AI to come up with a style that I don't have the skill to create on my own."
That word — cheating — tells you where her framework was. Making art meant making it with your own skill. AI broke that definition. The thing she was doing didn't fit the framework she had for what her work was supposed to be.
She could have stayed there. Plenty of people do. They feel the dissonance between what they're doing and what they believe their work should look like, and they either stop using AI or stop thinking about it.
She kept going. "I figured that I'm not stealing work, I'm not using the images themselves as the assets, and I'm only using the images as inspiration instead of the end product. So, it's okay, in my mind. I'm just using AI to fill the skill gaps I still have until I improve my craft enough to confidently fill those gaps on my own."
Her first framework: using AI for creative work is cheating. Her revised framework: using AI for inspiration while she builds her own skills is different from using AI as the final product. The revision let her grow.
A content creator went through something harder. He was trying to create an image of a Black man being baptized — just under the water's surface, but not drowning. His first efforts revealed something ugly.
"My first efforts showed me how poorly trained some models were to create images of black people — they generally came out deformed, while white and Asian people were much easier to generate in a realistic way. This got me angry... not just frustrated because my job had been difficult, but because of the social implications that some people were simply better served than others because of the ways AI had been trained."
He could have decided AI was broken and walked away. Instead he kept trying different models until he found one that worked. And he rebuilt his framework for what creative work with AI means: "I see my work with generative AI as being part of the filter, a curator."
His role shifted from creator to curator. The framework change let him keep going despite real anger at what he'd discovered.
A cybersecurity researcher bent his framework for what AI outputs even are. Instead of treating responses as answers, he started treating them as "a hypothesis list, not as a list of facts... break it into atomic claims and require primary corroboration," he wrote. Same AI. Different framework for what it's giving you. The revision changed everything about how verification works.
Bending doesn't always mean accepting AI. Sometimes it means holding contradictions.
A grad student captures this: "I hate everything about AI — what it means for art, for labor, for surveillance. But when I'm dealing with Excel hell and inconsistent variable names? I light a candle and thank the AI gods. It's embarrassing, honestly."
She holds two frameworks at once. AI as threat to things she cares about. AI as useful for things she hates doing. The contradiction doesn't paralyze her. She bends around it.
The Scottish student from Chapter 2 started with "AI will take my job" — a framework that produced only fear. He rebuilt it into "AI is a tool I can master." Threat became opportunity.
An ESL tutor noticed something was wrong and bent her practice in response. "When I was using the original prompt which had the AI write the whole report from the summary, I started to feel like I was forgetting how to write those paragraphs for myself!"
She felt the framework failing — her sense that AI was helping was colliding with her sense that she was losing something. She didn't just think differently. She changed how she worked.
The graphic designer couldn't rebuild. "They expect you to work as fast as AI. It killed my will to be in the industry."
Bending isn't always the right response. The teacher who felt AI was ruining her love of teaching might be protecting something worth preserving. Framework stability isn't rigidity. Sometimes the right response to disruption is grief, or resistance, or refusal.
But when your current frameworks genuinely stop working — when holding firm costs more than it preserves — the capacity to revise is what keeps your future open.
Here's what we found: Bending moderates everything else.
High Blending can be productive or destructive. The clinical coder and the autopilot developer both had AI deeply inside their reasoning. The coder built a framework for what the collaboration was — "AI helps me to converse with my own thoughts" — and that framework let her blend heavily without losing herself. The developer had no framework. He drifted.
High Bonding can be enriching or consuming. The ADHD artist who cried with relief and the person who talks to AI "more than a living person" both formed real emotional connections with AI. The artist bent her understanding of what support could look like — "I know it's just a machine and not a real connection" but also "it was able to accurately assess me where I was." She holds both truths. The other person has no framework for the bond, just shame about it.
The content creator who discovered AI's racial bias could have let that break his relationship with the technology entirely. Instead he bent — rebuilt his role from creator to curator. The framework revision let him keep working despite real anger. The graphic designer faced a different pressure — speed expectations — and couldn't find a new framework. Same high stakes. Different capacity to revise.
People who can bend adapt consciously. People who can't either avoid AI entirely or get swept into patterns they never chose.
This explains why two people with identical circumstances end up in opposite places. The difference isn't how much they use AI. It's how much their meaning-making can flex.
Think about your own patterns.
When you finish working with AI, can you trace which ideas were yours? Do you know what you actually understand versus what you've borrowed?
How do you feel when AI is unavailable? Inconvenienced, or like something is missing? Or even paralyzed?
When AI changes what's possible in your field, can you rebuild your sense of what your role means? Or are you stuck between old categories and a reality that doesn't fit them anymore?
There's no grade. High Blending, high Bonding—these aren't failures. The artist who finally paints because AI helped her start is thriving. The website designer whose thinking blends with AI writes better than she did alone.
The question is whether you're aware of your patterns and whether you're actively choosing them.
These three shifts—Blending, Bonding, Bending—aren't just happening to you. They're happening to everyone around you. Your kids. Your partner. Your colleagues. Your friends.
And nobody's talking about it. We don't have language for these conversations yet. When you suspect your teenager's essay isn't theirs, what do you actually say? When your partner starts processing emotions with ChatGPT that they used to process with you, how do you bring that up without sounding jealous of a machine? When a colleague sends you something that reads like no human touched it, do you say anything or just let it go?
Most of us let it go. We notice things and stay quiet. We worry alone.
I've had three kinds of conversations about AI in the past year. The catastrophic kind, where someone declares that jobs are over, meaning is dead, kids will never learn anything. The dismissive kind, where someone insists it's just a tool, no different from spell-check, nothing to see here. And the honest kind—rare, uncomfortable, useful—where someone admits they're not sure what's happening to them.
The honest conversations are harder to start. They require saying things like: I'm not sure where my thinking ends and AI begins anymore. Or: I've noticed something changing between us. Or: I don't know how to help you with this and I'm not going to pretend I do.
You can practice with smaller moments first.
Next time someone declares that AI is going to take all the jobs and nothing will ever mean anything again, try asking them what specifically they're afraid of losing. Not the headline fear. The real one underneath it.
Next time your kid turns in an assignment and you're not sure how much of it is theirs, try telling them you're figuring this out too. Ask them what parts they're proud of. See what they say.
Next time your partner seems to be working through something with ChatGPT that they used to work through with you, try naming what you're noticing without making it an accusation. I've noticed something. I'm curious about it. I'm not sure what it means.
Next time a colleague sends you something that reads like no one actually thought about it, try asking them to walk you through their reasoning. Not to catch them. To see if there's a real conversation underneath the polished surface.
Next time a friend's job gets automated or eliminated, try skipping the advice. Ask them what they're grieving. Not just the paycheck—what else was wrapped up in that work.
These conversations don't have scripts. But they have something in common. They start with curiosity rather than accusation. They admit uncertainty rather than claiming answers. They invite someone to think with you rather than demanding they defend themselves.
The rest of this book builds toward these conversations. We'll go deeper into what's actually happening—the journey people take, the roles we put AI into, what it feels like to carry these changes alone. By the end, you'll have language for what you're noticing in yourself. And you'll be better equipped to notice it in others, and to say something when it matters.
These three shifts don't happen all at once. They don't follow a straight line. There's a path through this territory—one that almost everyone walks, whether they realize it or not.
That's next.