Stay Human, Chapter 4: The Journey People Take
After the cube failure, I wanted to understand what had happened to me.
Not just the embarrassment of it. All of it. I'd been somewhere where I’d felt productive, confident, and generative. And then I was somewhere else entirely, unable to explain my own thinking. The transition happened without my noticing.
When we started mapping how people adapt to AI, I realized I wasn't alone. There's a path through this territory. It's not straight. People double back, skip stages, get stuck. But there are five places people find themselves.
Knowing them helps you see where you are.
The Wake-Up is when you realize AI can do something you didn't expect. My Wake-Up was embarrassingly simple. GPT-3, before ChatGPT even existed. I asked it to write a haiku.
That was it. A haiku. Looking back it's so 2022 I almost don't want to admit it. But language had always been the thing I was watching for. This was the unlock everyone in AI had been talking about for years. And when I read that haiku I thought: it's here.
I was the same as everyone else. Showing people. "Look at this." Not sure what it meant yet, just knowing something was different.
For a blind user, it happened in a restaurant bathroom: "They had no Braille on the restroom sign. I took a picture with my phone, had ChatGPT analyze it and it told me 'women's,' so I knew to go to the men's on the left."
A university lecturer captured the strangeness of first contact: "More like an extremely clever entity spitballing but not always remembering what it is discussing."
The Wake-Up can be wonder, fear, or both at once. Before, AI was an abstraction. After, it's real and it changes something about what's possible.
Some people circle The Wake-Up for months—returning to that initial amazement or dread without moving forward. Others pass through so quickly they barely notice it.
The Groove is where AI becomes routine. Not new, not threatening. Just there. For me, the Groove started when I began paying for ChatGPT with GPT-4. Suddenly I had a writing partner.
Not a dramatic one. A practical one. Turn this narrative into bullet points. Summarize this white paper. Explain electron shells as if I'm a Formula One fan — that one was my son's. Explain quantum mechanics for a mountain biker. Go up and down the comprehension levels: explain this to me as if I am a high school senior. Explain it as if I have a PhD in statistical mechanics. Explain this to me as if I am a seven-year-old. Change perspectives: critique this product launch pretending you’re Steve Jobs. What about as Lori Greiner? Or Barbara Corcoran? Iterate and collaborate: back and forth, back and forth.
It was just useful and part of how I worked. I wasn't amazed anymore. I wasn't worried. I had my ways of using AI and they were good.
That's the thing about the Groove. It feels like you've figured it out.
A system administrator described how it spread through his team: "Any time now in scrum or other meetings, if there's any question about something, we often just consult ChatGPT during our screen-share."
The Groove feels good. Efficient. Sustainable. You've figured out where AI helps and you use it there. You're not anxious about it. You're not amazed by it anymore either. It's infrastructure.
Most people spend most of their time here. It's the stable state—where you return after disruption.
But The Groove has a risk. Because it feels settled, you stop watching. You stop noticing how much you're blending, how much you're bonding. The Groove can drift into something else without your awareness.
The Merge is where boundaries get unclear. You’re working deeply with AI rather than just using it. And you may not be able to tell where you end and it begins. The Merge doesn't announce itself. You just realize one day that you're already in it.
I think the Merge crept in through my plant. Something was wrong with it. I uploaded a photo to ChatGPT and started asking questions — what's this discoloration, could it be overwatering, what about the soil? We chatted for a while and at some point I realized: I can't do without this anymore.
Not because diagnosing a sick plant is high stakes. Because of what it revealed about how I was now relating to information. Search had changed. AI was sitting between me and the internet in a fundamentally different way. I wasn't going to Google, scanning results, clicking links, piecing things together. Instead I was having a conversation. The boundaries between my question, the AI's response, and the information out there had gotten blurry in a way I hadn't noticed until that moment with the plant.
A programmer described what changes: "It's a type of flow state. When I do it on my own, I call it being 'in the zone'. When I am aided by an LLM, it's similar but amplified. Instead of looking at the programming source and logic, I'm looking at the 'thoughts' of the LLM to ensure it's approaching the problem correctly."
He went further: "Using AI for tasks feels like me being me to begin with. It feels like an extension of self, and in that manner, interactions feel natural, so what the AI is doing when I'm using it is also something I'm doing, therefore I am handling it myself... by using an AI model."
The boundary has dissolved. He can't distinguish "doing it himself" from "using AI" because they've become the same thing.
The Merge can feel expansive. Ideas flow more easily. Problems get solved faster. You feel more capable than you've ever been.
The Merge can also feel like delusion. Like the workshop participant who became convinced he understood a technical subject—until someone asked him to explain it without AI assistance.
Some people navigate The Merge consciously. They know what they’re doing and they watch the process. Others drift in and only recognize where they've been after something breaks.
The Breaking is when something gives way. Trust, confidence, identity, purpose. The frameworks you were using stop working and you don't have new ones yet.
Dave and I were showing a group of faculty how to use ChatGPT. This was very early, only a few months after ChatGPT came out. We had it build a personal deadline plan for a student — the kind of student who has no idea how to break down a big assignment. The prompt was something like: this assignment is due in two weeks, break it down into small daily tasks that get me to the deadline.
ChatGPT produced a beautiful plan. Clear steps, reasonable pacing, encouraging tone. We put it on a slide.
A math professor in the audience raised his hand. "That adds up to twenty-one days."
Not us. Him. We'd checked the logic, the flow, the reasoning. But we'd made the mistake of trusting the AI with the simple arithmetic. And it wasn't really trust; it was just plain not noticing, not checking. It happens; it's human. But a room full of faculty who were there to learn about AI watched us learn something about AI in real time. There's nothing like embarrassment to sharpen your sense of accountability.
A tenured professor experienced slow erosion: "ChatGPT is ruining my love of teaching. With every single assignment that comes in, I'm now questioning if a student used ChatGPT… I am in despair."
Their Breaking didn't come from their own AI use. It came from AI's presence in the environment—uncertainty introduced into something they had loved.
A writer experienced a different kind: "Sadly, what's changed the most is my motivation to work on original things. It's a lot harder to write something real from scratch, and there's a lot of AI competition out there."
Her identity wasn't disrupted by integration. It was disrupted by AI's existence in the market. She's competing against something that doesn't get tired, doesn't need money, doesn't struggle with motivation.
The Breaking can also come from external constraints. The blind user who found independence through AI vision experienced a partial closing when policy decisions limited other applications. His future opened through capability, then narrowed through restriction.
You can't stay in The Breaking forever. The pressure to resolve is too great. People either retreat—pulling back from AI, rebuilding walls—or they move toward something new.
The Rebuild started with a simple realization: I now had to proof work that wasn't entirely mine, but my brain still treated it as mine. Confirmation bias has always made us bad at proofing our own work. This was a new level. The words looked like they could be mine. The logic felt like mine. But some of it wasn't, and I couldn't see the mistakes the same way I'd see them in something I'd written from scratch.
So I started paying attention to where AI was actually helping me and where it was giving me false confidence. Here's what I found: AI is amazing at things you know nothing about. It's less impressive in areas where you're the expert. That sounds obvious. It isn't. Because the danger isn't in your area of expertise, where you'll probably catch the mistakes. The danger is in all the other areas — the ones where you don't know enough to realize AI just got it wrong.
The Rebuild is where you construct new frameworks.
The teacher who decided she'd "rather teach them to use the tool ethically than play whack-a-mole trying to catch it" shows what Rebuild looks like. She didn't reject AI or pretend it didn't change anything. She built a new understanding of what teaching means when AI exists.
A fantasy cartographer went through a complete arc. He started feeling guilty about using AI for reference images—it felt like cheating on his craft. Then he reframed: AI was just "a design tool that can help me get the perspective right, get the proportions right... and then I do my own thing."
The guilt dissolved because he built a framework that separated what AI contributes from what he contributes. The distinction let him use it without identity threat.
The grad student from Chapter 3—the one who hates everything about AI but lights a candle to "the AI gods" when it helps with Excel hell—she holds contradictions without needing to resolve them. AI threatens values she cares about AND helps with tasks she hates. Both are true.
The Rebuild often involves holding contradictions. The technology can be a threat and tool simultaneously. People who reach sustainable Rebuild don't eliminate tensions—they build frameworks capacious enough to contain them.
I've been through all five. Multiple times. I still am. With every new model. With every new reach into the possible.
The Chronicle research was a conscious Merge—I knew I was blending, I had accountability structures, I stayed the author. Then I drifted into an unconscious Merge with the cube work. That led to a Breaking when Dave couldn't follow my thinking. The Rebuild came fast because I had frameworks to catch myself with.
But I cycle back. Regularly. The Groove drifts into Merge when I'm not watching. Small Breakings happen when something I trusted doesn't work. Each time, I have to Rebuild a little.
The Wake-Up can lead directly to The Merge when fascination overwhelms reflection. A novelist in our research described the speed: "I was not an avid user of AI until three weeks ago when I first tried ChatGPT and realized its power to change my life as a writer." Three weeks from non-user to life transformation. That's Wake-Up to Merge with almost no Groove in between. The amazement pulled her in before she could watch where she was going.
The Groove can shift suddenly to The Breaking when routine use creates unexpected challenges. The tenured professor had AI integrated into her teaching workflow. It was working. Then she started questioning every assignment. Remember her despair. She thought she had it figured out. The Groove had hidden a problem that only surfaced over time.
The Merge can feel like The Groove if you're not paying attention. The clinical coder and the autopilot developer both had AI deeply integrated into their work. From the outside, both might look like they're in a comfortable Groove. The difference is internal. The coder knows she's in The Merge and watches the process deliberately. The developer didn't realize he'd merged until he couldn't explain his own code. Same apparent state. Different awareness.
Context matters. You might be in The Groove at work—using AI for clearly bounded tasks, maintaining easy separation—while experiencing The Merge in creative projects where the boundaries have dissolved without your noticing. You might have reached Rebuild about your professional identity, finding a new framework that works, while still stuck in The Breaking about what AI means for your field or your kids' futures.
The ESL tutor from Chapter 3 shows how states can shift within a single domain. She was in The Groove—AI writing her report summaries, everything working fine. Then she noticed something: "I started to feel like I was forgetting how to write those paragraphs for myself!" That feeling was the edge of Breaking. She caught it early enough to Rebuild her practice before the crisis deepened.
I kept asking myself: is this actually different? Or are we just going through what humans always go through when something new shows up?
Because we've seen these stages before. They're the same ones people move through when they relocate to another country. Or spend a year abroad. Or even just figure out the subway system in an unfamiliar city.
You know how that goes. At first you're hyper-aware of everything—which line, which platform, which exit, how to buy a ticket. You feel foreign. Then it starts to click. You find your route, your rhythm. It becomes automatic. You stop noticing you're doing it. Then one day there's a strike, or construction, or you end up somewhere you've never seen, and you realize how much you'd been relying on a system you never fully understood. You feel foreign again. And then you rebuild—with more awareness this time, more humility about what you know and don't know.
Maybe that's all this is. The familiar human process of absorbing something new. We've done it with electricity, telephones, cars, computers. Every generation metabolizes technologies that seemed impossible to the one before.
But when people in our research describe AI, they don't talk about it like they talk about their car or their phone. They say things like: "It feels like working with someone." "I'm not sure which ideas were mine." "It's like having a thinking partner who's always available."
Previous technologies extended what we could do. AI participates in what we think.
That might be why the familiar stages feel so much more personal this time. You can learn a new transit system without questioning who you are. AI gets into the process of thinking itself. The foreign territory isn't out there. It's in here.
I don't know if that makes AI truly different or just feels different because we're living through it. We're too close to see clearly. But the stages are real. People move through them. And knowing where you are helps—not because it gives you control, but because it gives you a chance to pay attention.
Think about your relationship with AI right now. Not what you think you should feel about it. Where you actually are.
Signs you're in the Wake-Up:
You're still surprised by what it can do. You find yourself showing other people—"look at this." You're experimenting, testing boundaries, not quite sure what to make of it. The possibilities feel exciting or unsettling or both. You haven't settled into a pattern yet because you're still figuring out what this thing is.
Some people stay here for months. They keep having the same "wow" moment without moving into regular use. Others pass through so quickly they barely register it—they go straight from first encounter to daily habit.
Signs you're in the Groove:
AI has become part of how you work. You don't think about it much. You have your uses—drafting, research, brainstorming, whatever—and they feel settled. You're not amazed anymore. You're not anxious either. It's just there, like email or your calendar.
The Groove feels stable. That's what makes it tricky. You stop watching because there's nothing to watch. The question worth asking: when did you last check whether your groove has drifted somewhere you didn't intend?
Signs you're in the Merge:
Ideas flow easily. You're more productive than you've ever been. The back-and-forth with AI feels natural, almost like thinking itself. You might not be able to trace which ideas were yours and which emerged from the conversation. That might not bother you. Or it might bother you a lot.
The Merge can feel expansive—like you've unlocked something. It can also feel like losing your footing. The clinical coder from earlier in this chapter is in the Merge and knows it. The developer who autopiloted for days was in the Merge and didn't know it until later.
One question that helps: Can you explain what you're working on to someone who wasn't in the AI conversation? Not just the output—the reasoning. If you can walk through the logic, the understanding is yours. If you can only point to what you produced, you might be borrowing more than you realize.
Signs you're in the Breaking:
Something isn't working anymore. Maybe it's trust—you believed AI could do something and it failed you badly. Maybe it's confidence—AI capabilities are making you question what you're good for. Maybe it's meaning—the thing you built your identity around doesn't feel solid anymore.
The Breaking shows up as exhaustion, confusion, anger, grief. The tenured professor who stopped loving her job. The 3D artist who lost the reason he became an artist. The graphic designer who said AI "killed my will to be in the industry."
As I mentioned, you can't stay in the Breaking forever. The key question is how you move toward something new.
Signs you're in the Rebuild:
You're constructing something. New boundaries, new frameworks, new ways of understanding what your work means when AI is part of the picture. You're not just using AI or avoiding it. You're thinking about the relationship and making deliberate choices.
The teacher who decided she'd rather teach students to use AI ethically than play whack-a-mole catching cheaters. The fantasy cartographer who worked through his guilt and landed on "AI helps with perspective and proportion, then I do my own thing." The grad student who hates everything about AI and still thanks the AI gods when it helps with Excel hell.
The Rebuild often involves holding contradictions. The technology can be threat and tool at the same time. People who reach sustainable Rebuild don't eliminate the tensions—they build frameworks big enough to contain them.
You're probably in more than one place.
Most people are. You might be in a comfortable Groove at work—AI handles specific tasks, everything runs smoothly—while experiencing the Merge in a creative project where the boundaries have gotten blurry. You might have Rebuilt your sense of professional identity while still Breaking about what AI means for your field or your kids' futures.
The states aren't a checklist where you complete one and move to the next. They're territory you move through, sometimes circling back, sometimes occupying several at once depending on context.
And if you haven't really gotten into AI yet—
That's its own position worth noticing.
Maybe you've been deliberately holding back, watching from a distance while others dive in. Maybe the noise around AI has been so loud and contradictory that you've tuned it out. Maybe you tried it once, felt underwhelmed or unsettled, and haven't gone back. Maybe it just hasn't shown up in your work yet in a way that demanded attention.
You're not behind. The Wake-Up might still be ahead of you. Or you might be in a place of resistance—a deliberate choice to stay out that's worth understanding. Some people in our research made that choice consciously. They could articulate what they were protecting and why. Others were avoiding something they sensed but couldn't name.
Either way, you're not exempt from the questions this book is asking. AI is reshaping the environments around you—your workplace, your kids' education, your relationships with people who are using it. You're adapting to AI whether you use it or not.
Here's something I've learned: it's easier to see these patterns in other people than in yourself. That's useful. Not as judgment—as practice.
The person who can't stop showing you things.
They're in the Wake-Up. Still amazed, still figuring it out. They want to share the surprise. Often it's a teenager with a parent, or that one colleague who discovered ChatGPT six months after everyone else and acts like they invented it.
What helps: genuine curiosity. "What are you using it for? What's surprised you?" Let them stay in the exploration without rushing them toward conclusions.
The person whose output changed.
Their emails got smoother. Their reports got longer and more polished. Their presentations have that slightly frictionless quality. You're not sure they're using AI, but something shifted.
They might be in the Groove—AI integrated, working well. Or they might be drifting toward Merge without realizing it.
What helps: asking about process, not product. "How did you approach this?" or "Walk me through your thinking." Not to catch them—to see if they can.
The person who's struggling but won't say why.
Their confidence has dropped. They're defensive about their work. They're avoiding certain topics or conversations. Something seems off, but they're not talking about it.
They might be in the Breaking. AI might have disrupted something they haven't figured out how to rebuild.
What helps: making space without pressure. "I've been thinking about how AI is changing things. I'm still figuring out what I think. Are you?" An invitation, not an interrogation.
The person who's figured something out.
They use AI heavily and seem fine with it. They can explain their reasoning. They know where they rely on AI and where they don't. They've thought about the relationship and made deliberate choices.
They might be in the Rebuild—or at least a stable, aware version of the Merge.
What helps: asking them to teach you. "How do you think about it? What have you learned?" People who've done the work usually have something worth hearing.
The person who seems too confident.
They have answers for everything. AI is either obviously great or obviously terrible. They're certain they won't be affected, or certain everyone will be replaced. No ambivalence, no uncertainty, no questions they're still sitting with.
Be curious about what the certainty is protecting. Sometimes confidence is genuine. Sometimes it's armor against questions that feel too hard to hold.
Knowing where someone is changes how you approach them.
When my colleague was in the Wake-Up—excited, showing me everything, convinced this was going to change her whole field—I made the mistake of jumping to caution. I started talking about drift and dependency and all the things I'd been researching. Her face closed. She heard me saying her excitement was naive. That's not what I meant, but that's what she received.
What I've learned since: people in the Wake-Up need room to explore. They're not ready for warnings about where the road leads. They're still figuring out what this thing is. The useful move is curiosity that matches their energy. "What are you finding? What's surprised you?" Let them discover the edges themselves. They will. And when they do, they'll remember you as someone who was interested, not someone who tried to shut them down.
People in the Groove are harder to reach. They've settled into something that works. AI has become background—efficient, unremarkable. The problem is you can't raise questions without sounding like you're questioning their competence. "Are you sure you're not drifting?" lands as "I think you're doing this wrong."
What I've found works better: asking about their experience, not their practice. "Has anything surprised you lately about how AI is affecting your work?" or "Do you ever wonder if the groove has shifted without you noticing?" You're inviting reflection, not delivering diagnosis. Some people will shrug and say everything's fine. Some will pause. The pause is where the real conversation starts.
The Merge is tender territory. Someone in the Merge might feel more capable than they've ever felt. Ideas flowing, productivity high, something unlocked. If you come in with concern, you're raining on a parade they didn't invite you to.
But some people in the Merge are starting to feel uneasy. They sense the boundaries blurring. They just don't have language for it yet. With them, naming what you're seeing can be a gift. "I've been thinking about how hard it is to know which ideas are mine anymore when I work with AI. Do you ever feel that?" You're offering your own experience, not diagnosing theirs. If it resonates, they'll tell you. If it doesn't, you haven't accused them of anything.
Someone in the Breaking needs something different entirely. They're not looking for input on their AI relationship. They're dealing with loss—of confidence, of purpose, of an identity that used to make sense. The instinct is to help, to suggest solutions, to point toward the Rebuild you can see from outside.
Don't.
What I've learned—mostly by getting it wrong—is that people in the Breaking need presence more than advice. "That sounds really hard" goes further than "have you tried thinking about it differently?" They need to know someone sees what they're going through without rushing them toward resolution. The Rebuild will come. It's not your job to push them there.
People who've reached the Rebuild are the ones to learn from. They've done the work. They've figured out how to hold contradictions, how to set boundaries that make sense for them, how to stay themselves while using AI in ways that actually help. Ask them to teach you. "How do you think about it? What did you have to figure out?" Most people who've rebuilt are generous with what they've learned. They remember how disorienting it was before things clicked.
And then there's the person who hasn't really started. They're watching from outside, maybe skeptical, maybe just busy with other things. The temptation is to evangelize or warn—to pull them into the conversation you're already having.
Resist that too. They'll engage when they're ready, or when their circumstances force the question. What you can offer is honesty about your own experience. "I've been thinking a lot about how AI is changing my work. It's strange territory." You're not telling them what to think. You're letting them know the door is open if they want to walk through it.
The point isn't to grade yourself or anyone else. The point is to see what's happening—in you, in the people around you—clearly enough to have honest conversations about it.
One thing worth saying before we move on, even at the risk of adding another metaphor, because I think it's important to understand. The states describe what the water feels like — calm, rough, disorienting, clear. But the water changes partly because of choices you make about how AI fits into your work. When you start letting AI into your actual reasoning process, or when your sense of professional identity shifts around it, or when the meaning of your work starts moving — those structural changes can push you into different seas. You might be cruising in the Groove and then realize your professional identity reorganized while you weren't paying attention, and suddenly you're in the Breaking.
The states are real. They're what you experience. But they don't just happen to you like weather. Some of them happen because something changed in how you and AI are actually working together — and that's what we need to look at next when we talk about roles. Because while you're moving through these states, you're also casting AI in a part—giving it a job to do in your life. And the role you give it shapes everything else.