Chapter 1: Our Story
My stepfather was a physician, and he had no patience for mysticism about expertise. When Malcolm Gladwell's book on thin-slicing came out—the idea that experts can make accurate judgments in seconds, on very little information, as if reading the situation through some kind of rapid unconscious processing—everyone started talking about intuition as if it were a gift. My stepfather was dismissive. "It's experience," he said. "That's all it is. You see enough patients, you recognize the patterns. There's nothing magical about it."
He taught me to recognize this in medicine—how you escalate a complex case up the chain until you reach the person with the most experience, and they know what's going on. Not because they're magic. Because they've seen it before. Medicine is apprenticeship, he said. The handoff of expertise from one generation to the next is how the whole thing works.
This is how I got interested in expertise and intuition and decision science. How people develop judgment. How experience becomes wisdom. What happens in the gap between knowing the textbook answer and knowing what to do.
So when I started studying how people develop expertise while using AI, I thought I was the right person for the job.
My husband and business partner, Dave Edwards, and I had been working on the human experience of AI for a decade before ChatGPT. Mostly predictive systems—recommendation engines, bias in algorithmic decisions, how people actually make choices when data is unclear. We knew the research on decision science: feelings come first, intuition fills the gaps, humans are not the rational actors we pretend to be.
When generative AI arrived, we pivoted our workshops to include AI. And we started noticing something strange.
People had predictable reactions. Not just similar—predictable. At exactly the same moment in the workshop, we could anticipate what would happen. Surprise here. Amazement there. Then anxiety. Confusion. Sometimes anger. Often delight. Across hundreds of hours and more than a thousand people, the pattern held.
This felt worth studying. The big question underneath it all: Is this actually different? How is working with a large language model different from everything we know from cognitive science, human-computer interaction, neuroscience, all the social sciences? Maybe it isn't different. Maybe these are familiar patterns under new conditions.
But it feels different to people. In their own words, something new is happening. The introduction of a truly cognitive technology—one that talks back, that seems to think, that produces ideas—was changing how people experienced their own minds.
A programmer captured it precisely: "Using AI for tasks feels like me being me to begin with. It feels like an extension of self, and in that manner, interactions feel natural, so what the AI is doing when I'm using it is also something I'm doing, therefore I am handling it myself... by using an AI model."
So we collected stories. From our workshops, from the internet, from anyone willing to tell us what was happening to them. We analyzed them to see what patterns emerged. That work became The Chronicle—our attempt to map the psychological territory people were traversing as AI became part of how they think.
The Chronicle research was intensely collaborative with AI. I was deep in conversation with Claude, generating analyses, testing frameworks, exploring patterns in transcripts while Claude did the same. The work was integrated in ways I'd never experienced before. I couldn't always tell which insights originated from me and which emerged from the conversation.
But something kept me grounded.
Over a couple of months, I had to surface from that work and make sense to humans who hadn't been in the AI conversation. Don Norman pushed back on my framing of human-computer interaction. Steve Sloman challenged my reasoning about individual and group psychology. Barbara Tversky questioned pretty much everything. There were many more reviewers, and I am eternally grateful for their help and critique. None of them were checking Claude. They were all checking me.
And because I knew I'd have to explain myself to them, I worked differently. I stayed in charge of the questions. I decided what mattered. I used AI to explore veins of theory I'd chosen, not to wander wherever it pointed.
I think—and this is a soft claim—that I achieved something rare during that work: what we came to call the Co-Author role, where a human places AI in the role of Co-Author and deeply blended thinking, real identity bonding in the collaboration, and flexible, bendy meaning-making all happen at once. In this case, human authorship remained intact. I was able to explain what I was doing, what it meant, and why it changed my sense of my professional status.
It worked because humans were holding me accountable. Not because I was disciplined or vigilant. Because other people were in the loop.
Then I forgot.
This was a few months later and I was trying to figure out how three psychological shifts—how easily AI blends into your thinking, how closely your identity gets tied to it, and how flexible you are at revising your mental frameworks—connect to skill development. Dave and I needed to present this at an upcoming workshop. It was complicated and I couldn't see the shape of it.
So I did what had been working. I set up projects in Claude and ChatGPT and started generating. Frameworks. Decision trees. Tables comparing automation and augmentation. I had agents analyzing data. I was producing new ideas faster than I ever had before. It felt amazing—like having another brain doing the heavy lifting while I directed.
But this time I was alone with the machine. No peer reviewers scheduled. No accountability checkpoints. Just me and the AI, going deeper and deeper. Then Dave asked me to walk him through what we were going to present. We had two hours.
I started explaining. He looked confused. He asked questions. I couldn't answer them. Not because they were hard questions—because I didn't actually know what I was talking about. The frameworks I'd generated made sense when I was inside them with the AI, but I couldn't reconstruct the logic on my own. I couldn't explain why any of it mattered.
He got frustrated. "What does this mean? Why can't you explain it? Why are you doing all of this with AI when you have no idea what's actually real?"
Then he took the three shifts I'd been trying to connect—Blending, Bonding, Bending—and put them on three axes. A cube. Eight corners. Eight possible roles people could occupy when working with AI.
This took him ten seconds. Maybe less.
In seconds, a human made sense of something I had spent hours and thousands of tokens turning into an incomprehensible mess.
I have a pattern. I go from A to D and skip B and C. The connection is obvious to me, invisible to everyone else. So much so that one senior executive once said I had an uncanny ability to link the unlinkable, and I am not sure he meant it as a compliment.
When people ask me to backtrack and explain, I can get defensive. One co-worker nicknamed me “three strikes Helen”: ask the same thing more than twice and be prepared to be sanctioned.
This is a bad reaction and I've worked on it forever. The best collaborators I've had are patient with my initial bristling, because once I do explain B and C, we're fine. Dave is the best—patient, logical, able to avoid a strikeout—and this is part of why we work so well together professionally.
But most people don't wait. They decide I'm too abstract, too intellectual, not a team player. And getting rejected from the team is painful to me. More painful than I usually admit.
In that moment with Dave, I had set up my own rejection. AI had amplified a known weakness. I had spent hours going from A to D to Q to somewhere I couldn't even name, and I had no B and C to offer. Worse, the deadline meant there was no time to be patient with me. His frustration was "here we go again"—but it was sharpened by the fact that I had chosen Claude over him. I had valued the machine's output more than my partner's thinking.
We fixed it fast. His cube was so good it became one of three key frameworks in our research. I presented it at our 2025 Summit. The crisis lasted maybe thirty minutes.
But it stayed with me. Not because I failed—I fail all the time. Because I had something working and I abandoned it. The Chronicle collaboration had all the pieces: deep AI integration, human accountability, authorship maintained. But when I needed to solve a hard problem quickly, I threw away the accountability part. I kept the AI and dropped the humans.
And I felt so capable while I was doing it. The AI made me feel smart, productive, like I was operating at a higher level than usual. But none of it was real.
Here's what I want you to understand: the problem wasn't that I used AI. The problem was that I stopped setting my own direction.
I wasn't deciding what questions mattered. I wasn't choosing which frameworks to pursue. I wasn't authoring my own goals. I was following wherever the AI led, and it led me in circles that felt like progress.
That's the core of everything in this book. You can work with AI constantly—deeply integrated into your thinking, closely tied to your sense of what you're capable of, flexible about how you make sense of the world. All of that can be good. Extraordinary, even.
But only if you're still the one deciding where you're going.
I call this human authorship. When we are being more technical we call it cognitive sovereignty. It means remaining the author of your own mind—not by keeping AI at a distance, but by staying in charge of your goals even as you collaborate deeply.
The Chronicle work proved it was possible. The Cube failure proved it was fragile—that even someone who should know better can slip when the accountability structures disappear.
And studying this is the core of our research. We want to know how people are experiencing AI so that we can help more people stay human even as AI gets more powerful, less visible, and more ubiquitous. This book makes our research accessible to more people, and because it’s based on stories, we think it’s highly relatable.
As always seems to be the case with AI, we found that there is no single answer. The story of AI is a story of duality.
We found that the people who thrived with AI weren't the ones who used it least or limited their use to certain areas. Often they used it most—deeply, constantly, woven into their daily thinking. They thrived because they stayed the authors of their own goals.
We found teachers who rebuilt what education means to them, and teachers who broke under the same pressure. We found scientists extending their thinking beyond what any individual mind could reach, and scientists hitting walls they couldn't explain. We found writers protecting what makes their work theirs, and writers who lost the thread entirely. We found people using AI to grieve—training chatbots on their dead loved ones' messages—with results that ranged from profound comfort to devastating harm.
We found people whose futures opened through AI and people whose futures closed. Often the difference came down to one thing: whether they could see what was happening to them clearly enough to stay in charge of where they were going.
We found a framework that helps. Three shifts to notice in yourself. Five states people move through. Eight roles you might be putting AI into. And three things that keep you the author: noticing what's happening, choosing deliberately, and showing up for the people who can see what you can't.
This book exists because I watched myself lose my own thinking, then found my way back, then worried about losing it again. It exists because I've now watched more than two thousand other people navigate the same thing—some beautifully, some painfully, most somewhere in between.
I'm not going to tell you to use AI less. I use it constantly. The clinical coder you'll meet in this book uses it constantly. The scientists, writers, and professionals throughout these pages use it constantly. The concern shouldn't be whether you use AI. It should be whether you're authoring the collaboration or drifting into it.
Here's what we're going to do together:
First, we'll look at what's at stake. Why human authorship matters. What you're protecting when you stay in charge of your own goals—and what becomes possible when you do. This is about what you can reach.
Then, we'll name the three shifts happening in you. We call them Blending, Bonding, and Bending. How AI mixes into your thinking. How your identity gets tied to what AI lets you do. How your frameworks for making sense of the world flex and change. You're experiencing all three right now, whether you've noticed or not.
We'll map the journey people take. Five states: Wake-Up, Groove, Merge, Breaking, Rebuild. You're somewhere on this map. Knowing where helps you navigate consciously instead of stumbling through.
We'll show you the eight roles you're putting AI into—a framework that came from Dave in ten seconds when I couldn't explain my own thinking. That framework revealed something I hadn't expected. Most conversations about AI focus on efficiency. How do I get more done? How do I work faster? These are real questions. But they miss the bigger story. The promise of AI isn't that it handles the productive work so you can go knit or find hobbies or discover "new ways to be meaningful to each other." That narrative gets technology wrong. Technology extends us. It always has. Writing extended memory. Telescopes extended sight. AI extends cognition—thinking itself. The promise isn't doing the same work faster. It's reaching places you couldn't reach alone. Becoming more capable, more creative, more fully yourself. That's what's actually at stake in how you position AI. And most people are leaving it on the table.
We'll talk about what this actually feels like. The pride and the shame. The relief and the grief. The loneliness of carrying emotions about AI that you haven't told anyone. You're not alone in any of it.
We'll help you have the conversations you've been avoiding. How to talk to your teenager when you suspect their essay isn't theirs—and you're not sure you want to know. How to tell your partner that their late-night ChatGPT sessions feel like they're replacing you. What to say to the colleague who keeps sending you AI-generated drafts that read like no one actually thought about them. How to bring your AI research to a doctor's appointment without sounding like a WebMD hypochondriac. How to support a friend whose job just got automated without pretending everything's fine.
We'll build a framework for staying the author. Three things matter: noticing what's happening, choosing deliberately, and showing up for others. This is simpler than it sounds and harder than you'd think.
We'll explore what happens when your AI relationship diverges from the people around you—and why making your meaning-making visible to others might be as important as the three shifts happening inside you.
We'll look at the prize. What becomes possible when you collaborate deeply with AI while remaining the author of your own mind. Doctors giving better care. Scientists reaching further. Writers finding new capacities. Teams building things no individual could build alone. This is extraordinary—and it requires that you stay in charge.
And I'll give you practices. Exercises. Conversations to have. Prompts you can use with AI itself to help you stay the author. Think of this as a toolkit.
Each chapter ends with something you can do. The practices build on each other. By the end, you'll have a way of working with AI that's yours—consciously chosen, grounded in what you've learned about yourself.
Finally, this book doesn't start with the questions that dominate most AI conversations—will it take our jobs, will it destroy democracy, will it end humanity. Those questions matter. I think about them constantly. And I'll be honest: there's so much noise around AI that even I struggle to hold a steady position. One week the evidence points one way, the next week something changes. That's not a failure of nerve. It's what radical uncertainty actually feels like.
But I don't think that lets me off the hook. In the final chapter, I'll share where I land on the big questions. I might be wrong. I'm prepared to be wrong. I most likely will be wrong—in practice even if I’m right in theory. I'm saving that for last because those are questions about what might happen. This book is about what's already happening—to you, to the people you love, to the way you think.
Before we go further, these are some of the people you'll meet. I’ve mentioned some already. Some of these people I've talked with face to face. Some I’ve listened to in workshops. Others I've never met—they left their stories on forums, in public transcripts of AI conversations, in the places people go when they're trying to make sense of something new. Their experiences are as real as anyone's. You don't have to know someone to learn from what happened to them.
The two teachers who faced the same disruption and went in opposite directions—one whose love of teaching broke, one who rebuilt what teaching means.
People who trained AI on their dead loved ones' messages. The results weren't what they expected.
A clinical coder in a large hospital who uses AI constantly and remains completely in charge of her own expertise. She'll show you what authorship looks like in practice.
You'll meet an econometrician in New Zealand who pushed AI to extend their thinking and discovered exactly where it falls apart.
You'll meet field researchers whose daily work is changing underneath them—some thriving, some struggling, all figuring it out in real time.
You'll meet me. More of me than I usually share.
And you'll meet yourself. In the patterns we describe, in the states we map, in the roles and the tradeoffs. This book is a mirror as much as a guide.
One more thing before we begin.
In 2016, our kids asked us what they should study if AI was going to do everything. We didn't have an answer. That question launched what became a decade of research, hundreds of conversations, and eventually this book.
What would I tell them now?
That the thinking that matters most is the thinking you do yourself—even when you're doing it with AI. That the relationships where people know you well enough to say "that doesn't sound like you" are worth more than any productivity gain. That you'll be fine, because humans have navigated every previous shift, and you'll figure out things my generation can't imagine.
But also: pay attention. Something real is happening. The window for understanding it clearly might be shorter than we think.
Stay curious. Stay connected. Stay the author.
Let's begin.