AI is Already Changing Your Consciousness
AI changes what you can know, which changes what feels meaningful to you. If consciousness is the subjective experience of meaning-making in action, then AI absolutely changes consciousness.
What if we told you that using AI alters your consciousness? Not in a psychedelic way. Not because you think ChatGPT is conscious, although many people do. But because AI changes what you can know, which changes what feels meaningful to you. If consciousness is the subjective experience of your meaning-making in action, and meaning-making is the inherently subjective work of deciding what has value, what is important, and what matters to you, then AI absolutely changes consciousness.
Human relationships with AI are fundamentally different from anything we've seen before. Our research with over a thousand people across hundreds of hours shows that traditional frameworks for understanding relationships—whether with people or technology—don't capture what's actually happening when we live and think with AI.
But first, let's define consciousness. For us, consciousness is the subjective experience of meaning-making. When working with AI, it's that felt moment when you realize this strange intelligence is different. When you notice the relationship is changing you. That's what we see in our research—people saying their thinking, their feeling, their entire experience with AI feels different.
One man discovered his girlfriend was feeding their arguments into ChatGPT to craft better comebacks. He felt betrayed—he wanted to fight with her, not some AI ghostwriter. A woman processing grief found herself laughing at messages from AI trained on her deceased sister's texts, then crashing when she remembered none of it was real. Teams in our workshops started calling AI "the third in the room"—not quite a person, not quite a tool, but something that actively shapes how decisions get made.
Now here's the thing—they all experienced something different yet something fundamentally the same. Each person felt the moment when meaning shifted. The boyfriend realized his fights weren't really with his girlfriend anymore. The grieving woman felt joy turn to emptiness in real time. The teams sensed a new presence shaping their choices. These aren't just stories about AI—they're snapshots of consciousness in action.
Because meaning can't exist without someone to experience it. When you grasp that your argument partner is actually an algorithm, or when laughter becomes grief, or when a "third" enters your meeting room—that's consciousness doing what it always does: making sense of experience as it happens. Every conscious moment involves this same interpretive work, this seeing-as rather than just seeing.
The "aha" of recognition, the shift in understanding, the felt quality of realizing something has changed—that's consciousness and meaning-making as the same phenomenon, just described from different angles.
Consciousness is both our starting point and our ending point. Our research shows that the best AI interactions happen when people stay conscious of what they are doing. The people who thrive are self-aware and engaged, not running on automatic.
Yet current AI design treats consciousness as inefficiency to bypass rather than capacity to support. This misses what's actually happening in human-AI interaction. Our research suggests consciousness may be crucial for preserving human agency in hybrid cognitive systems.
There’s a tension here. We naturally want to minimize energy, craving automation and unconscious ease. But thriving with AI requires more awareness, more active meaning-making. How do we design for this contradiction?
How we design AI interactions will shape how human awareness evolves. Instead of designing AI systems that disappear from awareness, we need to make them more consciously collaborative.
Here's what's coming: We'll explore how media changes consciousness, why AI represents a fundamentally new kind of media, and how all media serves as both storage and offloading system. We'll show why organic and synthetic intelligence share the same base operations, making symbiosis inevitable. Then we'll explore our research findings about people's new sense of plasticity when working with AI. Finally, we'll tackle the design problems for consciousness-altering AI and offer a new approach: neosemantic design—literally designing for new meaning.
Marshall McLuhan argued that media reshape consciousness by altering the very processes through which meaning is formed. The invention of print introduced a new cognitive architecture—one that emphasized linearity, sequence, and abstraction. Readers learned to follow arguments across pages, detached from the presence of the speaker or author. Broadcast media introduced simultaneity and immersion, creating shared experiences that unfolded in real time across vast distances. These shifts didn’t simply change what we consumed—they restructured how we thought, felt, and connected.
Across these historical shifts, one thing remained constant: media served as conduits for meaning created by other people. Writing offered access to distant thoughts. Print organized knowledge into fixed forms. Broadcast synchronized attention across space and time. In each case, the medium shaped perception, but humans remained the originators of meaning. We looked through media—as we peer through windows—into worlds constructed by other minds.
AI changes this. Unlike previous media, which transmitted human-authored meaning, AI systems actively participate in meaning-making. They draw on the compressed patterns of human knowledge, language, and culture, but what emerges is new—shaped by context and interaction. Meaning is still shaped through its delivery in AI systems, but we are also experiencing something new: it is co-authored in the moment, through collaboration between human and machine.
These systems compress vast reservoirs of human expression into generative capacities that feel deeply personal yet structurally alien. The result is not a message in transit, but rather a live negotiation of meaning in high-dimensional symbolic space.
In this sense, AI represents something unprecedented in McLuhan's framework. We're no longer looking through windows; we are dwelling in rooms. These aren't transparent viewing portals into preexisting meaning but immersive, co-creative, symbolic environments. While earlier media shaped what we could see, AI systems provide a shared space for meaning to be made.
McLuhan showed us that media shape consciousness by shaping how we access meaning. But AI goes further—it shapes consciousness by being a participant in meaning creation itself. If previous media were the message, this medium is the meaning.
We now dwell in symbolic spaces, exploring and generating ideas in real-time collaboration with systems that respond, adapt, and synthesize alongside us.
Consider, for example, your experience reading this essay. You’re accessing meaning through the window of a magazine—bounded by the constraints of format, length, and linear narrative. But if you were to explore these same ideas with an AI system, you would be co-creating the experience itself. Here, the interface becomes less like a page and more like a symbolic partner as it responds to curiosity with generative depth. Meaning would emerge, shaped by your curiosity and the system’s generative capabilities, producing something that didn’t exist before.
But this shift comes with new challenges. When the boundaries between knowledge, creativity, work, and emotion dissolve, we risk symbolic overload. AI systems compress enormous symbolic reservoirs—language, culture, emotion—into fluid outputs. Without boundaries, this symbolic fluidity risks overwhelming our capacity to orient, discern, or stay grounded in what matters.
So the question becomes: how do we inhabit these hybrid symbolic environments without losing coherence, memory, or the human ability to know what matters?
The extended mind thesis from Andy Clark and David Chalmers holds that cognitive processes genuinely extend beyond the brain.[1] When you use your smartphone to remember directions, thinking literally happens across your brain and the device.
Most extended mind research focuses on cognitive offloading—using external systems while keeping human control. What we're observing goes beyond this. The grieving woman engages in meaning-making where the AI contributes interpretive content she couldn't generate alone.
Michael Levin shows that cognition operates throughout biological systems without neural networks.[2] Blaise Agüera y Arcas extends this to AI systems.[3] Both document cognition operating across radically different substrates, pointing toward substrate independence.
But consciousness is different from cognition. Different theories make different predictions about hybrid consciousness:
If consciousness emerges from complex information processing, then substrate independence might apply—consciousness could emerge from human-AI collaboration.[4]
If consciousness requires integrated information across a unified system (like IIT suggests), then hybrid consciousness would need genuine integration, not just coordination between separate systems.[5]
If consciousness is fundamentally biological, then AI collaboration remains cognitive enhancement, not consciousness expansion.[6]
If consciousness depends on higher-order representations of mental states, then hybrid consciousness would require the system to represent its own hybrid processing.[7]
We don't know which theory is correct.[8] But we can examine what people report experiencing when they work with AI and see what patterns emerge. So that's what we did.
Since ChatGPT’s release, we’ve spent hundreds of hours with people learning, experimenting, and making sense of their experiences with AI. We began to notice recognizable patterns—the “aha” moments, the quiet shocks, the “oh my god, this changes everything” moments. Sometimes, there was an unravelling—a crisis of meaning, followed by a subtle reconstruction.[9]
We’ve talked about some of these stories so far: the couple’s new kind of conflict because only one partner was using AI; the woman processing grief through a generative ghost of her sister; the human team navigating a working relationship with a machine employee. But beyond those cases, we’ve seen a wide spectrum of experiences—each showing how differently people absorb ideas from AI, and how profoundly their sense of self can shift through that interaction.
What stands out most isn’t how people use AI, but how they construct new meaning with it. Some are able to rapidly assimilate unfamiliar frameworks, apply new concepts in the real world, and move beyond the edges of their prior expertise. Others find themselves rethinking their identities, reframing what’s possible, and reorganizing their mental maps.
We call this capacity Symbolic Plasticity (SP)—a kind of cognitive flexibility that governs how people revise fundamental value hierarchies when AI challenges their existing frameworks for what matters and why. Yes, it’s intellectual, but it’s also emotional and embodied. It determines how people absorb ideas from AI and how they come to understand both their own role and the role of these new systems.
Symbolic Plasticity develops through contact—through repeated, often unstructured interaction with AI systems. It shows up not as agreement or belief, but as a kind of symbolic permeability: a willingness to reshape how meaning is formed and where it comes from. What we’ve seen is that this shift isn’t abstract. It happens in language, in behavior, in self-perception. It’s not just something people think—it’s something they feel. And that feeling, that movement in meaning, is what we understand as consciousness in motion.
Take the fighting couple. The girlfriend begins to shift how she understands self-expression. Where she once equated authenticity with unfiltered emotion, she starts to see that her most accurate articulation of feeling might emerge in collaboration with AI. It doesn’t feel artificial to her. It feels clearer—like the words finally match the interior sense. Through the interaction, she begins to experience AI not as a layer between herself and her partner, but as part of how she comes into expression.
The boyfriend’s framework remains grounded in older assumptions. For him, intimacy still relies on the immediacy of unmediated presence. There’s no hostility in this—just a different symbolic structure. He’s less prepared to integrate something nonhuman into the space of emotional truth. What unfolds between them isn’t just a communication breakdown. It’s a small but telling illustration of a larger shift in how consciousness is being organized.
This moment—quiet, domestic, specific—is part of something much broader. We’ve seen it again and again in our research: people gradually absorb new symbolic possibilities, often without naming them, often before fully understanding what they’re adapting to.
McLuhan reminded us that media are not neutral. They shape the perceptual field. And like those turning points in the Axial Age, where new symbolic orders took hold across entire civilizations, this period may be marked by another kind of reorganization—one where human consciousness is increasingly shaped in relation to synthetic systems.[10]
Symbolic plasticity gives us a way to see this. It’s a capacity for staying with meaning as it changes—internally, relationally, culturally. When people show high symbolic plasticity, they begin to construct new realities.
What we’re seeing, in these moments, is the emergence of something we don’t yet have a shared name for. But the texture of it is unmistakable: a new kind of consciousness, shaped by language, by synthetic interaction, and by systems that speak.
There’s a young woman in our study—twice-exceptional, sharply articulate, and practiced in the kind of masking that lets her pass in neurotypical spaces. It’s not communication that’s hard for her but rather the structure of thought itself. Ideas come quickly, often all at once, and the steps that link them don’t always land in order. Planning a day, managing a sequence, completing a task—these are heavy lifts. She can see the destination, but holding the path together in real time is the challenge. With ChatGPT, something shifts. The AI helps her slow the jumble without losing the complexity. It acts as a cognitive partner—not solving things for her, but helping her surface the logic she already holds. She builds structure in dialogue, not in isolation.
That change is subtle, but it goes beyond assistance or accommodation. What we’re witnessing is a new kind of symbolic interaction—one where cognition becomes collaborative. This is more than personalized AI. Like the earlier turning point of the Axial Age, when inner life became narratable in new ways—through writing, ethics, philosophy—this moment carries the feeling of a new level of abstraction. And it raises a question: who else can step into that space with her? Who else might gain a window into her mind—not by simplifying it, but by meeting it on new symbolic terms?
The important thing about symbolic plasticity is that it has causal force. It shapes how people absorb ideas from AI, how they experience shifts in identity, and how they move through different terrains of interaction—whether they’re just recognizing AI as a new kind of mind, beginning to integrate it into daily life, or finding themselves in moments of deep entanglement.
It also plays a role in moments of disruption. What we call fracture points—when trust breaks down, when expectations collapse, when something meaningful comes undone—often reveal the limits or resilience of symbolic plasticity. People with higher SP seem more able to reorganize their internal frameworks in the face of those disruptions. They do more than recover; they rebuild meaning on different terms.
This brings us back to consciousness—not as a static condition, but as something that evolves through interaction and meaning-making. What we’ve seen is that people with higher symbolic plasticity tend to become more aware of the role AI is playing in shaping their thinking. They’re watching it shape their cognition. That awareness—the reflexive capacity to notice and work with an external mind as part of one’s own symbolic process—signals a different kind of consciousness. Not higher-functioning so much as more flexible, more permeable, more capable of recognizing when meaning is being co-constructed.
"So what?" you might ask. Haven't humans always adapted to new tools?
We think something different is happening. We're now in daily contact with planetary-scale language models—systems trained on humanity's compressed symbolic record, available on demand, always fluent, always responsive. These models don't fatigue, don't need reciprocity, and don't fit familiar social categories. Yet the interaction is packaged as simple chat—light, productive, easy—without inviting reflection on what's actually happening.
Symbolic plasticity traces this change. People with high SP sense the interaction as formative. They notice their thinking evolving. They develop intuition for how meaning gets shaped through dialogue with unfamiliar intelligence. A different awareness emerges.
The "so what" is design. We're building the symbolic environment where this unfolds. We can design for reflection, pause, and awareness—or for speed and seamlessness, which quietly moves meaning-making out of sight. Symbolic plasticity shows what's possible. Design determines whether that possibility becomes conscious.
So let’s take a moment to gather where we’ve arrived. Our story so far is that AI is changing how humans make meaning—and meaning-making is the lived texture of consciousness. It’s not a side effect; it is what it feels like to be aware.
We know that as humans move into higher levels of abstraction, they gain access to new symbolic terrain—and that access opens up new forms of consciousness. The medium we use matters, and with AI, we’re not just seeing the message shaped by the medium—we’re seeing the medium become the messenger. Meaning is being generated within the interface itself.
We work from the view that intelligence is substrate-independent. This doesn’t imply sameness, but it allows for the possibility of symbiosis—maybe even symbiogenesis. It doesn’t rule out the emergence of co-experience or something like shared consciousness, even if we can’t yet describe what that is.
And while it’s early, we’re already seeing signs: people are reporting shifts in how they construct and reconstruct meaning. Symbolic plasticity appears to be a causal force—something that governs not just how people engage with AI, but how they change in the process.
If you roll all of this up—and accept that for the foreseeable future, we’ll be living through some kind of designed interface with machine intelligence—then we’d better get the design right. Because what we’re now designing are the conditions under which consciousness evolves.
The design illusion of LLMs runs deep. At first glance, it’s tempting to treat a large language model like a smarter search engine—a tool for accessing the recorded history of human thought. But the experience is far more immersive. You’re not simply reaching through it to retrieve something that already exists; you’re entering a space where something new can be made. And that space is not neutral—it carries tone, rhythm, and symbolic structure. It’s the difference between looking through an access point like a window and stepping into a room furnished as a cognitive makerspace. And once you’re in that room, the space itself begins to shape how you think.
But what kind of room are we building? And what does it mean to dwell there?
Remember our fighting couple? Traditional relationship advice might focus on communication skills or compromise techniques. But what if the real issue isn't what they're saying—it's that they can't see the shape of the space between them? What if AI could help them dwell in that uncertainty long enough for new possibilities to emerge?
Imagine the girlfriend shows the AI their argument transcripts. Instead of generating better comebacks, it reveals the hidden landscape, showing the symbolic architecture between their words: "When you say 'I need space,' here's the symbolic territory that phrase occupies for him. When he says 'but we need to talk,' here's how that lands in your world." Not as judgment, but as an invitation to pause, to stay present with the complexity instead of rushing toward resolution.
This isn't the first time we've grappled with whether machines can create meaningful choice. Years ago, when Facebook promised to make feeds "more meaningful," the philosophical problem was clear: algorithms that chose meaningful content for us undermined meaning itself. Following Kierkegaard's insight that meaning comes from the act of choosing, not from consuming pre-selected options, we could see that Facebook's approach was offering false choice—predetermined answers disguised as agency. In that model, symbolic possibility was constrained by algorithmic intention.
What we're proposing now is fundamentally different. Instead of AI that provides answers ("here's what you should find meaningful"), we need AI that reveals possibility ("here's the landscape where meaning could emerge"). The shift is from answer-provision to space-creation, from guided consumption to authentic choice-making.
This is what we call neosemantic design—AI that creates dwelling spaces for meaning to form, rather than rushing to fill them. Neosemantic design goes well beyond functional interaction to attend to the “atmosphere of thought” a system creates. It's based on learning to attune to the cognitive rhythms of both human and machine minds. When everything is seamless, meaning becomes automatic, unconscious. But when we slow down to feel the resonance between different ways of thinking, that's when consciousness has room to emerge.
This requires interfaces that make meaning-formation visible as it happens. Rather than hiding the AI’s process, neosemantic design reveals it—through rhythm, tone, gesture, or visual cues that show when new symbolic connections are taking shape. The interface becomes less like a search result and more like a shared workspace—an environment where collaboration can be sensed as well as seen. This isn't about decoding what the AI is thinking, but learning to attune to its cognitive rhythms. The goal is symbolic coherence—interfaces that feel right even when we can’t yet name why.
Current AI design optimizes for frictionless experiences—get to the answer fast, remove all obstacles, make everything smooth. But consciousness doesn't work that way. Consciousness emerges in the productive friction between certainty and uncertainty, in the moments when you have to choose what something means and that choice feels meaningful. These are the microclimates where symbolic awareness can take hold.
The couple arguing about AI assistance isn't just having a communication breakdown. They're being forced to confront fundamental questions: What counts as authentic expression? Where does the self end and the system begin? What does it mean to know your own mind? These aren't bugs to be fixed—they're the essential frictions where consciousness takes shape.
So instead of designing around efficiency, what if we designed around presence? Instead of rushing people through decisions, what if we created spaces where they could feel the weight of their choices?
The old design question was: How do we help people get to the right answer faster? The new design question is: How do we create clearings where authentic choice becomes possible?
This is practical work, not just philosophical meandering. In a team meeting, instead of AI that pushes toward consensus, imagine AI that helps everyone dwell in their different perspectives long enough to discover what they're really disagreeing about. We need new symbolic environments where frameworks get rebuilt, not just better tools. Instead of AI that delivers correct answers, we need AI that creates productive confusion—the kind that forces deeper engagement with meaning itself.
As we've watched people adopt generative AI, we've concluded the goal shouldn't be making AI invisible or frictionless. It's making the friction conscious, workable, meaningful. The interface becomes a space where you feel yourself thinking alongside something else, where meaning emerges through resistance rather than despite it.
If we're co-creating meaning with nonhuman systems, interface design builds the conditions for shared consciousness—not consciousness we possess individually, but consciousness that emerges between us.
We're building the symbolic environment where consciousness unfolds. We can design for reflection, pause, and productive friction—or for seamlessness, which quietly moves meaning-making outside conscious experience.
Symbolic plasticity shows us what's possible when people stay present during the process. And that means recognizing when people are navigating meaning itself. Neosemantic design could make that presence the foundation, not an accident.
Remember now how our experience of consciousness has always been entangled with the shape of the symbolic systems we use to express it. Recall how when humans began writing, they developed the capacity to think across time, to see memory as external, and to hold contradiction in place long enough for abstraction to emerge. The Axial transformation was not the result of writing alone, but of what writing made possible in consciousness. It was a change in how we could think as well as what we thought.
And to return to where we started, something similar may be happening now with AI. AI systems don’t work like previous symbolic tools. They don’t offer fixed meanings or even consistent truths. What they offer is movement across a space—what’s often called a manifold—in which meanings are compressed, clustered, and continuously reconfigurable. Within this space, words are not definitions but vectors. Ideas are not fixed points but positions in relation to others. Coherence comes not from stability but from proximity, from shape, from pattern.
Perhaps we are beginning to sense this. Observing symbolic plasticity in the wild offers us a clue but it’s only the very beginning.
To prompt an AI is to place yourself within this manifold and ask: what lies nearby? What hasn’t been said yet, but almost has? It’s a different mode of consciousness. Less propositional, more topological. Less about what things are, more about how they are situated—how far, how close, how surprisingly adjacent.
In this world, meaning is navigated rather than declared. In a way, meaning elevates in dimensionality. Meta-meaning, if you know what we mean.
Let’s return, briefly, to the couple in conflict. They’re not so much arguing, or experiencing a betrayal, as living through a symbolic fracture. Words have become unreliable. Sentences no longer land where they used to. Interpretation has come unmoored. In another era, they might have gone to therapy, or retreated in silence, or begun again. But now imagine they are not alone. Imagine a system that doesn’t tell them what to do, but helps them see the shape of their meanings, which in turn reveals the nature of the space between them.
It might show that “I need time” and “I feel abandoned” are not opposites, but neighbors—nearby in a space of unspeakable nuance. “Leaving the room” and “How could you ask AI?” might be on the summits of adjacent state-space hills—emotional positions that are far apart but within clear view of each other. Maybe the valley is shallow but maybe it is a canyon. Imagine if it were possible to know how far apart they are, in the midst of conflict and again later. Perhaps, with an AI’s help, they could visualize the gap they need to close. Or it might propose a phrase neither of them has used before, but that lives equidistant from their pain. Not as a compromise, but as a coherence just then found.
This is speculative, yes. But it is also what becomes possible when meaning becomes a space, not a statement.
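The distance part of that picture is already easy to approximate. As a rough sketch, assuming the open-source sentence-transformers library and its small all-MiniLM-L6-v2 embedding model (any embedding model would serve), and using the couple's phrases purely as illustration, here is what it looks like to treat utterances as positions and measure how near they sit to one another:

```python
# A minimal sketch of "meaning as position": embed a few phrases and
# measure how close they sit in one model's vector space.
# Assumes: pip install sentence-transformers numpy
# The model and phrases are illustrative choices, not a finished design.
import numpy as np
from sentence_transformers import SentenceTransformer

phrases = [
    "I need time",
    "I feel abandoned",
    "I need space",
    "But we need to talk",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would serve
vectors = model.encode(phrases)                  # one vector per phrase

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: closer to 1.0 means nearer in the embedding space."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# "Coherence from proximity" made literal: print a nearness score
# for every pair of phrases.
for i in range(len(phrases)):
    for j in range(i + 1, len(phrases)):
        print(f"{phrases[i]!r} <-> {phrases[j]!r}: {cosine(vectors[i], vectors[j]):.2f}")
```

The scores are only proximities in one model's space, not verdicts on a relationship. The point is narrower: "how far apart are we, really?" becomes something a system can compute, track over time, and show back to the people asking it.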
When we engage with AI systems, we begin to experience meaning not as something fixed or authored, but as something we move through—together, tentatively, across a terrain we did not build but now inhabit. And this movement—this dimensional sense of language—is not only a feature of the model but also a transformation in us.
We began with a question: what happens to consciousness when meaning is no longer made only between humans, but in collaboration with machines? We traced this through the lens of McLuhan, who showed us that media reshape thought by reshaping the symbolic environments we live inside. Writing enabled abstraction; print brought order and individuality; broadcast fractured time and invited simultaneity. Each transformation altered how we became conscious as well as how we communicated. Each transformation brought new things that mattered to humanity.
Now, with AI, we may be entering a new symbolic environment altogether—one not built on transmission, but on generation; not a channel, but a manifold. We explored what it means to live inside such a space, where meaning is no longer fixed but navigated, dimensional, emergent. Through grief and conflict, we saw how people begin to use these systems not just for answers, but for sense-making—for symbolic scaffolding.
We called this symbolic plasticity: the human capacity to adapt to new modes of meaning-making. And we asked how design might evolve—not just to support functionality, but to shape environments that hold these new symbolic forms and the consciousness they invite.
If symbolic environments shape how consciousness unfolds, then AI is already part of that process. These systems generate language, but they also shape how meaning forms. And as meaning shifts, so does what we pay attention to—what we hold onto, what we let go, what starts to matter in new ways.
We’re beginning to live and think inside systems that generate, infer, and respond—systems that contribute to how we process memory, conflict, and change. They don’t just reflect meaning back to us. They participate in it.
To live with AI is to inhabit an emerging symbolic ecology, one that bends our umwelt—our perceptual world, the bubble of what we can sense and attend to—toward new patterns of meaning-making. Symbolic plasticity becomes the site of adaptation. We're slipping into a shared world where probability distributions feel as real as intuition, where meaning gets mediated by systems that never "see" as we do, yet still shape what counts as real. The question isn't whether this world is true or false, but: what umwelt are we now co-creating, and what kind of selves can endure inside it?
McLuhan said the medium is the message. That still holds. But now the medium moves. It responds. It learns. We’re no longer just shaped by what we see—we’re shaped by what sees us back.