Why AI Requires Universities to Do What They Were Built For

The skills that can be taught quickly can be automated quickly. What's left is the slow work universities were built for: helping people become who they are.

Exploring in Alaska. Photo credit, Lucy Waghorn

In 2025, we introduced the AI Collaboration Cube—a framework for understanding how people relate to AI based on three psychological orientations: Cognitive Permeability (how far AI extends into your reasoning), Identity Coupling (how closely your sense of self connects to your AI-assisted work), and Symbolic Plasticity (your capacity to recognize that meaning changes with context).

We mapped eight roles people occupy, from Doer (lower left—AI handles tasks, you remain separate) to Co-Author (upper right—genuine amplification, AI woven into thinking while you maintain authorship).
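For readers who want the cube's structure laid out explicitly, here is a minimal sketch, in Python, of how three orientations yield eight configurations. It is illustrative only: it assumes each orientation can be read as simply "low" or "high," and it places Doer and Co-Author at opposite corners based on the "lower left" and "upper right" descriptions above. The other six roles exist in the framework but are not named here.

```python
# Illustrative sketch: treats each orientation as a simple low/high axis.
# The framework describes orientations, not binary switches, so this is a simplification.
from itertools import product

DIMENSIONS = ("Cognitive Permeability", "Identity Coupling", "Symbolic Plasticity")

# Only two corners are named in the article. Placing Doer at low/low/low and
# Co-Author at high/high/high follows the "lower left" / "upper right" description.
NAMED_CORNERS = {
    ("low", "low", "low"): "Doer: AI handles tasks, you remain separate",
    ("high", "high", "high"): "Co-Author: AI woven into thinking, you hold authorship",
}

for corner in product(("low", "high"), repeat=3):
    profile = ", ".join(f"{dim}={level}" for dim, level in zip(DIMENSIONS, corner))
    role = NAMED_CORNERS.get(corner, "(one of the six other roles)")
    print(f"{profile} -> {role}")
```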

This article goes deeper into identity. Why professionals resist coupling their identity to AI. Why students are prone to configurations that feel stable but undermine development. And why universities—institutions built around identity formation—are exactly where this gets addressed.


Professionals Resist

We analyzed 2,500 stories of human experiences with AI. Scientists, creatives, technical specialists, caregivers, knowledge workers across domains. A pattern emerged that matters for universities: professionals who use AI most powerfully tend to resist coupling their identity to it.

A clinical coder uses AI constantly. The AI gets most codes wrong but this is beside the point. The coder uses AI conversations to organize thinking about medical cases. "AI helps me to converse with my own thoughts." When AI suggests a code, the coder corrects it and works through the right reasoning.

The key phrase: "I am the professional in the situation and the AI is my tool for working things out."

Note the use of the present tense. This is identity-defining language. The coder knows who they are and AI serves that identity. It doesn't constitute it.

A social services worker uses AI heavily—summarizing shift logs, prioritizing events, organizing tasks. But: "I do enjoy writing the basis for all of my logs independently. I want my original interaction to hold true when I log notes."

And: "I am a bit anxious to just let AI do all of the thinking for me."

The anxiety is itself identity-protecting. The practice of writing the original basis first keeps authorship where it belongs: with the human.

A theoretical physicist uses AI as "a brainstorming partner that can come up with ideas at a much quicker rate than I individually can search for and implement." AI expands the search space beyond human capacity. But: "I think of its use more as a way to know what to search for and start implementing one-by-one, after I have manually verified its validity."

AI proposes while a human verifies. The physicist's expertise determines what matters. The physicist remains a physicist who happens to use AI.

A programmer describes deep integration: "Using AI for tasks feels like me being me to begin with. It feels like an extension of self." In the same breath: "I don't 'trust' AI at all. In fact, my distrust level is pretty high."

Deep integration and particularly fierce identity protection. Both are true at once.

These professionals formed identity before AI arrived. Years of training, practice, struggle, accountability to standards outside any AI relationship. When AI entered their work, they had something already there. They could configure AI around a settled sense of who they are.

This is the world universities need to prepare students for.


Why Students Drift Differently

Students don't have settled identity. They're forming it, and building capability, at the same time AI is available to substitute for both.

This makes students prone to drifting into AI configurations that feel stable—arrangements where AI does cognitive work and the student attaches their developing sense of self to the outputs. High Identity Coupling without the settled identity underneath.

It feels productive. Assignments get done. The student feels like a researcher, a writer, an analyst. Maybe a cheat sometimes. But the developmental work underneath isn't happening. The identity is coupling to outputs rather than to capability.

A student writing their first research paper is supposed to be developing that capacity through the assignment's struggle. When AI handles the struggle, the development doesn't occur. And the student may not notice—they have no prior experience of what developing that capacity feels like.

The professionals in our research can feel when something is off. The social services worker notices anxiety about letting AI do the thinking. An ESL tutor notices: "I started to feel like I was forgetting how to write those paragraphs for myself!" They have reference points. They know what their capabilities feel like from inside.

Students are building those reference points. Or are supposed to be.


You Can't Become an Expert Through AI

The professionals know something students are still learning: expertise requires tacit knowledge that can't be outsourced.

A microbiologist: "I worked with one bacterial strain where you had to initiate various steps of the isolation when the cells/cell lysate reached specific colors. The differences in color have to be seen to be understood and is seldom written down anywhere so an AI tool would not understand the difference to look for."

That knowledge lives in accumulated experience. Pattern recognition built through repetition. No AI can give it to you. You develop it yourself.

A dietary supplement researcher: "I would like to use AI tools in data analysis, but I'm unsure of doing so. A lot of the data analysis that I do is complex and nuanced, but where I reach my limit I usually rely on a specialist—a biostatistician—to fill in the gaps. Because I don't know what the biostatistician sees or the tools they use to analyze the information well enough, I can't fact-check an AI that was analyzing my data to make sure it was correct."

Without expertise, you can't verify the output. Without verifying the output, AI gives you answers you can't trust.

A professional responsible for signing off on technical work: "My role is needing to be able to sign off. So I'm quite cautious about things being black box." And: "So what the difference is going to be—as a certified professional signing off—is: do you actually understand what it's doing or not? You do need to know it, otherwise you might not have a job."

The professional who takes responsibility has to actually understand. That understanding comes from development that can't be shortcut.

There's a popular narrative that AI will make expertise obsolete—that anyone can do anything with the right prompts, that deep knowledge no longer matters when AI has all the answers. This misses something fundamental. AI can retrieve and recombine codified knowledge. It cannot give you the judgment to know when that knowledge applies, the pattern recognition built through years of practice, or the capacity to notice when something is off.

What AI can do is accelerate the cycle between exploration and exploitation—between discovering new territory and building mastery within it. AI can help you explore faster, surface possibilities you wouldn't have found alone, compress the search for what's worth learning. But the learning itself, the development of tacit knowledge, still requires doing the work. The explore phase can speed up. The exploit phase—building genuine expertise—still demands repetition, struggle, and time.

Students who skip the developmental work never become the kind of person who could use AI powerfully. Powerful AI use requires expertise to direct it, verify it, and know when it's wrong. That expertise comes from the very struggle students are tempted to skip. AI can help you find what's worth mastering faster. It can't do the mastering for you.


The Path Requires Self-Knowledge

The cube maps a journey. Doer sits in the lower left—AI handles tasks, identity stays separate, meanings stay fixed. Co-Author sits in the upper right—AI woven into thinking, clear sight of what role AI plays, identity that holds authorship through collaboration.

Getting from Doer to Co-Author requires development along all three dimensions. Higher Cognitive Permeability—letting AI into your thinking process. Higher Symbolic Plasticity—seeing clearly how meaning changes when AI participates. And Identity Coupling that works for you rather than against you.

That last one is the trap for students. Identity Coupling isn't bad. The professionals have it too—their work matters to who they are. The difference is what identity couples to. Professionals couple to capability, to expertise, to standards they can verify. Students without settled identity are prone to coupling to outputs—to what gets produced rather than what gets developed.

Moving along the cube deliberately requires knowing where you are. Knowing what you're building toward. Having a settled enough sense of self to configure AI around it rather than having AI configure you.

This is self-knowledge. Universities have always claimed to develop it.


What Universities Have Always Claimed

Universities have always claimed identity formation as central to their mission. Helping young people become thinkers, professionals, citizens. Become people who know who they are and what they can contribute.

AI hasn't disrupted this mission. AI has made it urgent in new territory.

Students will use AI. They're using it now, forming identity with AI woven through the process. The question is whether they develop the self-knowledge to handle that formation deliberately. Whether they build the capability that lets them resist problematic Identity Coupling. Whether they become the kind of professional who can say "I am the professional in the situation and AI is my tool."

That requires what universities provide: a structured environment for identity formation with human guides who witness and mentor their becoming.

Students can't develop self-knowledge alone. They need frameworks that make visible what's happening in their AI relationships. They need someone asking: Where are you on the cube? What's coupling to what? Is identity attaching to capability or just to outputs?

Students can't build capability alone—not the tacit kind that lets you verify AI, direct it, know when it's wrong. That requires struggle with the work itself. Repetition. Practice. Feedback from humans who have the expertise students are building toward.

Students can't form professional identity alone. Identity forms in relationship. With mentors who model what expertise looks like. With peers who push back. With communities that hold standards independent of any AI system.

This is what universities do. What they've always said they do.


The Course

Any course that takes this seriously would cover:

Self-knowledge first. Students learn to locate themselves on the cube. They examine their actual AI use: Where does AI enter my thinking? What is my sense of self attaching to? Can I tell the difference between "I did this" and "AI did this"?

Building toward expertise. Students learn which developmental work can't be outsourced. They practice the struggle that builds tacit knowledge. They experience what capability feels like from inside—so they have reference points when something feels off later.

Deliberate configuration. Students practice putting AI in different roles for different purposes. Building foundational capability: keep AI out, do the work yourself. Expanding into new territory: let AI propose, verify before adopting. Producing knowledge work with capability already built: collaborate throughout.

Identity that holds. Students develop professional identity grounded in discipline and capability—identity that can configure AI rather than being configured by it. They practice saying "I am the professional in this situation" and meaning it.

Human accountability. Students work through problems with humans, not just AI. They explain their reasoning to peers who push back. They learn from mentors who have the expertise they're building toward. They stay embedded in relationships where their development gets witnessed.


Back to Roots

This is a call for universities to return to what they've always claimed.

The professionals in our research navigate AI from settled identity. They know who they are. They built that through years of training, practice, struggle, and accountability. Now they can use AI powerfully because they have something underneath—capability that lets them verify, expertise that lets them direct, identity that holds authorship.

Students need to build that foundation. Universities are where it happens.

The course we've described isn't new territory for higher education. It's old territory made urgent. Identity formation. Capability development. Self-knowledge. Human accountability. The work universities have always said they do.

AI just made it matter in a new way.


Helen is a Commissioner for Higher Education for the State of Oregon. Opinions are hers.

This article draws on forthcoming research on human authorship and AI. Subscribe to our newsletter for updates when the full work is published.

We have the research frameworks. We're looking for institutional partners and funders to help turn them into curriculum. If you're interested in building this with us, reach out: hello@artificialityinstitute.org.
