How to Think About the AI Jobs Data Right Now

Studies on AI and jobs are confusing. Here's how we make sense of them.

Artificiality Summit 2025. Credit: Rob Kerr

Several times a week someone sends me a chart showing how "exposed" their job is to AI: treemaps of the labor market colored by risk level, radar charts showing AI capability pressing outward toward the edges of human work. They are eye-catching and widely shared. They almost always produce anxiety rather than insight.

A lot of research on AI and the labor market was published in March. Here's how we are thinking about the studies and data right now.

The data is clearer than you think — and also less useful than you'd hope

The best empirical research available — from the St. Louis Fed, the Federal Reserve Board, and the Yale Budget Lab — shows no evidence of net job losses at the industry level, even in sectors with the highest AI adoption. Forty-three percent of U.S. workers now use AI for their jobs. Productivity is rising in high-adoption industries. Fewer than 100,000 of the 1.2 million layoffs tracked in 2025 (under 9 percent) were primarily attributable to AI. The Yale Budget Lab, which has been tracking the occupational mix month by month since ChatGPT's launch, says the rate of change in the labor market is not meaningfully different from previous periods of technological disruption.

But the lived experience is full of contradiction. Some people are losing roles that don't get backfilled. Others are producing twice the output with AI handling what a junior used to do. Still others are working harder than ever because expectations have reset upward. All of this is happening at the same time, which is why the aggregate picture is so difficult to interpret.

There is one concrete, occupation-specific signal we pay attention to. Erik Brynjolfsson's team at the Stanford Digital Economy Lab, working with ADP payroll data covering millions of workers, found that overall employment in AI-exposed occupations is up — but employment for 22- to 25-year-olds in those same occupations fell 6% between late 2022 and mid-2025. Software engineering and customer support were hit hardest. Entry level is where the pressure is appearing first. We worry about this, and it's possible that these young workers really are "canaries in the coal mine," exactly as the paper's title suggests.

The problem is that standard economic measurement tracks whether people have jobs, not what those jobs contain. When a team of five becomes a team of three producing the same output with AI assistance, that shows up as a layoff, not an automation event. When a company freezes hiring and asks existing staff to absorb the work using AI tools, that shows up as nothing: no displacement, no efficiency gain, just people working harder. When someone keeps their title but half their former responsibilities have been absorbed by a model, the occupation statistics don't move. Changes are happening at the task level, inside roles, and the measurement infrastructure was not built for that.

The Yale Budget Lab's Martha Gimbel has been direct about how companies are using AI as a convenient explanation for cuts that have other causes. She points out that no CEO is going to tell investors they mismanaged the pandemic overhiring cycle. Of course they are going to say they're rightsizing the company to invest in AI and win the future. That narrative rewards the stock price. It also pollutes the data, making it harder to distinguish real AI-driven displacement from ordinary business cycle layoffs.

So we have confusion:

  • The people predicting mass displacement have no supporting data.
  • The people saying everything is fine have early data on their side, but they are measuring with instruments designed for a different kind of economic change.
  • Interest rate effects, post-pandemic overhiring corrections, and strategic AI-washing are all generating noise that makes the signal almost impossible to isolate.
  • Everyone knows someone who has had their job eliminated or their income reduced because of AI.

The historical pattern, where new tasks and roles emerge to replace displaced ones, may still hold. Or the speed and breadth of AI capability improvement may break it. Anyone claiming certainty in either direction is outrunning the evidence.

Exposure still doesn't mean impact

This part hasn't changed since we first wrote about it, but it bears repeating because the charts keep circulating without the context. When a study says your job has high "AI exposure," it means some of your tasks overlap with things AI can currently do. It does not mean your job disappears, gets worse, or that fewer people will be hired for it.
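
To see how little the metric claims, here is a minimal sketch of what an exposure score computes. Everything in it is hypothetical: the task lists, the set of AI-capable tasks, and the equal weighting are invented for illustration, and real studies build their task taxonomies from sources like O*NET and observed usage data. The arithmetic is the same in spirit, though: exposure is the share of a job's tasks that overlap with what AI can currently do, and nothing more.

```python
# Hypothetical illustration: "exposure" is task overlap, not job loss.
# The task lists and AI-capable set below are invented for the example.

AI_CAPABLE = {"draft documents", "summarize text", "answer routine questions"}

JOBS = {
    "paralegal": ["draft documents", "summarize text",
                  "file motions", "interview clients"],
    "plumber":   ["diagnose leaks", "replace fixtures",
                  "answer routine questions"],
}

def exposure_score(tasks: list[str]) -> float:
    """Fraction of a job's tasks that overlap with AI capabilities."""
    return sum(task in AI_CAPABLE for task in tasks) / len(tasks)

for job, tasks in JOBS.items():
    print(f"{job}: {exposure_score(tasks):.0%} exposed")
# paralegal: 50% exposed, plumber: 33% exposed -- neither number says
# whether the job shrinks, grows, or changes shape.
```

A score like this can't distinguish a task AI replaces from a task AI merely assists, which is exactly why a high number is not a forecast.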

Anthropic's own Economic Index — the most data-driven effort to measure this — found no statistically meaningful change in employment outcomes across occupations with the highest AI exposure since ChatGPT launched. Computer programmers show 74.5% observed AI exposure. They are not being laid off at scale but are writing more code, faster, with AI assistance. The Yale Budget Lab tested this from a different angle, tracking the share of workers in high-exposure occupation quintiles over time. That share has been flat. If AI were automating jobs out of existence, you would see it moving.
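
The Yale-style check is simple enough to sketch. Assuming you had monthly employment counts by occupation and an exposure score for each one (both invented below; the Budget Lab works from real occupational statistics), you would flag the most-exposed occupations and track their share of total employment over time. A flat series is the "nothing is moving" result described above.

```python
import pandas as pd

# Invented data: monthly employment and exposure scores per occupation.
df = pd.DataFrame({
    "month":      ["2023-01"] * 4 + ["2025-06"] * 4,
    "occupation": ["programmer", "paralegal", "plumber", "nurse"] * 2,
    "employment": [800, 300, 500, 900, 790, 310, 505, 920],
    "exposure":   [0.75, 0.60, 0.10, 0.20] * 2,
})

# Flag the most-exposed occupations. A simple cutoff stands in for the
# top exposure quintile, which four made-up occupations can't support.
high = df["exposure"] >= df["exposure"].quantile(0.8)

# Employment share of high-exposure occupations, month by month.
share = (df[high].groupby("month")["employment"].sum()
         / df.groupby("month")["employment"].sum())
print(share)
# 2023-01    0.320
# 2025-06    0.313  <- roughly flat: no visible shift in the mix
```

If AI were automating these jobs out of existence, the high-exposure share would fall as those occupations shed workers. Per Yale, the real series has stayed flat since ChatGPT's launch.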

Dave and I have a particular problem with the radar charts. The mental model behind the exposure charts is that human capabilities are a fixed target and AI is a rising tide approaching it. But human work is not a fixed target. When you automate part of a job, the remaining human parts often become more valuable and expand. The history of automation is full of this. The canonical example: ATMs didn't reduce bank teller employment, because branches became cheaper to operate, banks opened more of them, and tellers shifted to the relationship work that machines couldn't do.

That pattern may hold again. It may not. The honest answer is that we are in the middle of finding out.

LLMs changed what's at risk

For decades, automation targeted structured, predictable, physical tasks. The mental model was Moravec's paradox — what's easy for humans is hard for machines, and vice versa. The policy response was built around that: retrain factory workers, invest in knowledge-economy skills.

Recently, Arvind Narayanan has pointed out that Moravec's paradox may be a measurement error. The paradox was never empirically tested. It persists because AI researchers focus on problems that are interesting to them — tasks that are easy for humans but hard for AI (like walking), and tasks that are hard for humans but easy for AI (like search). They ignore the vast number of tasks that are easy for both, and the vast number that are hard for both, because neither category is interesting research. Remove those two quadrants and of course what's left looks like a clean inverse relationship.

The point is that "reasoning" in closed domains like chess is a completely different problem from reasoning in open-ended domains like law or medicine, and the AI field has spent decades conflating the two. The evolutionary story — that abstract thought is a thin veneer on ancient sensorimotor skills and therefore easy to replicate — doesn't hold up either. Reasoning in the real world depends on exactly the common-sense knowledge and situated judgment that the paradox classifies as "easy."

This matters because the old automation playbook was built on Moravec's paradox. For decades, the assumption was that automation would keep targeting structured, predictable, physical tasks. Large language models arrived into a world that had organized its entire labor strategy around a distinction that may have been a selection artifact. LLMs do the knowledge work — writing, analysis, coding, legal research, customer communication — that was supposed to be the safe harbor. The jobs the old studies said were at low risk are the ones now showing the highest AI exposure.

LLMs also introduced a failure mode that previous automation didn't have. You know this one: they are often confidently wrong. A robot welds the joint or it doesn't. An LLM gives you a well-structured legal brief that cites cases that don't exist. The practical result is that for most professional work, the human's job is shifting from doing the task to evaluating whether the AI did the task correctly, which is a different skill altogether.

And the AI industry's entire research trajectory is oriented toward eliminating the need for that human check — that is what "AGI" means in practice. When the major labs talk about progress toward AGI, what they are measuring is how close they are to output that doesn't require a person to verify it.

This makes "exposure" a particularly misleading metric. The question that matters for your own career is: can AI do my tasks reliably enough that someone would remove me from the loop? For most professional work, the answer is still no. But —and this is a hard question to answer — the follow-up question is: how much of my current value comes from doing tasks versus from the judgment, context, and relationships that surround them?

What still holds

Across every major automation study — Oxford (2013), McKinsey (2017), Anthropic (2025–26), ours (2016) — one finding has been consistent. Jobs involving high unpredictability are where humans stay ahead. When the environment is messy, when people are unpredictable, when the data is ambiguous and the situation is evolving, humans outperform machines. Creative problem-solving in novel situations. Assessing what people don't say. Making judgment calls with incomplete information. Managing a crisis that doesn't fit the manual.

LLMs haven't changed this. They struggle with genuine novelty, with knowing what they don't know, with aesthetics, with contested definitions of success, and with situated judgment. These are all high-effort, high-expertise practices with no objectively verifiable answer.

Brynjolfsson's data shows that employment is falling among workers who use AI to automate their existing tasks and growing for those who use it to learn new skills. The distinction matters because automation-only use treats AI as a replacement for effort. Learning use treats it as a way to expand what you're capable of. The workers gaining ground are the ones using AI to move into adjacent skills and harder problems — exactly the kind of unpredictable, judgment-heavy territory that remains durable.

The exposure charts don't capture any of this. A lawyer might score as highly exposed because AI can draft briefs and do research. But the actual work that makes a lawyer valuable — intuiting what the client might agree to, knowing when the case law doesn't quite apply, making strategic decisions under uncertainty — is deeply unpredictable and nowhere near automatable. The chart's color doesn't tell you that.

Is productivity changing?

There is one more piece of the picture worth watching. Brynjolfsson's analysis suggests U.S. productivity jumped roughly 2.7% in 2025 — nearly double the 1.4% annual average of the past decade. The economy added far fewer jobs than initially reported while continuing to grow, which implies each worker is producing more. If this holds, it would mark the beginning of what Brynjolfsson calls the transition from AI's investment phase to its harvest phase along a J-curve.

This is potentially very good news at the macro level. Productivity growth is the primary driver of rising living standards over time. Higher productivity makes it a whole lot easier to solve societal problems. But productivity gains don't distribute themselves evenly. An economy that produces more with fewer entry-level workers is an economy with a distribution problem, even if the aggregate numbers look healthy. The St. Louis Fed data showing higher productivity in high-adoption industries, combined with Brynjolfsson's findings on entry-level displacement, suggests this is already emerging.

What to ask yourself

If you're reading one of these exposure charts and feeling a spike of anxiety, here's where we suggest you put your attention.

What parts of my work involve genuine unpredictability? The messy, ambiguous, human parts — those are your most durable assets. If you've been treating them as the annoying overhead around your "real" work, reconsider. They may be the most important work now. In practical terms, spend more time building your negotiation skills to handle unpredictable humans.

Can I evaluate AI output in my domain? This is rapidly becoming a core professional skill. Knowing when a confident, well-structured answer is wrong requires deep domain knowledge. The irony is that the expertise you need to check AI is the same expertise people assume AI is replacing. Practically, this means using AI to extend your expertise (the top right of the cube), not to automate the "hard things."

Am I using AI to automate or to learn? Stanford's payroll data shows these produce opposite employment outcomes. If you're using AI to do your existing job faster, you're making yourself easier to replace. If you're using it to take on work you couldn't do before, you're compounding your value. In practice, build AI skills that support cognitive sovereignty, not cognitive surrender.

Am I building skills that compound, or skills that flatten? Structured knowledge tasks — the kind where there's a right answer and a known method — are where AI is strongest. Judgment, taste, the ability to operate in situations where the template doesn't apply — those compound over a career. AI can't replicate them because they depend on experience that can't be reduced to training data. In practice, focus on areas where you can build multi-domain expertise — places where the knowledge from one field changes how you think about another, and where getting better means integrating more, not just knowing more.

Where is the task composition of my role actually shifting? Forget the exposure chart. Look at your own work over the past year. Which tasks take less time than they used to? Which ones are you doing more of? That pattern tells you more about your specific trajectory than any aggregate study. In practice, this means the tasks that got faster are probably the ones AI will absorb first — and the ones that grew are probably your irreducible core. Invest there.

Am I learning to use AI well, or just learning to use it more? There's a difference. The tools are designed to increase usage. Using them well means knowing when they help and when they don't, and that requires exactly the kind of judgment the tools themselves can't provide. In practice, this means treating AI as a power tool, not a crutch — if you can't evaluate the output without the tool, you don't understand the work well enough to direct it.

The aggregate data says we are not in a jobs crisis. The task-level reality says the ground is shifting under a lot of professional work in ways the data can't yet measure. Both of these things are true.

The useful response is to pay less attention to the exposure charts and more attention to what is actually changing in your own work — and to make sure the parts of you that AI can't replicate are the parts you're investing in.
