If you've been watching the video series, here are five short pieces that lay out the economics. I should say clearly that I'm not an economist. I've spent a decade studying how people work with AI, and along the way I've had to teach myself the economics underneath it—the most helpful are here.
Three economists at the University of Toronto—Ajay Agrawal, Joshua Gans, and Avi Goldfarb—have an easy way to understand what AI does to the economy. AI is a prediction machine. It predicts the next word, the next pixel, the most likely diagnosis, and it turns more and more problems into prediction problems. It makes prediction cheap.
When something gets cheap, the things you need alongside it get more valuable. When coffee gets cheap, more people drink coffee, and demand for cream and sugar rises. The complements get more expensive. Prediction got cheap. The complements are action and judgment—how much more you can do now that you have more predictions, what to do with them, when to override one, how to weigh clean predictions against context the machine doesn't have.
This logic has held across every automation wave for two centuries, including the deep learning boom of the mid-2010s. Five ideas explain why.
William Baumol showed that judgment can't be made cheaper. A string quartet still takes four people and forty minutes to play Beethoven. When everything around it gets more productive, the thing that resists productivity gains gets more expensive. A haircut takes your stylist the same time it did fifty years ago but costs ten times more, because stylists' wages have to rise as wages everywhere else go up. It's one reason why housing, healthcare, and education have gotten more expensive (although obviously not the only one). These sectors have resisted productivity gains.
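If you want the mechanism in arithmetic, here is a toy version. The numbers are illustrative, mine rather than Baumol's:

```python
# Baumol's cost disease, toy version. Illustrative assumption: wages in
# every sector track productivity in the sector that is improving.

years = 50
growth = 0.02                          # annual productivity growth, improving sector (assumed)

wage = (1 + growth) ** years           # wages rise with the improving sector
widget_productivity = (1 + growth) ** years

haircut_price = wage / 1.0             # haircut productivity is flat: one stylist-hour each
widget_price = wage / widget_productivity  # wage gains offset by productivity gains

print(f"after {years} years: haircut ~{haircut_price:.1f}x, widget {widget_price:.1f}x")
# The stagnant service nearly triples in price; the productive good doesn't move.
```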
Chad Jones at Stanford showed that a system is only as productive as its least productive essential part. Automate everything about a flight and the bottleneck moves to the ground crew turning the plane around. Even infinite AI productivity in software—literally infinite—adds about 2 percent to GDP, because software is about 2 percent of the economy. AI will change that number. Software's share of the economy is growing, and its leverage grows with it. But bottlenecks matter, and we can be sure they haven't gone away. In fact, there is reason to suspect that AI will surface new bottlenecks faster than it can remove them. What removes a bottleneck? Humans who understand which ones matter most to the problem at hand and are empowered to make choices.
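A back-of-the-envelope version of that bound, using the 2 percent share from above. The functional form is my simplification, not Jones's model:

```python
# If a sector with cost share s gets a productivity multiplier m, and the
# rest of the economy is an essential complement, the total cost saving is
# bounded by s no matter how large m gets.

def gdp_gain(share: float, multiplier: float) -> float:
    """Cost saved when the sector's cost falls from `share` to `share / multiplier`."""
    return share * (1 - 1 / multiplier)

software_share = 0.02  # software's share of the economy, per the text

for m in (2, 10, 1_000, 1_000_000):
    print(f"{m:>9,}x software productivity -> GDP gain ~{gdp_gain(software_share, m):.2%}")
# Gains approach 2% and never exceed it: the bottleneck moves elsewhere.
```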
David Autor—whose work on labor and automation is some of the most important in the field—showed that the same technology produces opposite outcomes depending on what it automates. GPS took the expert part of taxi driving—the knowledge, the thing that took years to learn. Wages collapsed. Computers took the routine part of accounting—the bookkeeping, the data entry. Wages rose. Whether AI takes your judgment or your drudge work determines which direction you're heading.
And when prediction gets cheap, two things happen. You use more of it—that's the Jevons paradox. When houses got more energy-efficient, people heated them more. When prediction gets cheaper, people run more predictions. Total demand goes up, not down.
Jevons has a limit, though. When the output is commodity—interchangeable, generic, good enough—demand doesn't expand. It saturates. There are only so many social media graphics anyone needs. David Autor has been flagging this since ChatGPT launched. Illustrators, translators, copywriters, medical transcriptionists—when technology floods a market with output that used to require skill, and the market didn't actually want more of it, wages collapse. His concern is that these people will be unlikely to find work that pays as well, because the expertise they spent years building is the thing that got automated. That's happening now across creative fields as the cost of producing a passable artifact drops toward zero.
There are only so many product descriptions anyone needs. Just like there are only so many stock images people will look at. AI made those things nearly free and demand hit a ceiling. That's what's happening to some jobs, like language translators, right now. The work that was always commodity got priced like commodity. The work that required judgment—the eye, the taste, the read on what a specific client actually needs—still follows Jevons. More demand, not less. The split between those two is the question you need to answer about your own work.
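Here is the split in toy form. Two assumed demand curves, not data: one keeps expanding as price falls, one hits a ceiling:

```python
# Illustrative contrast between Jevons expansion and saturation. With
# elastic demand, total spending rises as price falls; with a saturating
# market, quantity hits a ceiling and spending collapses with price.

def elastic_demand(price, elasticity=1.5):
    """Constant-elasticity demand: quantity = price^(-elasticity)."""
    return price ** -elasticity

def saturating_demand(price, ceiling=100.0):
    """Demand that cannot exceed a fixed ceiling however cheap output gets."""
    return min(ceiling, 10.0 / price)

for price in (1.0, 0.1, 0.01):
    q1, q2 = elastic_demand(price), saturating_demand(price)
    print(f"price {price:>5}: elastic spend = {price * q1:8.1f}, "
          f"saturating spend = {price * q2:8.2f}")
# Elastic spend grows as price falls (Jevons); saturating spend shrinks
# once the ceiling binds (commodity output).
```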
But cheap prediction also does something else. You attempt things you never would have tried. Problems too local, too specific, too costly to justify solving before. Nobody was going to build software for a food bank coordinator's donation-mismatch problem. The market was too small. That changes when the cost drops far enough. And this part of the AI story is underdiscussed. Right now, the entire conversation is about what AI takes away. The expansion of what becomes possible—new problems, new work, new value that didn't exist before—barely gets mentioned.
Five ideas relying on one mechanism—cream and sugar. The complement gets more valuable. This has held for two centuries. But it might not feel that way right now, and the question is why the world doesn't look like what the theory predicts. If judgment is getting more valuable, why are companies cutting the people who have it? Why is the conversation all about efficiency and not about innovation? About replacement, not enhancement? Is something working against the economics?
🟦 Intermediate: The System Is Fighting the Theory
Prediction got cheap, and action and judgment are the complements. The theory says they get more valuable but, right now, in some important places, the system isn't rewarding them.
The US tax code charges companies more for employing a human than for buying software. Payroll taxes, benefits, insurance—weighed against software depreciation. Every deployment decision tilts toward prediction and away from judgment. Daron Acemoglu, the Nobel laureate, has been making this point for years. The structure systematically favors replacement over augmentation.
The quarterly earnings cycle makes it worse. When a CEO cuts fifty jobs and replaces them with AI, the win is clear: cost savings visible on the earnings call, plus the halo of being “AI-forward.” Right now, that strategy simply outcompetes the alternative, which may be to invest in tools that make fifty people 30 percent better at the judgment calls and actions that drive future growth. Judgment is hard to measure and takes quarters to show results. The measurable thing wins and the valuable thing loses.
The major AI companies have invested billions in data centers and that money needs a return. The math only works at massive labor replacement. This is literally the business case. When a CEO predicts half the workforce will be automated, that's a pitch to investors.
Entry-level hiring fell 35 percent between January 2023 and June 2025, according to Molly Kinder at Brookings. Those roles look like cheap labor, but they're where judgment gets built. A junior lawyer isn't producing legal research so much as learning what matters in a case and what doesn't. Cut the role and in five years you have AI operators and no experts. Farmers have a phrase for this: eating your seed corn.
Whether AI creates or destroys jobs in a given field depends on a basic economic property: whether demand for that work is elastic or inelastic. James Bessen has tracked this across multiple waves of automation. When ATMs reduced the cost of running a bank branch, banks opened more branches, and teller employment kept growing for twenty years. Radiology is following the same path. AI has outperformed radiologists on diagnostic benchmarks since the mid-2010s, yet vacancy rates are at all-time highs and compensation is up almost 50 percent since 2015. Only about a third of a radiologist's time goes to reading images directly. The rest is triage, communication, clinical judgment. Cheaper image reading expanded total demand for imaging rather than cutting headcount.
Routine administrative work runs the opposite direction. A company needs roughly the same amount of ticket processing and document handling regardless of how cheap it gets. Making it faster doesn't make anyone want more of it, so automation cuts headcount directly. UK data from the Centre for British Progress confirms the split: since ChatGPT launched, IT analysts grew 38 percent while call centre workers contracted 19 percent. Same AI exposure, opposite outcomes.
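A toy model makes the split concrete. The elasticities and the productivity gain are assumptions chosen for illustration, not estimates from the UK data:

```python
# Headcount = total demand / output per worker. Automation cuts unit cost;
# whether jobs grow or shrink depends on how demand responds.

def headcount(base_demand, productivity_gain, demand_elasticity):
    """Workers needed after automation multiplies output per worker."""
    cost_ratio = 1 / (1 + productivity_gain)             # cheaper per unit
    demand = base_demand * cost_ratio ** -demand_elasticity
    return demand / (1 + productivity_gain)

base = 100  # workers before automation (illustrative)
# Elastic demand (radiology-like): cheaper output expands the market.
print(f"elastic   (e=2.0): {headcount(base, 0.5, 2.0):.0f} workers")
# Inelastic demand (ticket-processing-like): the firm needs what it needs.
print(f"inelastic (e=0.2): {headcount(base, 0.5, 0.2):.0f} workers")
# Same 50% productivity gain: headcount grows to ~150 in one case and
# shrinks to ~72 in the other.
```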
The most granular evidence comes from US Federal Reserve research showing coder employment growing about 3 percentage points per year slower than the industries employing them. Coding jobs are reorganizing. The composition of software work is moving toward design, architecture, and security, and away from direct implementation. Each developer is producing more and firms are hiring fewer per unit of output. Both countries saw R&D tax changes over the same period that complicate clean attribution, but the pattern is consistent and confirms that work is already changing inside occupations, not just between them.
Other recent work helps explain another, more nuanced reason why some jobs survive this pressure: cross-task spillover. The variable isn't how many tasks AI can perform but how costly it is to separate judgment from execution within a single role. When a field scientist's analysis depends on having been in the field, and her fieldwork depends on having done the analysis, those tasks are entangled. The job holds because pulling them apart destroys the thing that made them valuable. When tasks are separable, the job dissolves.
The economics from Green is correct, but the labor market data and the lived experience don't always match. The macro numbers show remarkable stability—employment levels across AI-exposed occupations have barely moved since ChatGPT launched. But people keep watching friends get laid off, and the jobs don't seem to come back.
There are explanations that have nothing to do with AI. Interest rates rose sharply and companies that had over-hired during the pandemic started correcting. Global uncertainty—wars, trade policy, supply chain disruption—makes every CFO cautious about headcount.
Still, the layoff announcements increasingly mention AI as the reason, and the roles that disappear are disproportionately the ones AI can approximate. We just don't yet know how much of what we're seeing is cyclical and how much is structural.
The economics says judgment is the valuable complement. The system is structured to ignore it. The question is why. The answer is structural—something about the nature of judgment that resists the measurement tools the system depends on.
◆ Expert: The Valuable Thing Is Invisible
Prediction operates on what has been recorded while judgment operates on what hasn't.
Michael Polanyi called it tacit knowledge—the things you know but cannot articulate. If knowledge was never articulated, it was never written down. If it was never written down, it doesn't exist in any training dataset. This isn't a capability gap that closes with better models. The data isn't there because it was never there.
We know more than we can tell. That was Polanyi's insight. Ask any experienced nurse and they will tell you how they can know a patient is declining before the vitals change. Thirty years built that and they can't explain the mechanism. Or a field scientist whose analytical work improves because she was physically present across seasons, and whose fieldwork improves because she has done the analysis. Crossover tasks. Separate them and the outputs look similar for a while. But the knowledge that made them accurate was built across both tasks over years, and when you pull the tasks apart that knowledge stops accumulating. The decline is just slow enough that nobody measures it until it's gone.
Organizations measure what they can count. But Goodhart's Law says when a measure becomes a target it stops being a good measure. And here's the conundrum—you need to measure work to manage it. But the act of measuring pushes attention toward the measurable parts and away from the parts that matter most. You can't fix it by measuring harder because judgment, timing, trust—these resist measurement by nature. So the better your dashboards get, the more invisible the important work becomes. AI inherits this problem directly because it learns from the same recorded data. The blind spot is baked in at both levels.
The hardest judgment is counterfactual. The crisis that didn't happen or the client who didn't leave. A strategy pivot made before the data caught up is one of the most valuable things that happens inside a company. And nobody can prove it mattered because the crisis it prevented never arrived. Prevention requires reasoning about what would have happened otherwise. AI learns from what occurred while prevention depends on imagining what didn't.
There is a further distinction. Prediction searches within known territory. Judgment can expand it. Stuart Kauffman's concept of the adjacent possible describes the set of genuinely new things that could exist given what exists right now. Reaching it requires someone to see a connection that hasn't been made, drop a constraint everyone assumed was fixed, repurpose something for a use nobody intended. That is a different cognitive act than prediction, regardless of how sophisticated the prediction becomes.
Full automation of jobs is not inevitable. Acemoglu, Autor, and Johnson published a paper in early 2026 that lays this out. Technology can change work in five ways: it can augment labor, augment capital, automate tasks, level expertise across workers, or create entirely new tasks that didn't exist before. Only the last of these is unambiguously good for workers, because it generates demand for new human expertise rather than commodifying existing expertise. The current AI deployment conversation is overwhelmingly about one of the five: automating tasks. The investment thesis requires it and the quarterly earnings cycle rewards it. So the other four get treated as afterthoughts, when they're where most of the long-term value for workers actually is.
Prediction got cheap and judgment is the complement but the system can't see it, let alone value it. We know that human judgment resists the decomposition that measurement requires. The economics says judgment matters more but the system is fighting it. The reason is structural—the valuable thing resists measurement. What remains is the hardest question. The theory has held for two centuries. "Held" means "eventually." But eventually can be a long time.
◆◆ Expert Only: The Theory Is Right and People Can Still Get Hurt
Every example of automation eventually working out is true. The industrial revolution created more jobs than it destroyed. Electricity rebuilt the factory. And the internet built entire industries nobody predicted.
Eventually.
"Eventually" for the industrial revolution included fifty years where British weavers starved. It included children in factories. It included generations whose lives were consumed in the gap between the old economy collapsing and the new one arriving.
There's a deep asymmetry here. Prediction gets cheap at software speed but judgment gets built at human speed. That mismatch is the core problem. Job displacement happens in quarters—a company can eliminate a department in a single earnings cycle—while judgment develops over careers, through years of accumulated context, mistakes, and mentorship. The institutions that would bridge the gap with retraining infrastructure, safety nets, and tax reform move at generational speed.
The gains from cheap prediction are flowing to the people who own the prediction technology. Productivity has risen steadily for forty years, but wages for most workers barely moved. The National Academies estimates that 50 to 70 percent of increased wage inequality has been driven by automation technologies. AI is accelerating a trend that was already running. By mid-2025, the top 10 percent of earners were driving 49 percent of all consumer spending—the highest level since tracking began in 1989.
Consumer spending is 70 percent of GDP, so roughly half of consumer demand—about a third of the whole economy—depends on one tenth of the population continuing to feel confident. If stock prices drop or home values dip, that tenth pulls back, and the other 90 percent can't make up the difference because their spending has barely kept pace with inflation. Economists call this a K-shaped economy. One group climbs, the other flatlines. It's structurally fragile in a way that has nothing to do with whether anyone is working hard enough.
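The arithmetic, using the figures above (the pullback scenario is illustrative, not a forecast):

```python
# The arithmetic behind the fragility claim.

consumer_share_of_gdp = 0.70   # consumer spending as a share of GDP
top_decile_share = 0.49        # top 10% share of consumer spending

gdp_share = consumer_share_of_gdp * top_decile_share
print(f"Top decile drives ~{gdp_share:.0%} of GDP")  # ~34%

# Illustrative shock (assumed): the top decile trims spending by 10%
# and nobody else fills the gap.
pullback = 0.10
print(f"Direct hit to GDP: ~{gdp_share * pullback:.1%}")  # ~3.4%
```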
If the cost of judgment-intensive services rises but the wages of the people providing judgment don't, someone is capturing the difference. Legal services illustrate this. AI made document production cheaper while partner rates at major law firms went up again this year. The output got cheaper. The outcome, actually resolving your legal problem, didn't.
The economy is a loop. People buying from each other. The plumber fixes the tutor's sink. The tutor teaches the plumber's kid. Substitute AI for enough of those exchanges and the loop changes structurally. You can build the most productive company in history, but if large parts of the population fall out of the income loop, you've also shrunk your market.
None of this is inevitable. We've redesigned these structures before — after industrialization, after the Depression, when we built public education and antitrust law. The economics from Green is correct. Judgment is the complement, the bottleneck, the thing that gets more expensive. The system from Blue is working against it. The reason from Black is that judgment resists the measurement the system depends on.
The pattern has held for two centuries. Whether it holds fast enough this time is the question of our generation.
🏔️ Après Ski: The Flywheel
The pitch right now is extraction. Take the existing work, do it cheaper, and keep the margin. This math has an obvious ceiling. Commodity work saturates and eventually demand collapses.
There is a pattern in the Valley right now that has a historical name: extraction during transition. The railroad barons did it when they built essential infrastructure, captured the land grants, set the prices, and let the consequences fall on the towns that depended on them. The AI version is faster. Investors and founders who positioned early are watching their stakes multiply by hundreds while publicly predicting that millions of jobs will disappear because the disappearance is the mechanism by which the stakes multiply.
This mindset is bleeding into corporate America more broadly, where CEOs who have no equity in AI companies are still following this playbook of cutting headcount, citing AI on the earnings call, and then letting the stock respond. Nobody is asking whether the cut was justified by the economics or by the narrative, because right now the market rewards both identically and there is no accountability structure that distinguishes between them. This is the part that connects to everything else in this series. Accountability is a human function. Machines don't bear consequences for decisions.
Two things happen when prediction gets cheap. Jevons tells you the first one. People use more of it. Total demand for prediction goes up.
But remember the true promise of AI: cheaper problem solving. It reaches problems that were never economically viable before—too local, too specific, too small to justify the cost. And humans do something machines don't. They see the new problems. They imagine work that doesn't exist yet, because that's what creative intelligence does. David Autor has been making this point for years. New technology creates new work. Social media manager wasn't a job title in 2005. UX researcher wasn't a job title in 1998. These roles didn't replace old work. They did work that had no precedent, because the technology that made them possible hadn't existed yet.
Every new problem solved generates new knowledge, which generates new problems worth solving. Human creativity plus cheap prediction spinning out into an ever-expanding body of work. The roles AI enables won't have names yet because that's how expansion works. The categories haven't been invented.
So the labor market isn't a pie to carve up. It's a flywheel. Instead of assuming all human work goes away, why not imagine how cheap prediction could grow labor to ten or a hundred times its current size?
Remember the slide showing how Silicon Valley sees a trillion dollars in labor and pitches AI as the way to capture it. I think they've conditioned us to look at the number and not ask the obvious question: What if cheap prediction doesn't shrink that trillion to zero but grows it to ten trillion? Or a hundred trillion? If AI is that good, why not? And, I will add, a hundred trillion worth of good jobs. Skilled work, meaningful work, fulfilling work, work grounded in care for each other. Judgment-intensive roles that build knowledge over a career. The flywheel doesn't eat the labor market, and software doesn't eat itself.
Nobody is going to believe in an alternative they can't see. People believe in alternatives someone builds and puts in front of them. The reason the extraction pitch dominates right now is that nobody has built the expansion pitch into something you can actually use. Something an organization can adopt and a person can practice. What we need are organizations (and therefore products and services) that make the economics real in someone's actual working life rather than in a pitch deck.
The question of who is responsible for how AI gets deployed will produce democratic pressure. It already is. When enough people experience displacement—or watch it happen to people they know—that becomes a political force. It is a predictable pattern: trade policy after the China shock, labor law after industrialization, antitrust after the Gilded Age. Concentrated economic disruption produces institutional response. But will the response be well designed or reactive? That depends entirely on the quality of the conversation happening now.
You might not see the movement yet but you're in it. William Sloane Coffin said hope criticizes what is, while hopelessness rationalizes it. Hope resists. Hopelessness adapts. Every time someone in this audience takes one of these ideas into their own workplace and makes a different decision—hires the junior person, redesigns the role around judgment, pushes back on the extraction pitch in a meeting—that's hope resisting. These conversations are getting better because the people having them are getting more able to move beyond the binary. And better conversations are how alternative visions become real. Someone builds something, then someone else sees it working. The next person believes it's possible because they watched someone do it.
Follow. Subscribe. Share this with the person in your organization who needs to hear it. And we appreciate support—we can't do this work without small donations.
Helen Edwards is a Co-Founder of Artificiality. She previously co-founded Intelligentsia.ai (acquired by Atlantic Media) and worked at Meridian Energy, Pacific Gas & Electric, Quartz, and Transpower.
The theme of the Artificiality Summit 2026 is Unknowing. We start with five speakers—David Wolpert, Caleb Scharf, Wakanyi Macharia-Hoffman, Gašper Beguš, and Nina Beguš—whose work converges on a single finding: the uncertainty surrounding AI is permanent, not a phase.
Jack Dorsey laid off 4,000 people and called it an AI strategy. But when AI creates time, the interesting question isn't whether to cut—it's what you choose to do with the time you've freed. Dorsey chose to give it away.