The SaaS Apocalypse Is a Category Error
Why Wall Street's panic over AI agents misunderstands what makes enterprise software valuable—and what makes AI useful
This week, traders coined a new term: "SaaSpocalypse." Following Anthropic's release of new plugins for Claude Cowork, software stocks shed hundreds of billions of dollars in market value in a matter of days. The entire software aristocracy—Microsoft, Oracle, Adobe, Salesforce, ServiceNow, DocuSign, Workday, SAP—went into free fall. Analysts called it "get me out" style selling.
The thesis driving this panic is straightforward: AI agents can now perform tasks that previously required enterprise software. Why pay for Salesforce if an agent can manage your customer relationships? Why subscribe to legal software if Claude can review contracts? The logic seems compelling, and the market is pricing in what feels like an extinction event.
To be clear: it's certainly possible that AI agents could advance to a point where they dynamically assemble all the context required across every system in an enterprise—databases, email, communication platforms, financial systems—and act on any request at any moment. That future may arrive someday.
But for now, humans and AI alike rely on the relational information captured in the very software systems investors think will be left behind. This context allows AI agents to do their jobs more easily and more effectively, without rebuilding software from scratch for each request. The question isn't whether agents are powerful. It's whether the path forward is replacement or collaboration.
I think the current analysis commits a fundamental category error—one that reveals a deeper misunderstanding about both what enterprise software actually provides and what makes AI agents useful.
The Yard Sale That Doesn't Make Sense
Let's take the argument seriously. The market seems to expect that enterprises will conduct a yard sale of their IT infrastructure, walking away from Salesforce and Workday and the rest in favor of AI agents that can spin up whatever capability is needed on demand.
But consider what you're actually walking away from.
A CRM system like Salesforce isn't just software. It's the accumulated memory of your customer relationships—years of interactions, preferences, tendencies, and history, organized and curated over time. It's the record of what worked and what didn't, which deals closed and why, which customers churned and what preceded their departure. This isn't data sitting in a database. It's the encoded intelligence of how your organization relates to the people it serves.
The same is true across enterprise systems. HR platforms contain the dynamics of how your organization develops and retains talent. Financial systems encode the rhythms of your business operations. Project management tools hold the institutional memory of how work actually gets done, not the org chart version but the real one. And all of these systems govern the access, privacy, and security of a company's information across a complex web of employees, customers, and partners.
When you imagine "replacing" these systems with AI agents, ask yourself: where does the agent get its context? Where does it learn what matters to your organization, what's been tried before, what the constraints and preferences are?
The answer, of course, is that the agent needs to access the information in the very systems the market expects it to replace.
The Context Problem
Here's what I think the market is missing: the value of AI systems is substantially determined by the context they can access. An agent helping you figure out what to do today needs to know who you are, what you're working on, what's worked before, and what others in your organization have learned. Without that context, it's just a capable tool operating in a vacuum.
Agents can hold only so much information at once. The coherence that allows for real assistance—understanding your situation not just in this moment but across time—depends on organized systems that maintain and structure that knowledge.
Think about what continuity across time actually requires: knowing not just the current state of your customer relationships, but their trajectory. Understanding not just this quarter's numbers, but the patterns that explain them. Recognizing not just what you're doing, but why, and how it connects to what your colleagues are doing.
This is precisely what enterprise systems provide. They're not just data stores—they're the substrate for organizational coherence. They allow patterns to be recognized, learning to accumulate, and knowledge to be shared across an organization.
An agent without access to these systems is like a brilliant new hire on their first day: capable, perhaps even impressive, but lacking the institutional knowledge that makes real contribution possible. You wouldn't expect a new employee to replace your entire organization's stored wisdom. Why would you expect an agent to?
What Conversational AI Actually Solves
If the replacement narrative doesn't hold, what explains the excitement about AI agents? Why do people—myself included—find conversational AI so compelling for enterprise work?
The answer isn't replacing software. It's making software more usable.
Here's a recent example from my own work. I've been managing a Google Ads campaign, and something isn't performing as expected. I receive an automated email telling me our ads are "limited" and offering several possible reasons why. The email suggests I read through various help documents and then figure out what to do.
This is a familiar experience for anyone who works with enterprise systems. The software is powerful, but the power is buried under layers of menus, settings, and documentation. Figuring out how to accomplish something specific—pinning my brand name at the beginning of an ad, say—requires navigating an interface designed for every possible use case rather than my particular one.
I tried Google's beta marketing assistant, and something clicked. I could describe what I was trying to accomplish, ask questions about what wasn't working, and have the system guide me to the right settings. It could even take over the browser window to demonstrate. The experience was slow and imperfect—it's beta software—but it pointed toward something important.
The conversational interface didn't replace Google Ads. It made Google Ads accessible. It translated between my intent and the system's capabilities.
This is the real value proposition of AI agents in enterprise contexts: not replacement, but relationship. A way of engaging with complex systems that adapts to your understanding rather than demanding you adapt to theirs.
Layering, Not Replacing
The conversational interface offers something new: a dynamic, adaptive layer that sits between human intent and system capability. You describe what you're trying to accomplish; the system interprets your intent and guides you through or executes the task. The underlying infrastructure—the data, the processes, the built-up organizational knowledge—remains intact. What changes is how you access it.
This suggests a very different investment thesis than the one currently driving the market. The relationship between AI agents and enterprise software isn't competition—it's complementarity. Agents are an interface layer, not a replacement layer.
This doesn't mean every software company will thrive. The transition will create winners and losers based on who adapts successfully. Companies that fail to integrate agentic capabilities, that treat AI as a threat rather than an opportunity, that don't evolve their interfaces and architectures—they may well struggle.
But the broad thesis that AI agents make enterprise software obsolete misunderstands what both are for. Enterprise software provides the organized context that makes work coherent over time. AI agents provide the adaptive interface that makes that context accessible to human intent.
They need each other.
The market is pricing in a war. What's actually emerging is a collaboration. And the companies being sold off at liquidation prices may turn out to be the essential infrastructure for everything that comes next.
Why This Matters for How We Design
The panic selling reflects a particular vision of AI: systems that replace human work, substituting machine capability for human effort. In this view, AI and humans are in competition: whatever agents can do, humans (and the software humans use) become unnecessary for. The yard sale vision imagines we can walk away from accumulated knowledge, embedded dynamics, and human systems, and that agents operating from a blank context can somehow replace all of it.
Return to my Google Ads example, because there's a second part to the story. The system offers automated recommendations—a list of suggested optimizations with a button that says, essentially, "click here and we'll take care of it all." But here's the problem: as a nonprofit using Google Ad Grants, we're required to maintain a click-through rate above 5%. This is Google's own rule. Yet the automated recommendations don't account for it. The system is optimizing for one set of objectives while missing a constraint that could disqualify us entirely.
This is the normal state of software systems. They don't have perfect knowledge. They aren't omniscient. Context is always incomplete, and important constraints can fall through the gaps. We're used to this. It's why we can't simply defer to agents and assume they'll know what to do. Someone has to remain in the decision-making seat, applying judgment to catch what the system misses or was never told.
That vision isn't how we think about AI at Artificiality. Our design work starts from a different premise: that AI should work for human minds rather than in place of them. We want systems that extend our capabilities, help us navigate complexity, and support the context and continuity that meaningful work requires, while keeping humans where they need to be.
The category error isn't just an investment mistake. It's a design philosophy we're betting against.