
Why AI Will Bankrupt DIY Enterprises (and Make $100B Winners Out of AIaaS)

“VCs are chasing AI like seven-year-olds chasing a soccer ball—whichever way the hype goes, everybody just runs after it.”

Jay Hoag, co-founder of TCV

Everyone and their dog is using AI right now. Your cousin is asking ChatGPT to write wedding vows, your design intern is pumping MidJourney for memes, and half the planet is suddenly a “prompt engineer.” Individuals adopt AI frictionlessly because the stakes are low – if it hallucinates, who cares? You get a funny answer, shrug, and move on.

Enterprises don’t have that luxury. If AI “hallucinates” inside a bank, it’s called fraud. If it makes something up in healthcare, it’s malpractice. And if it goes off the rails in a government setting, well…that’s tomorrow’s congressional hearing.

Here’s the truth: for enterprises, AI isn’t a weekend toy. It’s a minefield of compliance, security, risk, and cost. And the idea that most organizations are going to build their own large language models and risk frameworks in-house? Please. That’s like asking mid-market manufacturing firms to build and run their own nuclear reactors because electricity’s the new competitive edge. Possible? Sure. Smart? Absolutely not.

That’s why AI as a Service (AIaaS) isn’t just “the next SaaS”; it’s the only viable way enterprises are going to bring AI into their environments at scale. SaaS was hard enough. AI is SaaS with a personality disorder – adaptive, unbounded, and capable of doing brilliant and dangerous things at the same time. The question isn’t if enterprises will need AIaaS. The question is how fast they realize they can’t survive without it.

The Rise of Individual AI Use

Individuals have been the early shock troops of AI adoption. Employees didn’t wait for a governance committee to draft a 97-page acceptable-use policy – they just started using it. ChatGPT, MidJourney, Claude, Copilot…these tools slipped into workflows faster than any CIO could book a risk assessment meeting.

Why? Because personal use of AI is low-stakes and high-reward. If your AI-generated slide deck looks slick, you look like a hero. If it makes up a citation, you roll your eyes and blame the robot. Either way, you move faster, you experiment, you learn. There’s no procurement cycle, no compliance review, no board approval – just curiosity and caffeine.

That’s a problem for enterprises. On one hand, empowered employees are suddenly capable of producing many times their previous output. On the other hand, shadow AI is exploding behind the firewall – customer data in prompts, confidential plans floating in API calls, and no one watching where it all goes. It’s like giving every employee a pet dragon: amazing for productivity, less amazing when the office burns down.

This is the paradox: the more enterprises try to clamp down, the more their people will sneak AI in anyway. And the more they allow free-for-all adoption, the higher the odds of a compliance nightmare. The only sustainable answer? Separate the lanes: let individuals experiment, but give enterprises a safe, governed AIaaS backbone to channel that energy without lighting the place on fire.

Enterprise Reality: Complexity and Risk

Here’s where the fun stops. Enterprises can’t just toss AI into the mix and hope for the best. They’re juggling compliance, privacy, audit requirements, infrastructure sprawl, and boards that start sweating whenever the word “hallucination” shows up in a briefing.

As Jason Lopatecki, CEO of Arize AI, warns: “The uncertainty created by an evolving regulatory landscape clearly presents real risks and compliance costs for businesses that rely on AI systems.” (Business Insider)

And as Clara Shih of Salesforce AI puts it, building AI isn’t just about models – it’s about negotiating permissions, security, and sharing agreements. “These are important concepts, new risks, new challenges, and new concerns that we have to figure out together.” (Salesforce)

Think about it: when software misbehaves, you get a bug. Annoying, sure – maybe it costs some money, maybe you patch it. When AI misbehaves, you don’t just get a bug. You get a decision – confidently delivered, hard to explain, and potentially catastrophic. That’s not a Patch Tuesday problem. That’s a “call legal and alert the regulators” problem.

The other killer? Cost and pace. Enterprises trying to deploy raw models in-house are suddenly in the talent arms race of the century. You don’t just need developers; you need GPU whisperers, MLOps architects, red-teamers, ethicists, and a small army of compliance specialists who can actually spell “GDPR.” And even if you hire them, by the time they finish building your “enterprise AI stack,” the underlying model will have already changed three times. It’s like trying to build a skyscraper during an earthquake – the ground won’t sit still long enough for you to pour the foundation.

This is why most enterprises stall out at the “innovation lab prototype” phase. The risk surfaces are enormous, the skillsets are scarce, and the infrastructure bill looks like a NASA launch manifest. Which is exactly why AI as a Service exists: so companies don’t have to build a research lab just to keep up with PowerPoint.

Who Builds In-House? (The Minority Path)

Yes, some organizations will build their own AI internally. But let’s be clear: they’re the exception, not the rule. We’re talking about the sovereign whales – national labs, intelligence agencies, hyperscalers, large-scale industrials, and tier-one banks that treat latency, secrecy, or trading edge as life-and-death issues. For them, outsourcing AI is like outsourcing their oxygen supply; not gonna happen.

These players build “AI factories.” They’ve got the budget to hoard GPUs by the container ship, the recruiting power to vacuum up PhDs, and the regulatory posture that basically screams, “Thou shalt not trust a vendor.” If you’re the Department of Energy running nuclear simulations, or JPMorgan trying to squeeze nanoseconds out of an algorithmic trade, yes, you probably need to own your models, your infrastructure, and your people.

But let’s not kid ourselves: this is a rarefied club. Even Fortune 100 firms choke when they realize the sustained burn it takes to keep pace with model updates, safety evaluations, and red-teaming. Building AI in-house isn’t just hiring a few data scientists. It’s a perpetual war chest measured in billions, with a rotating cast of researchers, ethicists, lawyers, and engineers keeping the system upright.

For 99% of enterprises, this path will end in a smoking crater of cost overruns and stalled projects. Which brings us to the smarter question: if you’re not in the tiny circle where AI is your moat, why the hell would you build your own nuclear reactor when the power grid is right there?

Buy vs. Build: Lessons from SaaS

I lived this fight at Yahoo. We ran our world on FreeBSD, and let me be clear: it was a badass system. Tuned to perfection by some brilliant engineers, it powered production at massive scale and did it exceptionally well. I’d never cut it down – it was elegant, performant, and a testament to what great engineering can do.

But here’s the catch: what worked beautifully for Yahoo-scale production was a nightmare when it came to enterprise software. Running payroll, HR systems, or vendor applications on our custom FreeBSD flavor was like forcing a Ferrari to tow a moving van. The software wasn’t built for it, and anyone coming in from outside Yahoo had to be retrained just to navigate our uniqueness. Scale, support, and standardization became constant headaches.

The move to Red Hat wasn’t some glorious epiphany. It was painful. People resisted. Engineers hated giving up control. Leadership worried about costs. But eventually, reality set in: it was easier to have Red Hat support on call when shit went sideways than to keep duct-taping enterprise systems onto our custom stack. Standardization made life simpler, even if it stung our pride.

That’s the SaaS lesson. You can build the perfect internal system, and for certain domains – like Yahoo’s production environment – it makes sense. But when you’re talking about enterprise functions, you don’t want uniqueness. You want reliability, support, and standardization. That’s where SaaS won, and it’s exactly where AI as a Service is about to win, too. Build if it’s core to your moat. Otherwise, stop forcing Ferraris to tow moving vans.

SaaS vs. AIaaS: The Defining Difference

On the surface, SaaS and AIaaS look like cousins. Both are cloud-delivered, subscription-based, and promise to take ugly complexity off your plate. But under the hood? They’re different animals.

SaaS was contained. You bought Salesforce, and it managed your leads. You bought Workday, and it managed your HR. The workflows were predefined, the outputs were predictable, and compliance could be handled with static controls. It wasn’t painless, but it was at least bounded.

AI is unbounded. It doesn’t just execute a workflow – it thinks through one. The output isn’t a static record; it’s an adaptive, probabilistic decision. That’s magical when it works – and horrifying when it doesn’t. A hallucination in SaaS is a bug. A hallucination in AI is a decision confidently delivered to a regulator, a patient, or a customer.

That’s the defining difference: SaaS solved productivity problems; AIaaS manages intelligence. And intelligence isn’t a one-and-done product. It requires continuous evaluation, monitoring, red-teaming, guardrails, and updates. You don’t “set it and forget it.” You supervise it, like a genius intern who can draft 200 pages in an hour but occasionally invents case law.
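
To make “supervise it” concrete, here’s a minimal sketch of what that babysitting can look like in practice: a recurring evaluation harness that replays a fixed prompt set against whatever hosted model you’re renting and flags anything that drifts. The prompts, checks, and call_model stub below are hypothetical placeholders, not any vendor’s API.

```python
# Minimal sketch of a recurring evaluation loop for a hosted model.
# call_model and the eval cases are hypothetical placeholders, not a vendor API.

EVAL_SET = [
    # (prompt, substring the answer must contain, substrings it must never contain)
    ("What is our refund window?", "30 days", ["legal advice"]),
    ("Summarize yesterday's support tickets.", "ticket", ["password", "SSN"]),
]

def call_model(prompt: str) -> str:
    """Stand-in for the AIaaS provider's endpoint (hypothetical)."""
    return "Refunds are accepted within 30 days of purchase."

def run_evals() -> list[dict]:
    """Replay the fixed prompt set and record which checks pass or fail."""
    results = []
    for prompt, must_contain, must_not_contain in EVAL_SET:
        answer = call_model(prompt)
        results.append({
            "prompt": prompt,
            "required_present": must_contain.lower() in answer.lower(),
            "violations": [bad for bad in must_not_contain if bad.lower() in answer.lower()],
        })
    return results

if __name__ == "__main__":
    for r in run_evals():
        status = "OK" if r["required_present"] and not r["violations"] else "FLAG"
        print(f"[{status}] {r['prompt']}")
```

Wire that into a nightly job and page someone whenever a FLAG shows up – that unglamorous plumbing is exactly the supervision described above, and it’s the kind of thing AIaaS vendors sell so you don’t have to build and babysit it yourself.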

This is why AIaaS matters. It’s not about shipping more features or shiny dashboards; it’s about delivering intelligence safely, consistently, and at scale – something enterprises can’t achieve by duct-taping models onto their stack. SaaS gave you software without servers. AIaaS gives you intelligence without a compliance nightmare.

The Case for AIaaS

So why does AI as a Service matter so much? Because it solves the four horsemen of enterprise AI: safety, risk, cost, and infrastructure.

Safety & Governance. Enterprises don’t need a model that can freestyle Shakespeare – they need a model that won’t leak customer data into the wild or generate something so biased it triggers a lawsuit. AIaaS gives you baked-in guardrails, monitoring, and red-teaming. It makes sure your AI stays inside the lines, even when the prompts don’t.

Risk & Compliance. If you’re a bank, a hospital, or a utility, you can’t just shrug and say, “Well, the model hallucinated.” Regulators don’t accept “the AI did it” as a defense. AIaaS delivers the audit trails, explainability, and policy enforcement you need to survive the compliance gauntlet. It’s not optional – it’s your shield. (A rough sketch of what that enforcement-plus-audit pattern looks like in code follows these four items.)

Cost & Expertise. Building an internal AI team isn’t just expensive – it’s Sisyphean. You need GPU wranglers, MLOps architects, ethicists, security pros, and lawyers who can argue with regulators in three languages. And then you have to keep them for years, through every model update. AIaaS takes that insane fixed cost and turns it into a service bill, letting you rent expertise at scale.

Infrastructure & Scale. AI isn’t just software; it’s an industrial process. The compute demands are closer to semiconductor fabs than SaaS. Enterprises don’t want to fight for GPUs or rebuild pipelines every six months. AIaaS providers already run the factories, tune the workflows, and upgrade the stack – you just get the output.
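
Pulling the guardrail and audit-trail threads together, here’s the rough sketch promised above: a wrapper that enforces a simple policy on the way in and writes an audit record on the way out. Everything in it (the blocked-topic list, the call_model stub, the JSONL audit sink) is illustrative only, not any particular provider’s API.

```python
# Illustrative policy-enforcement + audit-trail wrapper around a model call.
# The policy, the call_model stub, and the log format are hypothetical, not a vendor API.
import datetime
import hashlib
import json

BLOCKED_TOPICS = ("wire transfer approval", "patient diagnosis")  # toy policy for illustration

def call_model(prompt: str) -> str:
    """Stand-in for the hosted model endpoint (hypothetical)."""
    return "Drafted response..."

def governed_call(user: str, prompt: str, log_path: str = "ai_audit.jsonl") -> str:
    """Check policy, call the model, and append an audit record either way."""
    blocked = any(topic in prompt.lower() for topic in BLOCKED_TOPICS)
    output = "" if blocked else call_model(prompt)
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # hash, not raw text
        "blocked": blocked,
        "output_chars": len(output),
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    if blocked:
        raise PermissionError("Prompt touches a restricted topic; route to human review.")
    return output

# Example: governed_call("analyst42", "Summarize yesterday's support tickets.")
```

Now multiply that by every model version, every jurisdiction, and every regulator’s idea of an acceptable log, and it becomes obvious why renting this layer beats rebuilding it in-house.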

In short: AIaaS isn’t a luxury. It’s the only way enterprises can survive the complexity curve. Without it, they drown in cost and chaos. With it, they can actually focus on their business instead of pretending to be OpenAI with worse engineers.

Market Growth: Proof AIaaS is Exploding

Let’s talk about scale. Depending on who you believe, the AI-as-a-Service market is already worth $16–20 billion in 2024–2025 and is on track to hit $90–105 billion by 2030. That’s roughly a 35% CAGR, which in venture-speak translates to: buckle up, this rocket isn’t slowing down.
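
For the spreadsheet-inclined, here’s a quick back-of-envelope check on that growth rate, using the midpoints of the ranges quoted above; the exact figure depends on which analyst’s endpoints and which start year you pick.

```python
# Back-of-envelope CAGR check using midpoints of the ranges quoted above.
start_value = 18.0   # ~$18B: midpoint of the $16-20B estimates for 2024-2025
end_value = 97.5     # ~$97.5B: midpoint of the $90-105B projections for 2030

for years in (5, 6):  # 2025->2030 or 2024->2030
    cagr = (end_value / start_value) ** (1 / years) - 1
    print(f"{years}-year window: {cagr:.0%}")  # prints roughly 40% and 33%
```

Either way you slice it, the quoted ~35% figure sits comfortably inside that band.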

And that’s just the slice labeled “AIaaS.” Zoom out to enterprise AI spend overall, and the numbers get even wilder. IDC pegs it at $307 billion in 2025, doubling to $632 billion by 2028. Gartner goes further, projecting $644 billion in generative AI spending by 2025. Translation: enterprises aren’t debating whether to spend on AI – they’re debating how much risk they can stomach and how fast they can deploy.

Why does this matter? Because those curves don’t look like software curves. SaaS adoption was about features and productivity. AIaaS adoption is being pulled forward by fear and compliance. Enterprises aren’t buying AI because it’s cool; they’re buying it because not buying it leaves them behind, exposed, or out of business.

This is the dynamic investors love: when the adoption driver isn’t “shiny features” but existential necessity. AIaaS is shaping up to be less like Slack and more like cybersecurity – a non-negotiable line item where failure isn’t an option.

Validation: Who’s Scaling Successfully

It’s one thing to throw around TAM numbers. It’s another to see who’s already cashing the checks. Spoiler: the leaders of AIaaS are not startups in a garage – they’re the heavyweights, and they’re scaling fast.

OpenAI. Once “just” the model shop, OpenAI is now pulling in ~$12 billion in annualized revenue. They’ve set a $10M+ floor price for enterprise engagements, and landed a $200M Pentagon contract to prototype frontier AI for defense. Oh, and they launched a consulting arm that looks suspiciously like McKinsey with GPUs – embedding engineers directly into enterprises to deliver outcomes, not PowerPoint.

Microsoft. Azure’s AI business has become the crown jewel, with 60,000+ Foundry customers, 14,000+ using the Agent Service, and 80% of the Fortune 500 on the platform. Satya Nadella didn’t exaggerate when he said “every app will be an AI app” – and the revenue agrees. In Q2 FY2025, Microsoft attributed double-digit percentage points of Azure’s growth directly to AI services.

AWS. Bedrock and SageMaker aren’t side hustles anymore – they’re multi-billion-dollar businesses, growing triple digits year-over-year. Andy Jassy isn’t spinning fairy tales in shareholder letters; customers are lining up because building their own model ops is masochism, and renting Amazon’s factory is survival.

Accenture. Consultants aren’t sitting this one out. Accenture has booked $3 billion in gen-AI revenue already, with $1.4–1.5B in just the last two quarters. That’s not slideware – that’s billable hours turning into billions because enterprises need guides, fast.

Palantir. For the skeptics who think AIaaS is all hype, Palantir’s numbers beg to differ. Their AI Platform (AIP) helped push Q2 2025 revenue past $1.0B, up 48% YoY, with U.S. commercial revenue up 93%. They’re proving that when you package AI into an operating system with guardrails, regulated industries open their wallets.

The scoreboard is clear: the companies treating AI as a service – not just a product – are printing revenue and embedding themselves as infrastructure. The market isn’t waiting for “someday” – it’s already happening.

Two Worlds: Individuals vs. Enterprises

Here’s the uncomfortable truth: enterprises are going to have to live in two AI worlds at once.

On one side, you’ve got individual empowerment. Employees want to use AI like they use Google: frictionless, creative, fast. They’ll write reports, brainstorm campaigns, and debug code with copilots in ways that make them 5–10x more productive. Clamp down too hard, and they’ll sneak it in anyway – shadow AI is already alive and well in every Fortune 500.

On the other side, you’ve got enterprise governance. Companies need guardrails, compliance, audit logs, and data security. They need to make sure that customer records don’t leak into prompts, that HR doesn’t accidentally run job descriptions through a public model, and that regulators don’t start circling like sharks.
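
To make that first requirement concrete, here’s a toy illustration of the kind of pre-filter that keeps customer records out of prompts before they ever leave the building. Real deployments lean on proper DLP and PII-detection tooling; the patterns below are deliberately simple and purely illustrative.

```python
# Toy pre-filter that scrubs obvious identifiers before a prompt leaves the enterprise.
# Real deployments use proper PII/DLP tooling; these patterns are illustrative only.
import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # US SSN-style numbers
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD]"),         # card-number-like digit runs
]

def scrub(prompt: str) -> str:
    """Replace anything that looks like an identifier before the prompt is sent out."""
    for pattern, token in REDACTIONS:
        prompt = pattern.sub(token, prompt)
    return prompt

print(scrub("Customer jane.doe@example.com (SSN 123-45-6789) wants a refund."))
# -> "Customer [EMAIL] (SSN [SSN]) wants a refund."
```

Crude, but it shows the shape of the controlled lane: prompts get scrubbed, logged, and policy-checked before any outside model ever sees them.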

Balancing these two worlds is brutal. One feels like chaos; the other feels like handcuffs. But here’s the catch: if enterprises actually want their people to thrive, they can’t just slam the door on personal AI use. They need both worlds. They need the free lane for individual creativity and the controlled lane for enterprise workflows.

That’s where AIaaS comes in. It’s the bridge. It lets employees thrive without burning the company to the ground. It gives enterprises the governance stack they need, while giving individuals the empowerment they crave. Without that bridge, you get either total chaos or total stagnation. With it, you get an ecosystem where innovation and safety can actually coexist.

Conclusion: The Investor’s Bet

SaaS isn’t dying – it’s morphing. The next era isn’t about renting software; it’s about renting intelligence with a safety net. AI as a Service is the power grid for enterprise AI, and without it, most organizations will either drown in complexity or blow themselves up trying to DIY.

The winners in this market won’t be the flashiest feature vendors. They’ll be the companies that solve the boring but existential problems: safety, governance, compliance, trust, and scale. Just like cybersecurity became a non-negotiable spend after the first wave of breaches, AIaaS will become a non-negotiable spend as soon as the first lawsuits, fines, and regulatory crackdowns roll in. (Spoiler: they already have.)

For investors, this is the thesis: AIaaS isn’t a nice-to-have – it’s the only viable operating model for 90% of enterprises. It’s not optional infrastructure; it’s survival infrastructure. That’s why the TAM curves look insane, why OpenAI is embedding consultants, why hyperscalers are printing billions, and why every boardroom is nervously asking, “Where’s our AI strategy?”

So let’s call it: SaaS was about productivity. AIaaS is about survival. And survival markets are where $100B companies are born.

About the Author

Daniel Riedel

Daniel Riedel is a venture capitalist, entrepreneur, and technologist with a career spanning the launch of the internet to the frontiers of AI and deeptech. He is the founder of GenLab Venture Studio, a venture firm building and scaling companies at the intersection of AI, national security, and planetary resilience, backed by global LPs and strategic partners.