As companies allocate larger budgets towards AI models and agent-building tools, the focus is on where those dollars flow.
The early innings of the AI buildout have been defined by the race to build AI compute and capacity, where hardware (i.e., chip) companies have been clear winners. Moving forward, the AI wave will increasingly depend on utilization: how effectively companies convert AI investments into productivity and profit. That transition places software at the center, a sector whose performance has lagged hardware's and where AI monetization remains diffuse. Whether the next leg of opportunity comes from software, and which companies within software, will depend on two key frictions: integrating AI into enterprise workflows and finding sustainable ways to charge for it once it's there.
The AI value chain in software
The software “stack” can be divided into three layers: infrastructure, platform and applications1. Infrastructure providers such as Microsoft Azure and AWS supply the compute power and model-hosting services that underpin the AI economy. Returns here are already visible, with Azure revenue growth accelerating to more than 20% year-over-year, though sustaining those returns will require continued growth in utilization.
Further up the stack, the platform and application layers face the tougher hurdle of integrating AI into enterprise workflows and proving its value. While 44% of U.S. firms now pay for some form of AI model or service, much of that spending flows to general-purpose models such as ChatGPT and Claude rather than to tailored enterprise applications2.
Businesses face three primary hurdles to deeper AI integration:
- Workflow ambiguity. Most firms still don’t know which business functions merit dedicated AI spend. Productivity gains are clear in coding and documentation tasks, but broader process integration remains difficult.
- Data readiness. Many organizations are still cleaning and organizing data to make it usable for AI systems.
- Data privacy and security. Especially in regulated industries, firms remain hesitant to expose proprietary data to cloud-based models, limiting the adoption of commercial AI tools.
Even as adoption grows, monetization remains nascent. Software companies are testing a range of pricing strategies: embedding AI into existing bundles, raising base prices or shifting toward consumption-based models. Amid this uncertainty, the eventual returns on AI applications are difficult to forecast, as are the likely winners.
The race to capture AI’s value
With monetization and business models still forming, the next question is one of value capture: who stands to benefit as enterprise adoption deepens? For investors, the answer hinges on whether incumbents can successfully reinvent themselves, or whether, as in past technology cycles, new entrants will capture greater share.
Incumbents such as Microsoft and Oracle have the advantages of scale, distribution and data access that power AI workflows and applications. Yet incumbency brings challenges too: legacy architectures, customer expectations for predictability and pricing concessions, and pressure to sustain margins.
AI-native startups are more agile but face steep hurdles in customer acquisition, data access and distribution. They are burning significant capital3, not just on product development and R&D but also on marketing and sales.
For now, AI software remains a story of broad adoption but narrow monetization. As budgets for AI models and agent-building tools expand, attention will turn to where those dollars ultimately land. Not all use cases will ramp at the same speed, but early patterns will reveal where durable value is forming, and whether incumbents extend their lead or upstarts capture the next phase of AI growth.