

Headlines about AI infrastructure spending have started to feel almost hyperbolic. Companies are committing hundreds of billions of dollars, with Nvidia suggesting as much as $3-4 trillion in annual AI spending by the end of the decade. Against this backdrop, investors are asking: how much compute do we really need, and is this boom sustainable?

From Moore’s Law to a new compute S-curve

The evolution of compute has long been defined by Moore’s Law, the observation that transistor counts per chip double roughly every two years while costs stay flat. That exponential progress powered faster, smaller and cheaper electronics for decades. But Moore’s Law has since run into physical and economic limits, making annual improvements in devices like smartphones and PCs less noticeable. Enter Nvidia, which has pioneered a new compute paradigm: parallel, accelerated computing¹. This approach is reinvigorating an industry built around Moore’s Law and helping spread AI across industries, ushering in a new “S-curve” of compute demand.
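
Footnote 1 below describes the core idea: processing data in bulk rather than one element at a time. The following is a minimal sketch of that contrast, using vectorized NumPy as a rough stand-in for parallel execution; the array size and the specific operation are arbitrary choices for illustration, not anything drawn from the article.

```python
import time
import numpy as np

# Toy contrast between sequential (element-at-a-time) and bulk/vectorized work.
# The array size is an arbitrary assumption chosen only to make the gap visible.
data = np.random.rand(2_000_000)

start = time.perf_counter()
total_loop = 0.0
for x in data:                           # sequential: one multiply-add per iteration
    total_loop += x * x
loop_seconds = time.perf_counter() - start

start = time.perf_counter()
total_bulk = float(np.dot(data, data))   # the same sum expressed as one bulk operation
bulk_seconds = time.perf_counter() - start

print(f"loop:       {loop_seconds:.3f}s  (sum = {total_loop:.1f})")
print(f"vectorized: {bulk_seconds:.3f}s  (sum = {total_bulk:.1f})")
```

The same principle, applied at the scale of GPU clusters, is what the parallel, accelerated computing paradigm described in footnote 1 refers to.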

The first wave of AI spending focused on training large language models (LLMs). Training is significantly, and increasingly, compute-intensive, but early LLM demands were manageable². Today, compute needs are accelerating rapidly, particularly as more models move into production.
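
For a rough sense of what “increasingly compute-intensive” means, the sketch below compounds the 2.4x-per-year training-cost growth rate cited in footnote 2 from a normalized 2016 baseline; the baseline value and year range are illustrative assumptions, not figures from the article.

```python
# Compound the ~2.4x-per-year frontier training-cost growth cited in footnote 2.
# The normalized baseline of 1.0 in 2016 is an assumption for illustration only.
growth_per_year = 2.4
for year in range(2016, 2025):
    multiple = growth_per_year ** (year - 2016)
    print(f"{year}: ~{multiple:,.0f}x the 2016 training cost")
```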

Inference is also evolving into a major driver of compute demand. Early inference was “single-shot” (quick responses based on pre-trained data) but is now shifting toward “reasoning inference,” which requires more compute but produces better outcomes and broadens AI use cases. Nvidia estimates that reasoning models answering challenging queries could require over 100 times more compute than single-shot inference³.
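
To make that estimate concrete, the back-of-the-envelope sketch below applies a 100x multiplier to a hypothetical single-shot workload; only the multiplier comes from the Nvidia estimate in footnote 3, while the per-query compute and daily query volume are invented placeholders.

```python
# Rough scaling exercise: reasoning inference vs. single-shot inference.
# Only the 100x multiplier comes from the article (footnote 3); the per-query
# compute and daily query volume below are hypothetical placeholders.
single_shot_flops_per_query = 1e12   # assumed baseline compute per query
queries_per_day = 1_000_000          # assumed workload
reasoning_multiplier = 100           # ">100 times more compute" (footnote 3)

single_shot_total = single_shot_flops_per_query * queries_per_day
reasoning_total = single_shot_total * reasoning_multiplier

print(f"single-shot: {single_shot_total:.1e} FLOPs/day")
print(f"reasoning:   {reasoning_total:.1e} FLOPs/day")
```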

Business models are evolving around AI

Cloud providers now compete in large part on access to AI compute, making chips and infrastructure central to securing new business. Software firms are embedding AI into productivity, coding and customer support tools, aiming to monetize AI usage through subscriptions and enterprise contracts. Meanwhile, applications are broadening from digital domains into physical ones such as robotics and autonomous vehicles, further expanding demand.

Hyperscalers are also exploring ways to optimize their AI compute investments, including refining software algorithms and experimenting with specialized AI chips (application-specific integrated circuits, or ASICs) designed to make specific AI tasks more efficient. While debate continues regarding the long-term role of ASICs, the broader trend is clear: AI workloads keep proliferating, requiring ever more compute.

The road ahead

Skepticism about the sheer pace of AI investment, and about the returns it will generate, is healthy and warranted. Foundation model builders like OpenAI and Anthropic account for roughly 20% of AI capex⁴ by some estimates, yet their business models remain nascent, while expanding players like Oracle are now tapping bond markets to finance AI infrastructure.

Still, beneath the near trillion-dollar headlines is a real computing platform shift, decades in the making, that is reshaping industries and business models. While this AI infrastructure buildout is unlikely to reverse, it is also unlikely to progress in a straight line. Not all participants will be winners, and the most revolutionary business models may not even exist yet. Active management, with an eye toward which companies can create enduring value, will be essential for investors as this story unfolds.

1 Parallel computing architectures, such as GPU-based and multi-core systems, handle complex, data-intensive workloads (like AI, big data analytics, and video processing) far more efficiently than traditional computing approaches underpinned by Moore’s Law. By processing vast amounts of data simultaneously rather than sequentially, parallel computing systems enable faster computation and greater scalability.
2 The cost to train the most compute-intensive models has grown at a rate of 2.4x per year since 2016, with AI accelerator chips accounting for some of the most significant expenses. Source: Cottier et al., "The Rising Costs of Training Frontier AI Models," arXiv:2405.21015 (2024).
3 Source: Nvidia Glossary, AI Reasoning.
4 Source: New Street Research.
 