
The broader trend is clear: AI workloads keep proliferating, requiring ever more compute.

Headlines about AI infrastructure spending have started to feel almost hyperbolic. Companies are committing hundreds of billions of dollars, with Nvidia suggesting as much as $3-4 trillion in annual AI spending by the end of the decade. Against this backdrop, investors are asking: how much compute do we really need, and is this boom sustainable?

From Moore’s Law to a new compute S-Curve

The evolution of compute has long been defined by Moore’s Law, under which transistor counts per chip doubled roughly every two years while costs held flat. That exponential progress powered faster, smaller and cheaper electronics for decades. But Moore’s Law has since run into physical and economic limits, making annual improvements in devices like smartphones and PCs less noticeable. Enter Nvidia, which has pioneered a new compute paradigm: parallel, accelerated computing1. This approach is reinvigorating an industry built around Moore’s Law and helping spread AI across industries, ushering in a new “S-curve” of compute demand.
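The doubling cadence behind Moore’s Law compounds dramatically over time. A minimal arithmetic sketch (the function name and horizon are illustrative, not from the article):

```python
def transistor_growth(years: int, doubling_period_years: int = 2) -> int:
    """Multiplicative growth in transistor count after `years`,
    assuming one doubling every `doubling_period_years` (Moore's Law)."""
    return 2 ** (years // doubling_period_years)

# A doubling every two years compounds to a ~1,000x density gain in two decades.
print(transistor_growth(20))  # 1024
```

That thousandfold-per-two-decades compounding is what made the law’s slowdown so consequential for the industry.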

The first wave of AI spending focused on training large language models. Training is significantly, and increasingly, compute-intensive, but early LLM demands were manageable2. Today, compute needs are accelerating rapidly, particularly as more models move into production.
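The cited 2.4x annual growth in frontier training cost (footnote 2) compounds quickly. A hedged sketch, assuming an illustrative eight-year window (e.g., 2016 to 2024):

```python
def cost_multiple(annual_growth: float, years: int) -> float:
    """Cumulative cost multiple after `years` of compounding at `annual_growth` per year."""
    return annual_growth ** years

# The reported 2.4x/year growth in frontier training cost compounds
# to roughly a 1,100x increase over eight years.
print(round(cost_multiple(2.4, 8)))  # 1101
```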

Inference is also evolving into a major driver of compute demand. Early inference was “single-shot” (quick responses based on pre-trained data) but is now shifting toward “reasoning inference,” which requires more compute but produces better outcomes and broadens AI use cases. Nvidia estimates that reasoning models answering challenging queries could require over 100 times more compute than single-shot inference3.
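One way to see where that 100x can come from: reasoning models emit long intermediate “thinking” traces, and decode cost scales with tokens generated. The sketch below uses a common rule of thumb (~2 FLOPs per parameter per generated token); the model size and token counts are illustrative assumptions, not figures from the article:

```python
def inference_flops(params: float, generated_tokens: int) -> float:
    """Rough decode-cost estimate: ~2 FLOPs per model parameter
    per generated token (common approximation, assumed here)."""
    return 2.0 * params * generated_tokens

PARAMS = 70e9  # hypothetical 70B-parameter model

single_shot = inference_flops(PARAMS, generated_tokens=200)    # short direct answer
reasoning = inference_flops(PARAMS, generated_tokens=20_000)   # long reasoning trace

print(reasoning / single_shot)  # 100.0
```

Under these assumptions, a 100x-longer output alone yields a 100x compute multiple, before accounting for any extra sampling or search.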

Business models are evolving around AI

Cloud providers now compete in large part on access to AI compute, making chips and infrastructure central to securing new business. Software firms are embedding AI into productivity, coding and customer support tools, aiming to monetize AI usage through subscriptions and enterprise contracts. Meanwhile, applications are broadening from digital domains into physical ones such as robotics and autonomous vehicles, further expanding demand.

Hyperscalers are also exploring ways to optimize their AI compute investments, including refining software algorithms and experimenting with specialized AI chips (ASICs) to make specific AI tasks more efficient. While debate continues regarding the long-term role of ASICs, the broader trend is clear: AI workloads keep proliferating, requiring ever more compute.

The road ahead

Skepticism about the sheer pace of AI investment and ROI is healthy and warranted. Foundational model builders like OpenAI and Anthropic account for roughly 20% of AI capex4 by some estimates, despite still-nascent business models, while expanding players like Oracle are now tapping bond markets to finance AI infrastructure.

Still, beneath the near trillion-dollar headlines is a real computing platform shift, decades in the making, that is reshaping industries and business models. While this AI infrastructure buildout is unlikely to reverse, its path will not be a straight line, either. Not all participants will be winners, and the most revolutionary business models may not even exist yet. Active management, with an eye toward which companies can create enduring value, will be essential for investors as this story unfolds.

1 Parallel computing architectures, such as GPU-based and multi-core systems, handle complex, data-intensive workloads (like AI, big data analytics, and video processing) far more efficiently than traditional computing approaches underpinned by Moore’s Law. By processing vast amounts of data simultaneously rather than sequentially, parallel computing systems enable faster computation and greater scalability.
2 The cost to train the most compute-intensive models has grown at a rate of 2.4x per year since 2016, with AI accelerator chips accounting for some of the most significant expenses. Source: Cottier et al., "The Rising Costs of Training Frontier AI Models," arXiv:2405.21015 (2024).
3 Source: Nvidia Glossary, AI Reasoning.
4 Source: New Street Research.
 
