

On 19 November, Karen Ward, EMEA Chief Market Strategist, and Alex Whyte, Portfolio Manager in our International Equity Group, sat down to discuss all things artificial intelligence (AI). In light of the incredible engagement we received from clients during the webinar, we have prepared responses to some of the most popular questions. A replay of the webinar can be viewed here. For a broader perspective on the AI outlook, view our Investment Outlook 2026 here.

What is the outlook for the monetisation of AI applications, and how do the options differ for business to business (B2B) and business to consumer (B2C)?


Investor attention is currently most focused on the direct monetisation of B2C customers (e.g. individuals paying for premium ChatGPT usage), given that it’s arguably the most visible sign of current willingness to pay. In reality, however, it’s only likely to be a small part of future revenue streams for AI model providers.

With many AI application providers still private companies, current revenue breakdowns can only be pieced together from a patchwork of various sources. We can, however, bucket potential revenue streams into four groups. 

B2C

1) Direct consumer subscriptions

History provides many examples of tech companies that have first grown a loyal user base and then later looked to charge for what was previously offered as a free service. The key risk to this approach is that consumer willingness to pay is typically low, particularly when (potentially cheaper) alternatives exist. For context, Spotify and Netflix, two of the world’s most successful consumer subscription companies, are expected to deliver around $20bn and $50bn of revenue respectively this year. OpenAI is projecting eyewatering user growth over the next five years, but even it expects fewer than 10% of consumers to be willing to pay for a subscription in 2030, according to reporting by The Information.

2) Indirect consumer monetisation

Instead of asking users to pay for services directly, AI applications could generate profits via adverts, or by allowing users to direct AI ‘agents’ to carry out tasks such as online bookings on their behalf, while taking a cut from the supplier companies. Advertising is undoubtedly a huge potential market: Meta alone should make close to $200bn of revenue this year, mainly from ads. This is, however, a very different approach to the original proposition for investors in AI. Online advertising is also a very mature market, so if AI companies do start to boost ad revenues, this is likely to a) cannibalise existing revenue streams of other megacap tech names, and b) be far more capex intensive. 

B2B

1) Direct access to models

AI companies are already monetising their technology by charging corporates for ‘API calls’ – in other words, where a business pays for a direct link into a large language model (LLM) and is charged for every query that they run.

This route allows companies to build their own use cases on top of a base product, and is one of the highest-potential revenue streams for AI applications. As a result, trends in the rollout (and/or cancellation) of such partnerships are receiving careful scrutiny, which also explains why the MIT Media Lab study published in August, which highlighted a 95% failure rate in corporate AI pilots, was received so negatively by many investors.
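To make the per-query charging model concrete, the sketch below shows how per-token API pricing accumulates into a corporate bill. All prices and usage figures are purely hypothetical illustrations, not any provider's actual rates.

```python
# Illustrative sketch of per-query ("API call") LLM billing.
# Prices and usage figures are hypothetical, chosen only to show how
# per-token charges compound at enterprise scale.

def query_cost(input_tokens: int, output_tokens: int,
               price_in_per_m: float = 2.50,
               price_out_per_m: float = 10.00) -> float:
    """USD cost of one API call, priced per million tokens."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

# A business running 50,000 queries a day, each averaging
# 1,500 input tokens and 500 output tokens:
daily_cost = 50_000 * query_cost(1_500, 500)
annual_cost = daily_cost * 365
print(f"Daily: ${daily_cost:,.2f}, annual: ${annual_cost:,.0f}")
```

Even at modest per-query prices, volume does the work: at these assumed rates the single-query cost is under a cent, but the annual bill runs well into six figures for one mid-sized deployment.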

2) AI agents

Similar to how companies can generate revenues by offering AI agents to carry out real-world tasks for consumers, they can also do the same for businesses. A large number of AI startups are focused on this area, promising solutions that can handle tasks currently done by humans, but more cheaply.

In our view, this route has the largest potential upside for revenues given the material cost savings that agents could offer companies, thus justifying a high price for the technology. The devil is in the details, however, with reliability and accuracy issues currently preventing widespread adoption. We also take many of the recent reports of ‘jobs replaced with AI’ with a large pinch of salt, given the temptation for businesses to use this explanation to mask weakening demand.

How real is the threat of technological disruption for today’s AI megacaps? 

With technological developments currently moving at breakneck pace, we think it is dangerous to have too much belief in a single approach.

The current boom was catalysed by the breakout success of LLMs. These models simulate human-like conversation: they are ‘trained’ on huge troves of data, using vast computing power, to ‘learn’ what to say in a given situation. Most current AI development is looking to improve the capabilities, or accuracy, of today’s LLMs, and to harness them to solve specific problems. The types of chips produced by companies like NVIDIA are uniquely well suited to the kind of maths that underpins these large models.

We see three potential risks to this approach. First, there is a risk that further development of LLMs will see diminishing returns. Different generations of models (e.g. GPT-4 vs. GPT-5) are already seeing smaller leaps in progress. This presents a real problem to the incumbent companies, given that it challenges their ability to stay ahead of competition, while also raising questions about how useful models can ultimately become in an enterprise environment.

Today, model improvements are largely reliant on using more computing power to train models on ever larger data sets. If instead, technological breakthroughs allow companies to improve models in a less resource-intensive fashion, this could materially impact current assumptions about the computing power required over the coming years.

Second, January’s ‘DeepSeek moment’, where a Chinese startup found a less resource-intensive way to build a foundational AI model, is a good reminder that new techniques could emerge that again challenge the incumbents. What if more companies outside of the current AI ecosystem find cheaper or faster ways to build new models? This would not only threaten valuations of the incumbents but also radically shift demand for chips across the supply chain.

Third, is small the new large? If LLMs see progress stagnate as described above, it may be that small language models that focus on completing specific tasks in the least resource-intensive way become far more attractive. Instead of requiring huge data centres, these types of models could be run on personal devices, again with huge implications for current AI capex plans, where expectations have only been heading higher. 

What is the outlook for chip demand? If AI-driven demand falters, could alternative demand sources compensate?

Most AI applications currently run on GPUs (graphics processing units), the chips pioneered by NVIDIA. These chips are extremely well-suited to training and running AI models, but they are also expensive, and currently limited in supply.

As a result, some companies have pivoted from GPUs to application-specific integrated circuits (ASICs): chips custom-designed to run AI applications, produced for a single customer. Of course, not all businesses have the capabilities to design and roll out their own chips, so the major players have been companies like Google (with their TPUs, or tensor processing units) and Amazon (with their ‘Trainium’ chips).

For those who are successful, this approach not only buys companies some autonomy from NVIDIA but can also deliver lower-cost chips that are more tailored to their specific needs, though potentially with performance trade-offs. Companies most geared to the rise in ASICs include Broadcom, who work with these large customers to design the silicon they need.

At present, we don’t expect the rise of ASICs to threaten NVIDIA’s dominant market share. We do, however, expect ASIC demand from large customers to continue to grow, and we would argue that more options for chip consumers should be a long-term positive for the market.

If chip demand for AI applications does wane, it is hard to see how other use cases could step in to absorb projected supply. Taking GPUs as an example, more than $115bn of NVIDIA’s estimated $130bn in 2025 revenues is coming from the data centre segment, a business line worth less than $3bn as recently as 2020. Gaming, the other main demand driver for GPUs, accounts for just $11bn this year.

We see the same at other chip suppliers, where the sheer scale of AI spending is dwarfing all other segments. Other areas of demand, like cryptocurrency mining, are nowhere near big enough to even start to replace AI data centre demand. Other heavy users of semiconductors, such as the smartphone and PC/laptop industries, are in a very low growth phase, so are unlikely to bail out investors if projected AI chip demand does not come through and we are left with excess capacity.
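The figures cited above can be sanity-checked with simple arithmetic. In the sketch below, the "other" residual is inferred from the cited totals rather than a reported figure, and the 20% shortfall scenario is purely hypothetical.

```python
# Back-of-envelope check on the NVIDIA revenue figures cited above
# (USD bn, 2025 estimates). The "other" residual is inferred from the
# cited ~$130bn total, not a reported segment figure.
revenue = {"data_centre": 115, "gaming": 11, "other": 4}
total = sum(revenue.values())              # ~130

dc_share = revenue["data_centre"] / total  # data centre dominance
print(f"Data centre share of revenue: {dc_share:.0%}")

# Hypothetical scenario: a 20% shortfall in data-centre demand.
gap = 0.20 * revenue["data_centre"]        # $23bn gap
print(f"Gap vs. entire gaming segment: {gap / revenue['gaming']:.1f}x")
```

The point of the arithmetic: data centre is roughly 88% of the total, so even a moderate shortfall there exceeds twice the entire gaming segment, which is why no alternative demand source looks big enough to absorb projected supply.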

How concerned should we be about circular financing agreements?

When the tech bubble burst in the early 2000s, it became apparent that a lot of the ‘profits’ that were supposedly being generated were, in reality, cashflows that were simply being exchanged between different tech companies despite no economic value being created. For example, company A bought online advertising space from company B, which then used that revenue to buy advertising space from company C, which then reinvested that money into company A.
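The A-to-B-to-C loop above can be sketched as a toy cashflow model, with purely illustrative numbers, to show why headline revenue can look healthy while the system as a whole creates nothing:

```python
# Toy model of the circular flow described above: three companies pass
# the same $100m around a loop. Each books $100m of "revenue", yet net
# cash across the system is unchanged and no external value enters.
flows = [("A", "B", 100), ("B", "C", 100), ("C", "A", 100)]  # (payer, payee, $m)

net = {c: 0 for c in "ABC"}
revenue_booked = {c: 0 for c in "ABC"}
for payer, payee, amount in flows:
    net[payer] -= amount           # cash out (cost or investment)
    net[payee] += amount           # cash in, booked as revenue
    revenue_booked[payee] += amount

print(revenue_booked)  # every company reports $100m of revenue...
print(net)             # ...but every net cash position is exactly zero
```

Three seemingly profitable income statements, zero net cash created: that gap between reported revenue and economic value is the hallmark of circular financing.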

Today’s situation looks very different. Many large tech companies already generate incredibly impressive earnings, and unlike the flimsy corporate balance sheets of 25 years ago, today’s biggest players are sitting on cash piles worth hundreds of billions of US dollars. Critically, this financial strength has so far allowed many of the hyperscalers to fund their AI-related capex from free cash flow. We note, however, that capex commitments planned for the coming years will eat further into available cash piles.

The vast array of deals made between different members of the AI ecosystem over recent months does, however, have some echoes of the late 90s. September’s announcement that NVIDIA plans to invest up to $100 billion in OpenAI was arguably the highest profile of these investment deals. OpenAI, in turn, will funnel much of this investment into securing new compute capacity, which will then drive demand for NVIDIA’s chips.

Optimists would argue that NVIDIA is using the cash on its balance sheet to simply bring forward future demand. Yet, the more the fate of individual companies becomes intertwined, the greater the risk that a single failure could lead the broader system to unravel. 


The value of investments may go down as well as up and investors may not get back the full amount invested.

Copyright 2025 JPMorgan Chase & Co. All rights reserved.