Growth Equity Partners

Decoding AI adoption: Unveiling insights for future growth

The release of OpenAI’s ChatGPT to the public in November 2022 changed the trajectory of AI adoption.

As ChatGPT became the fastest-growing consumer application in history, the large language model (LLM) that powers it caught the attention of enterprises and turned AI adoption into a B2C2B¹ – business to consumer to business – journey.

AI is now a top C-suite priority, creating a massive new area of spending across AI infrastructure and applications. In fact, while enterprises largely reduced software spending in 2023, generative AI defied this trend.

The tech stack around the space has also evolved considerably, thanks to robust fundraising by startups and established technology companies pouring billions into R&D to strengthen their AI offerings.

However, the dizzying array of tools and techniques now available for developing an LLM architecture obscures how enterprises actually make purchasing decisions.

Understanding the multiple options available to CTOs and how they think about incorporating LLMs into their organizations is critical to envisioning how the AI ecosystem might grow and evolve. This landscape will remain complex for some time, but our conversations with practitioners clarify a framework for thinking about emerging corporate access patterns.

V1 to VF: Three typical ways to access LLMs

Three general models of how companies access LLMs have emerged. In order of increasing complexity, these are:

  1. Use off-the-shelf applications or call APIs provided by standalone foundation model vendors (e.g. OpenAI or Cohere)
  2. Pursue a more involved relationship with standalone vendors or incumbent tech companies (e.g. Google, Microsoft, Amazon, Databricks, Snowflake)
  3. Develop a self-hosted model

Generally, enterprises opt for the simplest method that accomplishes the goals they set out to achieve with LLMs. However, further considerations, such as privacy, cost, vendor onboarding and existing tech stacks can change the calculation. The popularity of each access model will ultimately determine the main beneficiaries of enterprise investment in AI and machine learning (ML).

A line chart illustrating the stages of the AI/ML journey, from experimentation (V1) to a highly differentiated final version (VF), showing the complexity and level of investment at each stage.

Source: J.P. Morgan Asset Management; as of March 31, 2024

V1: Getting started with off-the-shelf applications

The simplest access models are off-the-shelf applications or direct APIs through LLM providers. Enterprises utilize LLMs by paying per token (a token is typically a small chunk of text – about four English characters, according to OpenAI) without having to manage their own infrastructure. Architecturally, this model is as straightforward as embedding ChatGPT to field open-ended questions from customers (think Expedia) or inserting the logic of an LLM behind an enterprise’s own front-end interface to handle searches.

Powering existing applications with the logic of LLMs often greatly improves performance vs. existing substitutes (when have you found a website’s FAQ section more helpful than ChatGPT?), making this path ideal for experimentation and getting to “version 1” of an LLM implementation. Standalone foundation model vendors, such as OpenAI, Anthropic, Cohere and AI21, all offer easy-to-use solutions in this category.

The pay-per-token model is lucrative for LLM providers. For example, OpenAI’s GPT-4 can cost around $30 per 1 million input tokens and $60 per 1 million output tokens, quickly escalating costs for customers with heavy usage. Consider this example: a chatbot on a travel app like Expedia receives thousands of customer queries per day, and each query consists of several tokens. If we assume Expedia’s 112 million monthly unique visitors each ask one question per month, such as, “What would be a good itinerary for a one-week trip to the Amalfi Coast?” and receive a detailed response (1,697 characters long in our test), we arrive at nearly $35 million/year in spend across Expedia’s user base. It’s easy to see this cost exploding with increased usage. Therefore, enterprises must configure their generative AI apps carefully – identifying common questions and routing users to existing content rather than generating entirely new content for each query, shortening potential inputs and outputs, and optimizing responses to reduce follow-up questions.
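The arithmetic behind that estimate can be sketched in a few lines. This is a back-of-the-envelope model using the figures above plus OpenAI’s rough four-characters-per-token heuristic; the function name and structure are ours, not an official pricing calculator.

```python
# Back-of-the-envelope reproduction of the Expedia estimate above.
# Figures from the text: 112M monthly unique visitors, one query per user
# per month, a 1,697-character response, and GPT-4 pricing of $30 / $60
# per 1M input / output tokens. The 4-characters-per-token ratio is
# OpenAI's rough heuristic for English text.

CHARS_PER_TOKEN = 4

def annual_api_spend(users, queries_per_user_month, input_chars, output_chars,
                     input_price=30.0, output_price=60.0):
    """Estimated annual pay-per-token spend in dollars (prices per 1M tokens)."""
    input_tokens = input_chars / CHARS_PER_TOKEN
    output_tokens = output_chars / CHARS_PER_TOKEN
    cost_per_query = (input_tokens * input_price + output_tokens * output_price) / 1e6
    return users * queries_per_user_month * cost_per_query * 12

question = "What would be a good itinerary for a one-week trip to the Amalfi Coast?"
annual = annual_api_spend(users=112_000_000, queries_per_user_month=1,
                          input_chars=len(question), output_chars=1_697)
print(f"${annual / 1e6:.1f}M per year")  # roughly $35M, matching the text
```

Note how thoroughly the output side dominates: the 1,697-character answer accounts for nearly all of the per-query cost, which is why shortening responses is the single biggest lever.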

In summary, common considerations with using off-the-shelf models via API include:

Privacy: Enterprises that prefer to keep their proprietary data in-house often opt to host models within their existing cloud infrastructure. Companies in the financial and medical industries, in particular, follow this path.

Cost: Companies must track the throughput of their LLM-based applications and ensure the volume of tokens used is manageable. At some point, self-hosting an open-source model is significantly cheaper than paying by token or for committed volumes with a closed-source model.

Specialization: Using general models in specific domains like financial markets or technology can lead to undesirable and inaccurate outputs.
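The cost consideration above often comes down to a break-even point: beyond some monthly token volume, a fixed self-hosting bill undercuts pay-per-token pricing. A minimal sketch – the blended API rate and monthly infrastructure cost below are purely illustrative assumptions, not vendor quotes:

```python
# Hypothetical break-even between pay-per-token API pricing and
# self-hosting an open-source model. Both constants are illustrative
# assumptions for the sake of the sketch, not real price quotes.

API_PRICE_PER_M_TOKENS = 45.0   # assumed blended input/output $ per 1M tokens
SELF_HOST_PER_MONTH = 25_000.0  # assumed GPU + ops cost for a hosted model

def cheaper_to_self_host(tokens_per_month):
    """True once monthly API spend exceeds the fixed self-hosting bill."""
    api_spend = tokens_per_month / 1e6 * API_PRICE_PER_M_TOKENS
    return api_spend > SELF_HOST_PER_MONTH

breakeven_tokens = SELF_HOST_PER_MONTH / API_PRICE_PER_M_TOKENS * 1e6
print(f"Break-even at ~{breakeven_tokens / 1e6:.0f}M tokens/month")
```

The crossover shifts with utilization: self-hosted capacity is paid for whether or not it is busy, which is why the text frames this as a consideration rather than a rule.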

For enterprises with straightforward, low-volume use cases – such as basic customer chatbots or enterprise search – the off-the-shelf model makes sense; this will be the case for countless companies, especially in the mid-market and long tail of the Fortune 2000. Enterprises with more robust needs (for example, a search tool that understands the nuances of pharmaceuticals) or tighter constraints (such as strict data retention rules) will find this type of access too limited, pushing them to the next option.

V2: Upgrading to AI suites

After an enterprise gets its LLM-powered application off the ground, it may seek additional controls – privacy guarantees, negotiated pricing and hand-holding through customization with proprietary data. These “V2” LLM-based applications are more deeply integrated, somewhat customized and provide additional value vs. off-the-shelf options.

Players like OpenAI and Cohere are winning large enterprise contracts for V2 access, but incumbent tech companies – namely the large cloud service providers (CSPs), with offerings such as Azure OpenAI Service, Google Vertex AI and AWS Bedrock – are rushing to seize the LLM opportunity with several tailwinds behind them: captive customer bases and committed spend, compliance and security guarantees, and vast product suites complementing LLMs. In industries with legacy tech stacks and stringent regulatory oversight, CSPs’ suites are especially enticing. For example, hospital software provider Epic announced a sprawling generative AI product suite built on top of Azure OpenAI Service.

Databricks and Snowflake are also betting that their proximity to enterprises’ data will naturally position them to capture burgeoning AI spend. Databricks took this a step further with its release of DBRX – its open-source, general-purpose LLM. The company pitches its Mosaic AI Foundation Model product as the ideal managed solution for deploying DBRX: customers can privately host DBRX, work with Databricks to deploy it on their own hardware and fine-tune the model on their own data.

Microsoft’s OpenAI Service offers an interesting look into how companies are accessing generative AI. Microsoft maintains a close relationship with OpenAI, which has a standalone product and fine-tuning capabilities accessible via API. Practitioners highlight an emerging pattern: companies experiment on OpenAI’s platform then shift to Microsoft’s Azure OpenAI Service when going to production.

Startups in the space face a difficult decision – they need to meet customers where they are, but potentially lose their branding power and pricing leverage when slotted alongside several other models within a broader marketplace. Some enterprises also wish to avoid being locked into a specific model and preserve optionality. Additionally, the tech majors sometimes prioritize their own models above third-party models, further alienating the standalone LLM vendors. Expect startups – Anthropic, Cohere, AI21 and others – to deepen partnerships with incumbents while building up their own product suites and ecosystems.

VF: Self-hosting LLMs

Optimizing for cost, privacy and cybersecurity needs and specialization, enterprises may decide to host models themselves. In this paradigm, enterprises can choose an open-source model, such as Llama 2, and deploy it on their own infrastructure. Although self-hosting obviates spend on a proprietary model, not every enterprise is up to the task of managing its own infrastructure or customizing a model with its own data. Longer term, we expect more enterprises to pursue this route as they differentiate their LLM-powered applications and get to the final version (VF). Look for startups hosting open-source models (HuggingFace) and those reducing the barriers of customization (Weights & Biases, Comet, Galileo) to power this leg of the enterprise LLM journey. 

Tracking enterprise access patterns

New LLM vendors are growing their enterprise presence and product suites, but they must navigate competing offerings from incumbent tech companies – often betting both on their standalone products and on selling through services like AWS Bedrock (which offer several models in one place). Enterprises often initially follow the path of least resistance, experimenting with models directly through the foundation model vendors’ APIs. Over time, they may turn those experiments into products within the comfort of a larger suite (OpenAI, Azure OpenAI or Databricks) or via a self-hosted solution. Expect large, highly regulated enterprises to choose products from the tech majors and smaller and/or less-regulated companies to engage directly with standalone LLM vendors. The savviest customers will opt to self-host.

AI has solidified as a top C-suite priority and isn’t going anywhere. Enterprises will continue to incorporate LLMs into their organizations and must be selective as they develop LLM architectures against the backdrop of a growing AI ecosystem. This incorporation of AI creates a massive new area of spending across AI infrastructure and applications, expected to exceed $1.8 trillion² by 2030.

¹ Janelle Teng at BVP coined this phrase.
² Grand View Research.

Copyright 2025 JPMorgan Chase & Co. All rights reserved.

This website is a general communication being provided for informational purposes only. It is educational in nature and not designed to be a recommendation for any specific investment product, strategy, plan feature or other purposes. By receiving this communication you agree with the intended purpose described above. Any examples used in this material are generic, hypothetical and for illustration purposes only. None of J.P. Morgan Asset Management, its affiliates or representatives is suggesting that the recipient or any other person take a specific course of action or any action at all. Communications such as this are not impartial and are provided in connection with the advertising and marketing of products and services. Prior to making any investment or financial decisions, an investor should seek individualized advice from personal financial, legal, tax and other professionals that take into account all of the particular facts and circumstances of an investor's own situation.

 

Opinions and statements of financial market trends that are based on current market conditions constitute our judgment and are subject to change without notice. We believe the information provided here is reliable but should not be assumed to be accurate or complete. The views and strategies described may not be suitable for all investors.

 
