In brief:
- The 1Q26 earnings season is the strongest in a while. 85% of companies have beaten expectations, the most since 2Q21 and well above the long-term average of 73%.
- The AI buildout is supporting earnings growth across sectors. Banks are benefitting from AI-driven capital markets activity, and demand for electrical equipment is boosting industrials.
- Earnings growth has followed AI capex out of the hyperscalers and into the semi industry. Hyperscaler earnings grew only 8% y/y in 1Q26 vs. 97% for semis. Remarkably, operating margins are stable thanks to efficiency improvements like custom AI chips.
- The datacenter buildout isn’t just benefitting AI chip makers. Demand is surging for CPUs and memory, creating markets for new, higher-margin products optimized for AI. Further down the supply chain, chip manufacturers and their suppliers are also seeing strong earnings growth.
- Semis cannot continue outgrowing the hyperscalers forever. Ultimately, the success of the AI ecosystem depends on monetizing consumer and enterprise adoption.
1Q26 earnings are the strongest in a while
Equity markets are back to the future. News about peace talks was enough to propel the S&P 500 out of a 9% drawdown and right back to all-time highs. Markets are once again all about AI, with the tech sectors driving 45% of the gains. The rally has been further supported by the strongest earnings season in a while. 85% of companies have beaten expectations, the most since 2Q21 and well above the long-term average of 73%. Tech has of course been the highlight, but results in every sector have come in meaningfully above expectations:
- Financials are on track to grow earnings by 21% y/y, thanks to elevated capital markets activity. Despite the geopolitical shock, the value of global M&A announcements in 1Q26 was the second highest ever, trailing only 4Q25, boosting investment banking revenues. Large-scale, strategic M&A held up the best as CEOs look through the conflict to position their businesses for AI.
- Consumer companies aren’t seeing materially weaker demand. It’s still early, but according to Chase credit card data, consumer spending accelerated in 1Q26 and remained strong through March, even in discretionary categories like retail, entertainment and travel. The commodity shock will increase costs, but hedging and longer-term contracts with suppliers should prevent an acute shock.
- Industrials earnings are growing by 19.3% y/y with strength across several industries. Commercial loan growth is picking up at banks, suggesting the conflict hasn’t yet stopped companies from investing in their businesses. The datacenter buildout is driving earnings growth for electrical component makers, and airlines have continued ordering new jets, supporting the aerospace & defense industry.
Hyperscalers are hanging in
Markets have moved on from the Mag 7. They’re still all about AI, but understanding of the beneficiaries is evolving. First it was the Mag 7, then the hyperscalers and now leadership has followed capex down the AI supply chain.1 Semis are up 24% YTD, outperforming the hyperscalers by 19%. Earnings growth too is much stronger, with semi EPS rising 97% y/y vs. only 8% for the hyperscalers, and the dispersion is expected to continue.2
But semi success is entirely dependent on the hyperscalers themselves, who account for an estimated 22% of semi sales. Luckily, it doesn’t seem like they’ll run out of money anytime soon. This quarter, hyperscalers raked in a combined $430bn in revenues, over 2x that of the semi industry. Cloud sales grew an average of 44%, agentic AI subscriptions are in the millions and ad prices are up double digits thanks to AI recommendation algorithms.
AWS growth was the fastest in 15 quarters, and three years into the AI era, the business is 260x larger than it was three years into the cloud era. The CEO explained, “It's very unusual for a business to grow this fast on a base this large…the last time we saw growth at this clip, AWS was roughly half the size. We've never seen a technology grow as rapidly as AI.”
Microsoft’s AI business has surpassed $37bn annual recurring revenue, up 123% y/y, and larger than 77% of S&P 500 companies. The cloud business is doing the heavy lifting, but AI products are starting to pull some weight. Copilot had a record quarter with subscriptions up 250% y/y, the highest growth since launch, and queries up almost 20% q/q. “To put this momentum in perspective, weekly engagement is now at the same level as Outlook as more and more users make Copilot a habit.”
But all that growth comes at a price. After spending a combined $715bn on capex from 2023 through 2025, the hyperscalers are on track to spend $629bn in 2026 alone, already up 35% from the January 1st estimate. Capex isn’t just increasing because the hyperscalers are building more compute; the AI demand frenzy is also pushing up prices throughout the supply chain.
Microsoft disclosed that of the $190bn they expect to spend on capex in 2026, $25bn, or 13%, is from higher prices, not more compute. Similarly, Meta increased guidance by $10bn to $125-$145bn. Management summarized, “Most of that is due to higher component costs, particularly memory pricing. But every sign that we're seeing in our own work and across the industry gives us confidence in this investment.”
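The arithmetic behind these disclosures is worth making explicit. A quick sketch, using only the figures cited above, separates the price-driven portion of Microsoft's capex from the volume-driven portion and backs out the implied January estimate for the hyperscaler aggregate:

```python
# Illustrative arithmetic using the figures cited above; no new data.
msft_capex_2026 = 190e9        # Microsoft's expected 2026 capex ($)
msft_price_component = 25e9    # portion attributed to higher component prices ($)

price_share = msft_price_component / msft_capex_2026
print(f"Price-driven share of Microsoft capex: {price_share:.0%}")  # ~13%

# Hyperscaler aggregate: the $629bn estimate is cited as up 35% from
# the January 1st estimate, which implies the earlier figure below.
capex_2026 = 629e9
jan_estimate = capex_2026 / 1.35
print(f"Implied January 1st estimate: ${jan_estimate / 1e9:.0f}bn")  # ~$466bn
```

In other words, roughly one dollar in eight of Microsoft's 2026 capex buys no additional compute at all; it simply absorbs supply-chain inflation.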
Remarkably, profit margins are holding steady as the hyperscalers remain hyper-focused on efficiency. But this will become increasingly difficult as past capex accumulates in depreciation expense.
AI is a lot more than GPUs
All this capex is driving picks & shovels investment opportunities. About half the cost of a datacenter is from AI chips, but the other half is driving earnings growth across industries. GPUs get all the attention, but CPUs are the brains behind the operation, assigning tasks, reading and writing to memory and sending data around the server. And both GPUs and CPUs need memory. CPUs store information in NAND, or long-term memory, and use DRAM, or working memory, to perform tasks. For example, HBM, or high bandwidth memory, is a special kind of DRAM used to feed GPUs data fast enough to keep them busy.
It doesn’t stop there. AI, CPU and memory chips are packaged together into AI servers, and thousands of these servers, along with networking, electrical and cooling equipment, make up a data center. The compute buildout is driving demand for these older tech components and creating new ones altogether. As GPUs get more powerful, so too must CPUs, memory and everything else in the system.
AI chips
Rather than forever ceding margins, hyperscalers are trying to cut down on their NVIDIA bill by designing custom chips optimized for their AI systems.3
Microsoft and Amazon reported their latest chips increase tokens per dollar by 30%+ compared to an NVIDIA GPU, and Amazon’s CEO outlined the higher-level impact: “At scale, we expect Trainium will save us tens of billions of dollars of CapEx each year and provide several hundred basis points of operating margin advantage versus relying on other chips for inference.”
Still, NVIDIA isn’t going anywhere anytime soon. ASICs are designed to perform specific tasks more cheaply, but only NVIDIA chips can handle any AI workload. Custom hardware is a safe and efficient bet for known workloads like inference, but there’s obsolescence risk as AI evolves. Moreover, every engineer and every model was trained on NVIDIA’s ecosystem, including CUDA, the software layer between the chips and the code, which provides thousands of prebuilt libraries for splitting workloads among chips, managing memory and debugging.
Today, 26% of hyperscaler capex goes straight to NVIDIA’s bottom line. Their mission to reduce that dependency is driving earnings growth in other parts of the semi industry, as hyperscalers don’t have the capabilities to design or produce custom chips in-house. Analysts estimate the ASICs market will hit $70bn this year and grow at a 30%+ CAGR.4
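To put that dependency in rough dollar terms, one can combine the two figures cited above: the 26% capture rate and the $629bn 2026 capex estimate. This is an extrapolation for illustration only; it assumes the current capture rate holds across next year's spending:

```python
# Rough illustration: hyperscaler capex dollars implied to flow to
# NVIDIA's bottom line, assuming the 26% capture rate cited above
# applies to the full $629bn 2026 capex estimate.
hyperscaler_capex_2026 = 629e9
nvidia_capture_rate = 0.26

implied_dollars = hyperscaler_capex_2026 * nvidia_capture_rate
print(f"Implied flow to NVIDIA's bottom line: ${implied_dollars / 1e9:.0f}bn")  # ~$164bn
```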
Last quarter, earnings in Broadcom’s custom chip business grew 140%, thanks to their hyperscaler and AI model customers. In the words of the CEO, “Let me take a second to emphasize our collaboration with these six customers to develop AI XPUs as deep, strategic, and multi-year. We bring…unmatched technology in service, silicon design, process technology, advanced packaging and networking, to enable each of these customers to achieve optimal performance for their differentiated LLM workloads.”
This quarter, Amazon and Google are taking ASICs even further, announcing plans to ship their custom chips directly to customers. In addition to revenue, producing at scale will bring down supplier costs and spread the fixed R&D expense over more chips.
CPUs are critical for agentic AI workloads
AI models need GPUs, but AI products need CPUs. The training process is computationally heavy but highly repetitive, so the ideal CPU to GPU ratio is around 1:8. But inference, especially agentic inference, is messier. Requests need to be processed, broken into steps, distributed among chips and paired with the right data retrieved from the right memory, all of which the CPU handles. Current AI applications have a CPU to GPU ratio of 1:3 or 1:4, but the ideal ratio for agentic AI workloads could be as high as 7:1.5
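The scale of that shift is easy to miss in ratio form. A small sketch, using the ratios cited above applied to a hypothetical cluster (the 1,000-GPU figure is made up for illustration), shows how CPU demand changes as workloads move from training to agentic inference:

```python
# Illustrative CPU counts for a hypothetical 1,000-GPU cluster under
# the CPU:GPU ratios cited above. The cluster size is an assumption.
cpus_per_gpu = {
    "training (1:8)": 1 / 8,
    "current inference (1:3)": 1 / 3,
    "agentic inference (7:1)": 7,
}

gpus = 1_000  # hypothetical cluster
for workload, ratio in cpus_per_gpu.items():
    print(f"{workload}: {gpus * ratio:,.0f} CPUs")
# training needs ~125 CPUs; agentic inference needs 7,000 -- a ~56x increase
```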
Arm, a UK company that designs and licenses CPU technology to customers, including NVIDIA and several hyperscalers, recently estimated the total addressable market for data center CPUs could grow from $50bn today to $100bn by 2030.6 In addition to traditional suppliers like Intel, NVIDIA and every hyperscaler are also chasing that market with custom CPUs for AI.
Intel shocked markets with 157% y/y EPS growth, driven by CPU demand. The CEO stated, “For the last few years, the story around high-performance computing was almost exclusively about GPU and other accelerators. In recent months, we have seen clear signs that the CPU is reinserting itself as the indispensable foundation of the AI era.”
The CEO of Amazon stated, “AI is commonly seen as a GPU story, but the rise of agentic workloads, real-time reasoning, code generation, reinforcement learning and multi-step task orchestration is driving massive CPU demand as well. As AI systems shift from answering questions to taking actions, and as post-training and inference scale up, the compute required pulls heavily on CPUs”. Similarly, Microsoft announced their Cobalt server CPU is deployed in nearly half their datacenter regions and that they’re significantly increasing supply to meet demand.
Memory is a strategic bottleneck
AI demand is driving an exponential increase in memory prices, with DRAM and NAND up around 300% and 200% y/y.7 It takes years to add manufacturing capacity, and HBM requires even more capacity than traditional DRAM. EPS growth is in the triple digits at each of the three major memory makers, SK Hynix and Samsung in Korea, along with Micron in the U.S. Micron now earns more in a single quarter than it did in any year before 2025.
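Those percentage changes are easy to understate: "up 300%" means prices have quadrupled, not tripled. A quick check using the y/y changes cited above:

```python
# A "+300%" change means prices are 4x year-ago levels (1 + 3.00),
# not 3x. Quick check using the y/y changes cited above.
for chip, pct_change in [("DRAM", 3.00), ("NAND", 2.00)]:
    multiple = 1 + pct_change
    print(f"{chip}: {multiple:.0f}x year-ago contract price")
# DRAM: 4x; NAND: 3x
```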
Before the AI frenzy, memory makers typically signed one-year contracts with customers. This quarter, Micron signed a five-year “strategic customer agreement,” which should improve earnings stability, though perhaps at the expense of maximizing short-term profits.
These agreements will primarily benefit hyperscalers by keeping their capex costs down. As the only companies with the scale and resources to secure capacity, they’re also winning new business. Amazon’s CEO pointed out, “…the change in price and in supply on things like memory is…pushing companies who have on-premises infrastructure into the cloud…the suppliers are prioritizing their very largest customers, which are cloud providers. And so we have seen a number of conversations we've been having with enterprises for many months where it's just been slower in getting the transformation plan to move to the cloud, accelerate rapidly, just because we have a lot more supply than what others have.”
AI isn’t the only thing that needs memory. It’s everywhere in everyday electronics like smartphones, computers, tablets, gaming consoles and smart TVs. These traditional electronics manufacturers are paying higher memory prices too. Some of it’s coming out of margins, but PC prices are already up around 20%, and analysts are expecting the average selling price of a smartphone to increase by 10% in 2026. These higher prices will also mute demand, particularly at the cheaper end of the market, with analysts lowering 2026 forecasts for PC shipments by 9% and smartphones by 11%.8,9
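The price and volume effects cited above partially offset in dollar terms. A back-of-the-envelope sketch for PCs, treating the ~9% forecast cut as a proxy for the unit decline, suggests industry revenue could still rise despite fewer shipments:

```python
# Back-of-the-envelope: combined effect of the cited ~20% PC price
# increase and the ~9% cut to 2026 shipment forecasts on dollar revenue.
# Assumes the forecast cut proxies the actual unit decline.
price_change = 1.20   # prices up ~20%
unit_change = 0.91    # shipments down ~9%

revenue_change = price_change * unit_change - 1
print(f"Implied PC revenue change: {revenue_change:+.0%}")  # roughly +9%
```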
Semicaps are picks & shovels to the picks & shovels
What do NVIDIA’s GPUs and the hyperscalers’ custom CPUs and AI chips have in common? They’re all made by TSMC. While competition at the chip level is increasing, manufacturing is a monopoly. NVIDIA, Broadcom and AMD spend 34% of their COGS at TSMC, up from 31% last year.
Modern AI chips are tens of billions of transistors packed into a piece of silicon roughly the size of a postage stamp. Manufacturing them requires executing hundreds if not thousands of steps with almost zero room for error, including deposition, etching, lithography, cleaning, inspection, metrology, packaging and testing.10 As founder Morris Chang argues, semi manufacturing can only be learned through doing, and that learning needs to be concentrated, repeated and refined over decades.11 Just like models are designed around NVIDIA’s chips, chips are designed around TSMC’s manufacturing.
This quarter, TSMC reported 1Q26 operating margins of 58%, up 10 percentage points from last quarter, compared to 66% at NVIDIA. Management also expects capex to come in at the high end of the $52-$56bn guidance as supply remains tight. This expansion is driving earnings growth even further down the supply chain in the semicap industry:
- Lithography is a monopoly itself. ASML is the only company capable of producing EUV machines, which print the smallest and most complex circuit patterns on advanced chips. EUV used to be necessary only for cutting-edge AI chips, but HBM has become so complex that it too now requires the technology.
ASML’s sales grew 46% y/y, with the newer memory business driving a slight majority. Management summarized, “Both our Memory and Logic customers are responding to this unprecedented demand by increasing capital expenditures and accelerating capacity expansion plans this year and beyond. Those investments are supported by long-term agreements with their own customers. In addition to expanding capacity, both advanced DRAM and Logic customers continue to further adopt...”
- Equipment is advancing. Companies that supply the machines used in chip production are seeing accelerating earnings growth. Tools for deposition, etching and cleaning are advancing along with AI chips, becoming both more critical and more complex.
Lam Research grew EPS by 38% y/y in 1Q26. Management cited “robust growth in investments across all three device segments led by DRAM and leading-edge foundry-logic.” The CEO stated, “All indications are that we are still in the early stages of the AI buildout and end markets are signaling a strong appetite for greater compute and storage capability at both the device and package level.”
- Inspection too is more important. As chips get more complex, the margin for error in manufacturing shrinks, driving demand for metrology and inspection tools. Chip makers are also adding lots of new capacity, and newer fabs require more process control.
KLA’s management is projecting $140bn+ in wafer fab equipment (WFE) spending in 2026, followed by even more in 2027. The CFO highlighted “The growing investment in custom silicon, particularly among hyperscalers developing their own custom chips, have led to a proliferation of new higher-value design starts and increased demand on our customers to deliver performance, volume, and time to market.”
Investment implications
- Over the past two years, semiconductor earnings have been growing at the expense of hyperscalers’. AI capex is putting pressure on hyperscaler margins, but it’s driving earnings growth and investment opportunities down the supply chain.
- Hyperscalers are designing custom AI chips to reduce prices, creating opportunities for design partners, fabs and semicap equipment suppliers.
- AI is driving up demand for memory and CPU chips, creating higher margin products and inflating the cost of traditional electronics like laptops and cell phones.
- Meaningful parts of the semi supply chain are located outside of the U.S. These picks and shovels plays trade at a discount to their U.S. peers and can provide much needed geographic diversification.
- Higher prices are increasing capex, but hyperscalers are also getting more efficient, which should bring down the cost of compute over time.
- Strong earnings growth in the semi industry can only continue if hyperscalers profit from their investments. Semi companies have been out-earning and out-performing, but hyperscalers need to capture more value in the long-run, or else their investments, and consequently semi revenues, will drop.
- The entire ecosystem ultimately rests on monetizing demand from both consumers and corporations.
1 The Magnificent 7 (Mag 7) is a market weighted composite of AAPL, AMZN, GOOGL/GOOG, META, MSFT, NVDA and TSLA. The hyperscalers are a market weighted composite of AMZN, GOOGL/GOOG, META and MSFT.
2 1Q26 EPS numbers were adjusted for AMZN, GOOG/GOOGL and META. GOOG/GOOGL and AMZN recognized unrealized gains from marking up equity investments, and META had a one-time increase in the value of deferred tax assets related to changes in the OBBBA’s treatment of R&D expensing.
3 GPUs and AI ASICs are both chips used for AI workloads, but they are not the same thing. GPUs are flexible, general-purpose AI chips, led by NVIDIA and AMD; ASICs are custom chips built for a narrower set of workloads, often called custom silicon, AI accelerators, XPUs or in-house chips. Major hyperscaler examples include Google’s TPUs, Amazon’s Trainium and Inferentia, Microsoft’s Maia and Meta’s MTIA.
4 J.P. Morgan North America Equity Research. “2026 March Madness: Semiconductors, Semiconductor Capital Equipment & Chip Design Software (EDA).” Harlan Sur, Peter Peng and Mayur Ramdhani. March 27, 2026.
5 J.P. Morgan North American Equity Research. “Hardware & Networking: C1Q26 Preview: AI Premiums Concentrated to Few, Leading Us to Expect Rotation.” Samik Chatterjee, Joseph Cardoso, Manmohanpreet Singh and Marc Vitenzon. April 16, 2026.
6 Arm Investor Relations. Arm Everywhere: Investor Session. March 24, 2026.
7 Data are from inSpectrum Tech. DRAM pricing reflects the contract price for DDR5 64GB RDIMMs, and NAND pricing is the flash contract price for TLC 512GB SSDs.
8 J.P. Morgan Equity Research. “PCs and Servers: AI and CSP general server strength drives components pricing pressure for PCs.” Albert Hung, Gokul Hariharan, Samik Chatterjee and Anthony Leng. January 12, 2026.
9 J.P. Morgan North American Equity Research. “Hardware & Networking: Global Smartphone Model.” Samik Chatterjee, Joseph Cardoso, Manmohanpreet Singh, Marc Vitenzon and Gokul Hariharan. March 9, 2026.
10 Deposition, etching, lithography, cleaning, inspection, packaging, and testing are major steps in making semiconductors. Deposition adds ultra-thin layers of material to a silicon wafer; lithography uses light to print the chip’s circuit pattern; etching removes material where the pattern says it should be removed; cleaning clears away particles and residues between steps; metrology measures whether each layer was built to the right dimensions and alignment; inspection checks for defects and measures whether the pattern was built correctly; packaging connects the finished chip so it can be used in a device; and testing confirms the chip works as designed before shipment.
11 MIT News. “Morris Chang ’52, SM ’53 describes the secrets of semiconductor success.” Peter Dizikes. October 25, 2023.
By Meera Pandit and Katie Korngiebel - May 6, 2026
The firms highlighted above have been selected based on their significance and are shown for illustrative purposes only. They are not recommendations.