Scarcity Alpha — Part 3 — February 2026

High-Bandwidth Memory (HBM)

In the AI era, compute is nothing without memory. Why HBM is the "New Oil" and why SK Hynix, Samsung, and Micron are the only games in town.

Scarcity Alpha 3/13

Shortage Severity Assessment

HBM supply is the single most binding constraint for AI training infrastructure today. Every NVIDIA H100 requires 80 GB of HBM3, every H200 requires 141 GB of HBM3E, and every B200 requires 192 GB of HBM3E. The attach rate between GPUs and HBM is 1:1 — you cannot ship an AI accelerator without it. SK Hynix commands 53% of this market. The bottleneck is not silicon; it is memory.

Severity: 9/10
Expected Duration: Short-Med (1–3 years)

Capacity is ramping fast, but demand is accelerating faster. HBM3E yields remain challenging at 60–70%. HBM4 transition adds further complexity. Supply-demand balance unlikely before 2028.

Confidence Level: Very High

Every AI accelerator shipped requires HBM. The attach rate is mathematically 1:1. NVIDIA, AMD, and Google TPUs all depend on it. There is no substitute.

HBM Revenue (2025E): $28B (up from $16B in 2024)
HBM as % of DRAM Revenue: 35% (was 6% in 2022)
GPU HBM Attach Rate: 100% (no GPU ships without HBM)
Capacity Sold Out: 18+ months (through end of 2027)

What Is HBM and Why It Matters

High-Bandwidth Memory (HBM) is a type of DRAM that stacks multiple memory dies vertically using Through-Silicon Vias (TSVs) — microscopic copper pillars drilled through the silicon wafer that connect the layers. Imagine a skyscraper of memory chips, each floor connected by tiny elevators carrying data at extraordinary speed.

Standard DDR5 RAM in your laptop delivers about 51 GB/s of bandwidth. HBM3E delivers 1,200 GB/s — that is 23 times faster. This is not an incremental improvement; it is a paradigm shift. For AI models with hundreds of billions of parameters, the speed at which you can feed data to the GPU determines everything: training time, inference latency, and total cost of ownership.
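The ratio is simple division; a two-line sketch using the figures above (per-channel DDR5 versus a single HBM3E stack):

```python
# Peak bandwidth figures cited in the text (GB/s).
DDR5_GBPS = 51        # one DDR5 channel, laptop/server class
HBM3E_GBPS = 1_200    # one HBM3E stack

ratio = HBM3E_GBPS / DDR5_GBPS
print(f"HBM3E vs DDR5: ~{ratio:.1f}x the bandwidth")  # ~23.5x
```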

The key insight: without HBM, modern AI does not work. Period. A GPU without sufficient memory bandwidth is like a Ferrari engine connected to a garden hose for fuel delivery. The engine might be extraordinary, but the bottleneck determines the actual output.

The Memory Wall

The "Memory Wall" is the fundamental problem driving the HBM shortage. Over the past decade, AI model sizes have been growing by roughly 5–10x per year (GPT-2 had 1.5B parameters in 2019; GPT-4 is estimated at 1.8T parameters; frontier models in 2026 approach 10T). Meanwhile, memory bandwidth has been growing at roughly 2x per year. The gap is widening exponentially.

This is not a temporary imbalance. It is a structural divergence driven by the laws of physics: moving data consumes more energy than computing on it. As models grow, the ratio of memory-bound operations to compute-bound operations tilts relentlessly toward memory. Every major AI lab — OpenAI, Google DeepMind, Anthropic, Meta FAIR — is memory-bandwidth constrained today.

The Bandwidth Gap in Numbers

Training GPT-4-class models requires moving ~2.5 TB of data per second across the memory subsystem of a single node. A standard DDR5 server delivers 0.4 TB/s. An 8-GPU H100 node with HBM3 delivers 26 TB/s. Without HBM, you would need 65 such DDR5 servers to match the bandwidth of a single 8-GPU node. The physics simply do not work without stacked memory.
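For readers who want to check the arithmetic, a quick sketch of the node-level numbers in this paragraph:

```python
# Node-level bandwidth arithmetic from the paragraph above (all TB/s).
NEED_TBPS = 2.5          # data movement for GPT-4-class training, per node
DDR5_SERVER_TBPS = 0.4   # one standard DDR5 server
H100_NODE_TBPS = 26.0    # 8x H100 with HBM3

servers_needed = H100_NODE_TBPS / DDR5_SERVER_TBPS
print(f"DDR5 servers to match one H100 node: {servers_needed:.0f}")  # 65
print(f"Node headroom over the 2.5 TB/s need: {H100_NODE_TBPS / NEED_TBPS:.1f}x")  # 10.4x
```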

Memory Technology Comparison

Technology | Bandwidth | Capacity/Stack | Cost/GB (Est.) | Layers | Primary Use Case
DDR5 | 51 GB/s | 16–64 GB | $2–4 | 1 | PCs, general servers
GDDR6X | 1,008 GB/s | 16–24 GB | $8–12 | 1 | Gaming GPUs (RTX 4090)
HBM2E | 460 GB/s | 16 GB | $15–20 | 8-Hi | A100, older AI chips
HBM3 | 819 GB/s | 24 GB | $25–30 | 8-Hi | H100 (80 GB total)
HBM3E | 1,200 GB/s | 24–36 GB | $30–40 | 8–12-Hi | H200, B200, MI300X
HBM4 (2026–27) | 1,800+ GB/s | 48 GB | $50–70 (est.) | 12–16-Hi | Next-gen accelerators (R-series)

Source: JEDEC specifications, TrendForce, company disclosures. Costs are approximate.

Understanding Through-Silicon Vias (TSVs)

The magic of HBM is the vertical stacking. Traditional memory sits on a circuit board next to the processor, connected by traces on the PCB. HBM stacks 8 to 16 layers of DRAM dies on top of each other, connected by Through-Silicon Vias (TSVs) — thousands of tiny copper pillars punched through each layer of silicon.

Think of it like replacing a single-story warehouse (DDR5) with a 12-story skyscraper (HBM3E 12-Hi), where each floor is connected by thousands of high-speed elevators. The footprint on the board is identical, but the capacity and throughput are multiplied by the number of floors. The challenge? Each "elevator" (TSV) must be perfectly aligned across all floors — even microscopic misalignment kills the entire stack. This is why yield rates are so critical and why only three companies on Earth can do it.

Supply vs. Demand: The Structural Deficit

The HBM market is growing at an extraordinary rate. Total demand was approximately 15 exabytes (EB) in 2024, is expected to reach 30 EB in 2025, 45 EB in 2026, and could exceed 80 EB by 2028. Supply, however, is constrained by three factors: DRAM wafer allocation (HBM consumes 3–5x more die area per bit than standard DRAM), advanced packaging capacity (TSV and hybrid bonding lines are bottlenecked), and yield rates (12-Hi stacking at HBM3E has yields below 70%).
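The implied growth rate of that demand path can be computed directly from the figures above:

```python
# Implied growth of the demand path cited above (exabytes by year).
demand_eb = {2024: 15, 2025: 30, 2026: 45, 2028: 80}

years = 2028 - 2024
cagr = (demand_eb[2028] / demand_eb[2024]) ** (1 / years) - 1
print(f"Implied demand CAGR 2024-2028: {cagr:.0%}")  # ~52%
```

Demand compounding at roughly 50% per year against supply constrained by wafers, packaging, and yield is the core of the structural-deficit argument.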

The result is a persistent deficit through at least 2027. Even aggressive capacity expansion by all three producers cannot fully close the gap, because every new AI chip generation (B200 → R-series → next-gen) increases the HBM content per accelerator. The treadmill runs faster than the runners.

Source: TrendForce, SK Hynix, Micron Investor Day, Market Watch estimates. EB = Exabytes.

HBM Demand by Application

Application 2024 2025E 2026E 2028E Key Chips
NVIDIA AI GPUs ~65% ~60% ~55% ~50% H100, H200, B200, R-series
AMD AI GPUs ~12% ~15% ~18% ~20% MI300X, MI400
Google TPUs ~10% ~12% ~12% ~12% TPU v5p, Trillium
Custom ASICs + Others ~13% ~13% ~15% ~18% AWS Trainium, MS Maia, etc.

Source: TrendForce, company disclosures. Shares are approximate.

The DRAM Wafer Allocation Problem

Here is the critical constraint most investors miss: every HBM chip produced means fewer standard DRAM chips. HBM uses the same DRAM wafers but consumes 3–5x more die area per gigabyte because of the TSV keep-out zones and smaller die sizes needed for stacking.

When SK Hynix allocates 25% of its DRAM wafer capacity to HBM (up from 5% in 2022), it is removing those wafers from the DDR5/LPDDR5 supply pool. This creates a secondary scarcity effect: tight conventional DRAM supply supports pricing across the entire memory market, not just HBM. It is a rising tide that lifts all boats — for the producers. For buyers, it means higher costs everywhere.
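To see the size of this secondary effect, here is a small sketch; the 3.5x area penalty is an assumed midpoint of the 3–5x range cited above:

```python
# How shifting wafers to HBM shrinks total DRAM bit output.
AREA_PENALTY = 3.5   # assumption: midpoint of the 3-5x die-area-per-bit range

def relative_bit_output(hbm_wafer_share: float) -> float:
    """Total bit output vs. an all-conventional baseline of 1.0."""
    conventional_bits = 1.0 - hbm_wafer_share
    hbm_bits = hbm_wafer_share / AREA_PENALTY
    return conventional_bits + hbm_bits

for share in (0.05, 0.25):  # 2022 vs. current allocation, per the text
    print(f"{share:.0%} of wafers to HBM -> {relative_bit_output(share):.1%} of baseline bits")
```

Moving from 5% to 25% HBM allocation removes roughly 14 points of total bit supply, which is the mechanism behind tighter conventional DRAM pricing.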

The Oligopoly: Three Companies Control Everything

The HBM market is the tightest oligopoly in the semiconductor industry. Only three companies on Earth can produce it: SK Hynix (South Korea), Samsung (South Korea), and Micron (United States). No Chinese company, no Taiwanese company, and no European company has the capability. This is not expected to change before 2030.

The barriers to entry are extreme. Producing HBM requires mastery of three separate technologies: DRAM fabrication (1-alpha / 1-beta node), TSV processing (drilling, filling, and aligning thousands of vias across 8–16 layers), and advanced packaging (hybrid bonding, MR-MUF or TC-NCF underfill). The capital investment required exceeds $15 billion, and the learning curve takes 3–5 years even with unlimited funding.

Producer Comparison

Producer | Market Share | Technology | HBM3E Yield | Key Customer | Pricing Power
SK Hynix (000660.KS) | ~53% | HBM3E 12-Hi (mass prod.); HBM4 sampling H2 2026 | 70–80% | NVIDIA (primary) | Maximum
Samsung (005930.KS) | ~38% | HBM3E 8-Hi (qualified); 12-Hi qual. issues | 50–60% | AMD, Google | Moderate
Micron (MU) | ~9% | HBM3E 12-Hi (ramping); CHIPS Act beneficiary | 65–75% | NVIDIA (secondary) | Growing

Source: TrendForce Q1 2026, company earnings calls, industry checks.

SK Hynix: The Undisputed Leader

18+ months ahead of Samsung on HBM3E qualification with NVIDIA. Pioneered MR-MUF (Mass Reflow Molded Underfill) packaging which gives superior thermal performance. First to deliver HBM3E 12-Hi in volume. Already sampling HBM4 for late 2026. Operating margins on HBM exceed 50%.

Samsung: The Struggling Giant

Despite being the world's largest DRAM maker, Samsung has repeatedly failed NVIDIA's HBM3E qualification tests. Their TC-NCF (Thermo-Compression Non-Conductive Film) packaging has yield issues at 12-Hi stacking. Recently qualified for AMD and Google, but NVIDIA — the largest customer — remains elusive.

Micron: The US Champion

Only US-based producer, making it a strategic national asset. Qualified for HBM3E with NVIDIA. Rapid share gains from 5% to 9% in 18 months. CHIPS Act funding ($6.1B) de-risks capacity expansion. Small share means massive upside potential. Revenue from HBM growing 5x YoY.

Trade Setups

MU (Micron Technology) — The US Champion

Primary Pick

Thesis: Micron is the only US-based memory manufacturer, making it a strategic national asset in the era of reshoring. Qualified for HBM3E with NVIDIA. CHIPS Act funding ($6.1B) de-risks its expansion capex. HBM revenue is growing 5x YoY and is now Micron's highest-margin product. The market still prices MU as a cyclical DRAM company; it is becoming an AI infrastructure play.

Entry Zone: $95–105 (EMA 50 support)
Stop Loss: $82 (below 200-day EMA)
Target 1: $130 (prior resistance)
Target 2: $160 (ATH retest)
R:R Ratio: 1:2.8 (risk $18, reward $50)
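The card's ratios can be reproduced with a few lines; entry is assumed at the $100 midpoint of the zone, which is an illustrative fill, not a stated one:

```python
# Risk:reward arithmetic for the MU setup above.
entry, stop = 100.0, 82.0   # midpoint entry (assumption), stop loss
risk = entry - stop         # $18, matching the card

for name, target in (("Target 1", 130.0), ("Target 2", 160.0)):
    print(f"{name}: 1:{(target - entry) / risk:.2f}")

# The card's 1:2.8 corresponds to its blended $50 reward (50 / 18 ~ 2.78),
# i.e. an exit between the two targets.
print(f"Blended: 1:{50.0 / risk:.1f}")  # 1:2.8
```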

SK Hynix (000660.KS) — The Market Leader

Core Holding via ETF

Thesis: Undisputed HBM leader with 53% share and an 18-month technology lead. Primary NVIDIA supplier. Operating margins on HBM exceed 50%. The challenge for Western investors: SK Hynix trades on the Korea Exchange (KRX). Access via OTC ADR or Korea-focused ETFs. The iShares MSCI South Korea ETF (EWY) carries roughly half its weight in semiconductors, with SK Hynix as its largest holding at ~28%.

Entry (EWY): $56–60 (ETF proxy entry)
Stop Loss: $50 (below 200-day EMA)
Target 1: $72 (prior highs)
Target 2: $85 (2021 ATH zone)
R:R Ratio: 1:2.5 (risk $8, reward $20)

Samsung (005930.KS) — The Turnaround Bet

Speculative

Thesis: Samsung is the contrarian bet. If they solve their HBM3E 12-Hi yield issues and pass NVIDIA qualification, the stock re-rates massively. Samsung trades at a deep discount to its sum-of-parts. Access via EWY (where Samsung is ~20% weight alongside SK Hynix). This is a higher-risk, higher-reward play for those who believe Samsung's R&D machine will eventually catch up.

Note: We do not set formal entry/stop/target levels for Samsung as a standalone trade due to limited direct access for Western investors. The EWY ETF (above) provides combined SK Hynix + Samsung exposure. For direct Samsung access, consider the Samsung Electronics GDR on the London Stock Exchange (SMSN.L).

Why MU Is the Best Pure-Play HBM Trade

SK Hynix is the better company in HBM, but Micron is the better trade. Why? Three reasons:

1. Liquidity: MU trades on the Nasdaq with $2B+ daily volume. SK Hynix trades on the KRX with limited OTC access for Western investors. Options markets for MU are deep and liquid.

2. Operating leverage: MU has 9% HBM market share growing to 15%+. Every percentage point of share gain is transformative for the P&L. SK Hynix at 53% has less room to surprise to the upside.

3. National security premium: The CHIPS Act, export controls on China, and the push to reshore critical manufacturing all favor the only US-based memory company. This creates a valuation floor that Korean producers do not have.
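A rough sense of that operating leverage, using the $28B 2025E market size from the opening section (a back-of-the-envelope sketch that ignores pricing mix):

```python
# What a point of HBM market share is worth, using the $28B 2025E market
# size cited earlier in this piece.
MARKET_2025E_B = 28.0

def hbm_revenue_b(share_pct: float) -> float:
    """Implied annual HBM revenue ($B) at a given market share."""
    return MARKET_2025E_B * share_pct / 100

print(f"MU at 9% share:  ${hbm_revenue_b(9):.1f}B")   # ~$2.5B
print(f"MU at 15% share: ${hbm_revenue_b(15):.1f}B")  # ~$4.2B
```

On these assumptions, the 9% to 15% share path roughly two-thirds again Micron's HBM revenue before any market growth is counted.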

Reinforcement Signals

  • HBM spot prices rising >5% month-over-month
  • Micron raises HBM revenue guidance above $12B for FY2026
  • Samsung fails another NVIDIA HBM3E qualification round
  • NVIDIA announces HBM content increase for R-series (256+ GB)

Annulation Signals

  • Major hyperscaler cuts AI capex guidance by >15%
  • Samsung passes NVIDIA HBM3E qual., triggering price war
  • DRAM oversupply in conventional segments crushes margins
  • CXL memory adoption accelerates, reducing HBM necessity

Timing & Position Sizing

Horizon: Medium-term (6–18 months). The HBM shortage is structural but capacity is being added. The sweet spot is the next 4–6 quarters before the supply/demand gap begins to narrow.

Key catalysts: Micron earnings (March 2026), SK Hynix earnings (April 2026), NVIDIA GTC announcements, Samsung HBM qualification updates.

Sizing: MU = 3–5% of portfolio (core). EWY = 2–3% (satellite). Total memory exposure should not exceed 8%. Beta to SOXX is ~1.3x.

Scaling in: Enter 50% at initial entry zone. Add 25% on pullback to lower entry bound. Reserve 25% for post-earnings confirmation. Never chase — the DRAM cycle rewards patience.
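The scaling plan can be expressed as a simple tranche split; the 4% and 2.5% position sizes below are assumed midpoints of the stated ranges:

```python
# Sketch of the scaling-in plan above: 50% at initial entry, 25% on a
# pullback, 25% after earnings confirmation, within the stated sizing caps.
def tranches(full_position_pct: float) -> dict:
    """Split a full position (% of portfolio) into the three stated tranches."""
    return {
        "initial entry": 0.50 * full_position_pct,
        "pullback add": 0.25 * full_position_pct,
        "post-earnings": 0.25 * full_position_pct,
    }

mu_pct, ewy_pct = 4.0, 2.5      # assumed midpoints of the 3-5% and 2-3% ranges
assert mu_pct + ewy_pct <= 8.0  # total memory exposure cap from the text

print(tranches(mu_pct))  # {'initial entry': 2.0, 'pullback add': 1.0, 'post-earnings': 1.0}
```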

Risk Analysis

Memory is the most cyclical sector in semiconductors. HBM's structural demand story is real, but it exists within the broader DRAM cycle, which has historically oscillated between feast and famine. Understanding the cycle is essential to timing both entry and exit.

The DRAM Cycle

DRAM has a well-documented 3–4 year boom-bust cycle driven by capacity additions. When all three producers expand simultaneously, oversupply crashes prices. HBM may be partially insulated (contracted volumes, not spot), but conventional DRAM weakness drags overall margins. Risk: moderate.

Technology Leapfrog

Each HBM generation requires a new packaging architecture. HBM4 moves to a fundamentally different design (logic base die). A producer that nails the HBM4 transition could leapfrog the current leader. Samsung's R&D budget ($24B/yr) should not be underestimated. Risk: low-moderate.

China Memory Push (CXMT)

ChangXin Memory Technologies (CXMT) is China's primary DRAM hopeful. Currently producing DDR4/DDR5 at small scale. HBM requires advanced packaging that China lacks (no access to key equipment from Besi, ASM Pacific). Realistic HBM capability: 2030+. Risk: low (long-term threat).

CXL / Alternative Memory

Compute Express Link (CXL) memory enables disaggregated memory pools that could supplement (not replace) HBM. CXL adds capacity but not bandwidth. For AI training, bandwidth is king. CXL is more relevant for inference and general computing. Risk: low for training, moderate for inference.

DRAM Pricing Cycle: Historical Context

Source: DRAMeXchange, TrendForce. Contract prices (8 Gb DDR4/DDR5 equivalent). Indexed to show cyclical nature.

Reading the DRAM Cycle

The DRAM cycle is one of the most predictable patterns in technology investing. It works like this: high prices incentivize all three producers to expand capacity simultaneously. After 18–24 months of construction, new capacity comes online. Supply floods the market. Prices crash 40–60%. Producers cut capex. Supply tightens. Prices recover. Repeat.

The key difference in 2025–2026: HBM is under long-term contracts, not spot pricing. NVIDIA, AMD, and Google sign 1–2 year purchase agreements with fixed pricing. This insulates HBM revenue from the spot-driven conventional DRAM cycle. However, the conventional DRAM cycle still affects overall company margins, stock sentiment, and the wafer allocation decisions that determine HBM supply.

Validation & Invalidation Framework

Our thesis is that HBM shortage severity (9/10) will persist through 2027, benefiting MU and SK Hynix. Here is our scorecard.

Bullish Validation (Thesis Intact)

  • HBM capacity remains sold out 12+ months forward
  • Micron HBM3E yield surpasses 70% and revenue exceeds $10B FY2026
  • NVIDIA next-gen accelerators increase HBM per chip (>192 GB)
  • Samsung continues to fail NVIDIA HBM3E 12-Hi qualification
  • Hyperscaler AI capex guidance maintained or raised (MSFT, GOOG, META, AMZN)
  • HBM contract prices stable or rising QoQ

Bearish Invalidation (Exit or Reduce)

  • Conventional DRAM oversupply crashes contract prices >30%
  • Samsung passes NVIDIA qualification and launches price war
  • Major hyperscaler pauses or cuts AI capex by >15%
  • Alternative architectures (CXL, processing-in-memory) gain real traction
  • China achieves HBM production at scale (CXMT breakthrough)
  • MU fails to maintain NVIDIA qualification or loses share

Catalyst Calendar

Date | Event | Significance | Impact
Mar 2026 | Micron Q2 FY2026 Earnings | HBM revenue update, margin trajectory, capacity commentary | High
Mar 2026 | NVIDIA GTC Conference | Next-gen architecture details, HBM content per chip | High
Apr 2026 | Samsung Tech Day | HBM roadmap update, yield improvement progress, NVIDIA qual. status | Medium
Apr 2026 | SK Hynix Q1 2026 Earnings | HBM market share data, HBM4 timeline, pricing trends | High
H2 2026 | HBM4 Mass Production Start | Who leads the HBM4 transition determines the next 2 years of share | Critical

Key Takeaways

1. HBM is the binding constraint for AI infrastructure. Every GPU, TPU, and custom ASIC requires it. No substitute exists. The shortage severity is 9/10.

2. It is a 3-player oligopoly with extreme barriers to entry. SK Hynix leads. Samsung struggles. Micron is catching up. No new entrant is possible before 2030.

3. Micron (MU) is the best pure-play trade for Western investors. US-listed, liquid options, rising HBM share, CHIPS Act tailwind. Entry zone: $95–105. R:R 1:2.8.

4. Watch the DRAM cycle and Samsung's qualification status. These are the two variables that determine whether the trade works or needs adjustment.

HBM Revenue: An Explosive Growth Market

HBM revenue has gone from a niche segment ($2B in 2022) to the single most profitable product in the semiconductor memory industry. The trajectory is extraordinary: revenue is expected to grow from $16B in 2024 to $28B in 2025 and potentially $45B+ by 2027. HBM now represents over 30% of total DRAM industry revenue while consuming only ~10% of wafer capacity — a testament to its extraordinary pricing power.

Source: TrendForce, SK Hynix, Micron, Samsung earnings reports. 2026E+ are Market Watch estimates.

HBM ASP Premium vs DDR5: 5–8x per gigabyte
SK Hynix HBM Gross Margin: >50% (vs ~35% for conventional DRAM)
HBM CAGR (2024–2028): ~38% revenue compound growth
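A quick cross-check of the ~38% CAGR against the revenue figures quoted elsewhere in this piece (all values in $B; small gaps are expected since the stated path is not a smooth curve):

```python
# Cross-checking the ~38% CAGR against the revenue path in the text ($B).
base_2024 = 16.0
cagr = 0.38

implied = {y: base_2024 * (1 + cagr) ** (y - 2024) for y in (2025, 2026, 2027, 2028)}
for year, rev in implied.items():
    print(f"{year}: ~${rev:.0f}B implied")
# 2025 implies ~$22B vs. the $28B estimate above: the near-term ramp runs
# ahead of the smooth CAGR, while 2027's ~$42B sits near the '$45B+' figure.
```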

Why HBM Margins Are So High

Traditional DRAM is a commodity — fungible, interchangeable, and priced on spot markets. HBM is the opposite: each customer (NVIDIA, AMD, Google) requires custom qualification, custom testing, and specific packaging configurations. This turns HBM into a differentiated product with pricing power that conventional DRAM has never enjoyed.

Moreover, HBM is sold under long-term contracts (1–2 years) with fixed pricing, not on the volatile spot market. Customers are willing to pay a significant premium for guaranteed supply. When your $40,000 GPU cannot ship without $3,000 of HBM, you do not negotiate hard on the memory price — you negotiate hard on the allocation.
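A back-of-the-envelope HBM bill-of-materials per accelerator, combining the capacity figures from the opening section with the approximate $/GB ranges from the comparison table (illustrative only; actual contract pricing is negotiated):

```python
# Rough HBM bill-of-materials per accelerator. Capacities are from the
# opening section; $/GB ranges are the approximate figures in the
# comparison table, not vendor pricing.
GB_PER_CHIP = {"H100": (80, "HBM3"), "H200": (141, "HBM3E"), "B200": (192, "HBM3E")}
COST_PER_GB = {"HBM3": (25, 30), "HBM3E": (30, 40)}

for chip, (gb, generation) in GB_PER_CHIP.items():
    lo, hi = COST_PER_GB[generation]
    print(f"{chip}: ${gb * lo:,}-${gb * hi:,} of HBM per accelerator")
# H100 lands around $2,000-$2,400, the same order of magnitude as the
# '$3,000 of HBM' figure above; B200-class parts carry roughly 3x that.
```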

Disclaimer: This analysis is for educational and informational purposes only. It does not constitute financial advice, investment recommendations, or a solicitation to buy or sell any security. All trade setups are hypothetical and based on technical and fundamental analysis as of February 2026. Past performance is not indicative of future results. Always conduct your own due diligence and consult with a licensed financial advisor before making investment decisions. Market Watch and its authors may hold positions in securities discussed.


