GPU Share Capture Premium

$70/share (35% of AMD's equity): the speculative bet on AMD as #2 in AI GPUs

This is the single most important component of AMD's stock price. The market prices an 80% probability that AMD captures meaningful GPU share from NVIDIA's ~75%. If you strip out the Data Center base case and Legacy segments, AMD would trade at roughly $130/share — the remaining $70 is pure GPU share capture optionality. The premium is anchored by two transformational deals: OpenAI (6 GW) and Meta (6 GW), representing combined potential revenue exceeding $200B over 4-5 years.
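The sum-of-parts arithmetic above can be sketched directly. The $130 base and $70 premium are the document's figures; backing out a full-success value from the 80% probability is an illustrative decomposition, not the author's exact model.

```python
# Sum-of-parts sketch using the figures cited above (illustrative only).
base_case = 130.0    # $/share: Data Center base case + Legacy segments
gpu_premium = 70.0   # $/share: GPU share-capture optionality

implied_price = base_case + gpu_premium
print(f"Implied price: ${implied_price:.0f}/share")  # $200/share

# If the market assigns ~80% odds to meaningful share capture, the
# $70 premium backs out an implied full-success value:
prob_success = 0.80
full_success_value = gpu_premium / prob_success
print(f"Implied full-success premium: ${full_success_value:.1f}/share")  # $87.5/share
```

Read this as a framing device: the lower the market's implied probability, the larger the payoff the premium must assume in the success case.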

- 6 GW OpenAI deal (Oct 2025): 160M-share warrant; first 1 GW binding
- 6 GW Meta deal (Feb 2026): identical structure, through 2031
- ~5-8% current GPU share, doubling toward 10% by mid-2026
- ~20% warrant dilution: 320M shares at $0.01 if deals fully vest

The structural dynamic is clear: hyperscalers want to reduce NVIDIA dependency, and AMD is the only general-purpose GPU alternative. However, AMD is squeezed between NVIDIA's CUDA moat above (19 years, 6M+ developers) and custom ASICs below (Google TPU, AWS Trainium, Broadcom XPUs offering 40-65% TCO advantage for inference). OpenAI simultaneously signed a 10 GW deal with Broadcom for custom ASICs — AMD is one supplier in a multi-vendor strategy, not the sole alternative.

Why this is speculative

Only the first 1 GW of each deal is binding. The remaining 5 GW per deal is milestone-dependent: if MI450/Helios underperforms, OpenAI and Meta can shift to NVIDIA or custom ASICs without penalty. The entire $70/share premium depends on execution in H2 2026 and beyond.

The key question

Will OpenAI's Titan custom ASIC (with Broadcom) reduce the long-term value of the MI450 deal, or do they serve different workloads?

Scenario Model: $70/share

6 GW OpenAI deal size (Oct 2025): potential $90-100B+ revenue; first 1 GW binding
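A quick revenue-per-GW check ties the deal figures together. The 6 GW and $90-100B+ numbers are from the text; the per-GW math is an illustrative back-of-envelope, not disclosed deal economics.

```python
# Revenue-per-GW check on the figures cited above (illustrative only).
openai_gw = 6
rev_low, rev_high = 90e9, 100e9   # $90-100B+ potential OpenAI revenue

per_gw_low = rev_low / openai_gw    # $15.0B per GW
per_gw_high = rev_high / openai_gw  # ~$16.7B per GW

# Scaling to the combined 12 GW (OpenAI 6 GW + Meta 6 GW) lands near
# the ~$200B combined potential the thesis cites:
combined_low = per_gw_low * 12      # $180B
combined_high = per_gw_high * 12    # $200B
print(f"${per_gw_low/1e9:.1f}B to ${per_gw_high/1e9:.1f}B per GW")
```

The consistency check matters: if realized revenue per GW comes in materially below this range, the combined $200B figure, and the premium built on it, compresses proportionally.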

The AMD-OpenAI partnership is the single most important catalyst in AMD's investment thesis. The world's leading AI company chose AMD for its next-generation infrastructure, validating AMD as a credible #2 GPU vendor at the highest level. The MI450 is co-engineered with OpenAI input, making this a deep technical partnership, not just a procurement deal. Meta signed an identical structure one month later, confirming the pattern.

- 1 GW binding commitment (H2 2026); remaining 5 GW milestone-dependent
- 160M-share warrant: ~10% dilution at $0.01/share; final tranche vests at $600
- 6 GW Meta deal (Feb 2026): identical structure, through 2031
- 320M shares combined dilution: ~20% if fully vested
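The dilution figures above can be reproduced with simple share math. The 320M warrant shares and ~20% figure come from the text; the ~1.62B current share count is an assumed input for illustration.

```python
# Warrant dilution sketch. ASSUMPTION: ~1.62B current shares outstanding;
# the 320M warrant shares (160M OpenAI + 160M Meta, struck at $0.01)
# are the document's figures.
shares_outstanding = 1.62e9   # assumed current AMD share count
warrant_shares = 320e6        # combined warrants if both deals fully vest

dilution_vs_current = warrant_shares / shares_outstanding               # ~19.8%
dilution_of_post = warrant_shares / (shares_outstanding + warrant_shares)  # ~16.5%
print(f"{dilution_vs_current:.1%} of current shares")
print(f"{dilution_of_post:.1%} of the post-dilution share count")
```

Note the two conventions: the ~20% headline is dilution measured against the current float; measured against the enlarged post-vesting share count it is closer to 16-17%.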

The critical nuance: OpenAI simultaneously signed a 10 GW deal with Broadcom for custom 'Titan' ASICs. AMD is one supplier in OpenAI's multi-vendor strategy (NVIDIA + AMD + custom ASIC), not the sole NVIDIA replacement. And only the first 1 GW is binding — the remaining 5 GW depends on MI450 execution meeting OpenAI's performance benchmarks.

The binding vs optional question

Only the first 1 GW per deal is binding. If MI450/Helios benchmarks disappoint or Vera Rubin outperforms, OpenAI and Meta can shift remaining budget to NVIDIA or custom ASICs without penalty. The deal is simultaneously the strongest bull evidence and the biggest execution risk.

ROCm vs CUDA Gap: 10-30%

Behind CUDA on compute-intensive training; narrowing for inference

ROCm is the software gatekeeper to AMD's GPU ambitions. ROCm 7.0 delivered a 3.5x inference performance improvement over v6, and MI355X hardware benchmarks show competitive or better performance than NVIDIA B200 on specific workloads. But the CUDA gap remains real for training: 10-30% behind on compute-intensive workloads, requiring more manual optimization. The critical question is whether hardware can permanently compensate for software gaps.

- 3.5x ROCm 7.0 gains: inference improvement vs v6
- 8 of 10 top AI companies on AMD, running production workloads
- 100% of Meta Llama 405B live inference runs on MI300X
- +340% YoY JAX job postings vs CUDA's +12%; the ecosystem is shifting

Open-source compilers are gradually eroding CUDA's lock-in. Triton (OpenAI) enables hardware-agnostic development, PyTorch 2.0 torch.compile reduces CUDA-specific needs, and JAX supports AMD GPUs natively. However, NVIDIA is responding — CUDA Tile IR open-sourcing incorporates MLIR/LLVM, potentially making it harder for AMD to differentiate. The ecosystem battle is far from won.

Open questions

- Can AMD win a third hyperscaler deal (Google, AWS, or Microsoft) to prove GPU adoption is a broad trend, not a two-customer story?
- At what GPU market share does AMD achieve enough software ecosystem momentum for a self-sustaining flywheel?
- Does the warrant dilution (potentially 20%) offset the revenue value of the deals, especially if AMD stock doesn't reach $600?