This is the single most important component of AMD's stock price. The market prices in an 80% probability that AMD captures meaningful share of the GPU market, where NVIDIA holds roughly 75%. Strip out the Data Center base case and Legacy segments and AMD would trade at roughly $130/share; the remaining $70 is pure GPU share-capture optionality. The premium is anchored by two transformational deals, OpenAI (6 GW) and Meta (6 GW), representing combined potential revenue exceeding $200B over 4-5 years.
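The arithmetic behind that premium can be sketched as a probability-weighted sum of parts. The figures come from the text above; the implied market price (~$200/share) and the full-success payoff are derived values, not stated ones:

```python
# Probability-weighted sum-of-parts sketch (illustrative, not a valuation model).
base_value = 130.0      # $/share: Data Center base case + Legacy segments
market_price = 200.0    # $/share: base plus the embedded GPU-share premium (implied)
p_capture = 0.80        # market-implied probability of meaningful share capture

premium = market_price - base_value        # $70/share of pure optionality
full_success_value = premium / p_capture   # implied payoff if capture fully plays out

print(f"Embedded premium:      ${premium:.0f}/share")
print(f"Implied success payoff: ${full_success_value:.1f}/share")
```

Read in reverse: for a $70 premium to be consistent with 80% odds, the market must believe full share capture is worth roughly $87.5/share on top of the base case.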
The structural dynamic is clear: hyperscalers want to reduce their NVIDIA dependency, and AMD is the only general-purpose GPU alternative. However, AMD is squeezed between NVIDIA's CUDA moat above (19 years of ecosystem, 6M+ developers) and custom ASICs below (Google TPU, AWS Trainium, and Broadcom XPUs, which offer a 40-65% TCO advantage for inference). OpenAI simultaneously signed a 10 GW deal with Broadcom for custom ASICs; AMD is one supplier in a multi-vendor strategy, not the sole alternative.
Why this is speculative
Only the first 1 GW of each deal is binding. The remaining 5 GW per deal is milestone-dependent: if MI450/Helios underperforms, OpenAI and Meta can shift to NVIDIA or custom ASICs without penalty. The entire $70/share premium depends on execution in H2 2026 and beyond.
Will OpenAI's Titan custom ASIC (with Broadcom) reduce the long-term value of the MI450 deal, or do they serve different workloads?
The AMD-OpenAI partnership is the single most important catalyst in AMD's investment thesis. The world's leading AI company chose AMD for its next-generation infrastructure, validating AMD as a credible #2 GPU vendor at the highest level. The MI450 is co-engineered with OpenAI input, making this a deep technical partnership, not just a procurement deal. Meta signed an identical structure one month later, confirming the pattern.
The critical nuance: OpenAI simultaneously signed a 10 GW deal with Broadcom for custom 'Titan' ASICs. AMD is one supplier in OpenAI's multi-vendor strategy (NVIDIA + AMD + custom ASIC), not the sole NVIDIA replacement. And only the first 1 GW is binding — the remaining 5 GW depends on MI450 execution meeting OpenAI's performance benchmarks.
The binding vs optional question
Only the first 1 GW of each deal is binding. If MI450/Helios benchmarks disappoint, or if NVIDIA's Vera Rubin outperforms, OpenAI and Meta can shift the remaining budget to NVIDIA or custom ASICs without penalty. The deals are simultaneously the strongest bull evidence and the biggest execution risk.
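In rough terms, the binding share of the announced capacity can be worked out directly. The per-GW revenue here is a back-of-envelope figure implied by the ~$200B combined total, not a contract term:

```python
# How much of the announced capacity is actually contractual (rough sketch).
deals = {"OpenAI": 6, "Meta": 6}   # GW announced per deal
binding_per_deal = 1               # GW: only the first tranche is binding

total_gw = sum(deals.values())              # 12 GW announced
binding_gw = binding_per_deal * len(deals)  # 2 GW contractual
rev_per_gw = 200 / total_gw                 # ~$16.7B per GW (implied, $B)

binding_share = binding_gw / total_gw       # ~17% of announced capacity
contracted_rev = binding_gw * rev_per_gw    # ~$33B secured vs $200B headline

print(f"Binding share of capacity: {binding_share:.0%}")
print(f"Revenue secured vs announced: ${contracted_rev:.0f}B of $200B")
```

On these assumptions, roughly five-sixths of the headline revenue is contingent on milestones, which is why the premium is framed as optionality rather than backlog.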
ROCm is the software gatekeeper to AMD's GPU ambitions. ROCm 7.0 delivered a 3.5x inference performance improvement over v6, and MI355X hardware benchmarks show competitive or better performance than NVIDIA's B200 on specific workloads. But the CUDA gap remains real for training: 10-30% behind on compute-intensive workloads, with more manual optimization required. The critical question is whether hardware advantages can permanently compensate for software gaps.
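To make "hardware compensating for software" concrete, a back-of-envelope sketch: effective training cost is price divided by relative throughput. The 10-30% software gap is from the text above; the AMD price discounts are hypothetical placeholders, not stated figures:

```python
# Can a hardware price advantage offset a software-driven throughput gap?
# Effective cost per unit of training work = price / relative_throughput.
nvidia_price = 1.00           # normalized GPU price
software_gaps = (0.10, 0.30)  # ROCm 10-30% behind CUDA on training
discounts = (0.20, 0.30)      # hypothetical AMD price discounts

results = {}
for discount in discounts:
    amd_price = nvidia_price * (1 - discount)
    for gap in software_gaps:
        throughput = 1 - gap  # AMD throughput relative to NVIDIA
        effective = amd_price / throughput
        results[(discount, gap)] = effective
        verdict = "wins" if effective < nvidia_price else "loses"
        print(f"discount {discount:.0%}, gap {gap:.0%}: "
              f"effective cost {effective:.2f} -> AMD {verdict}")
```

Under these placeholder numbers, a 20% price discount beats a 10% software gap but loses to a 30% gap, which is why closing the training gap matters more than pricing.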
Open-source compilers are gradually eroding CUDA's lock-in: Triton (from OpenAI) enables hardware-agnostic kernel development, PyTorch 2.0's torch.compile reduces the need for CUDA-specific code, and JAX supports AMD GPUs natively. NVIDIA is responding, however: by open-sourcing CUDA Tile IR and building on MLIR/LLVM, it may make it harder for AMD to differentiate. The ecosystem battle is far from won.