AMD's GPU competition with NVIDIA is the central investment question. AMD's competitive strategy focuses on inference economics: 25-40% lower cost per token, more HBM per GPU enabling larger-model serving, and open-standard networking via UEC (Ultra Ethernet Consortium). The MI355X outperforms NVIDIA's B200 by 20-30% on specific inference tasks but cannot compete with the GB200 NVL72 for rack-scale training. The MI450 (Q3 2026, TSMC 2nm), co-engineered with OpenAI, represents AMD's most important product launch.
NVIDIA's Vera Rubin NVL72 (H2 2026) promises a 10x inference cost reduction over Blackwell, which would potentially eliminate AMD's cost advantage. Meanwhile, custom ASICs (Google TPU, AWS Trainium, Broadcom XPUs) squeeze AMD from below, claiming a 40-65% TCO advantage for inference workloads. AMD must execute on MI450 while navigating this two-front competition.
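To see why a generational 10x reduction would swamp a 25-40% per-token advantage, consider a back-of-the-envelope sketch. All numbers below are illustrative assumptions derived from the vendor claims above, not measured benchmarks:

```python
# Illustrative arithmetic only: normalizes Blackwell-era cost per token
# to 1.0 and applies the vendor-claimed deltas from the text above.

def cost_per_token(baseline, reduction_factor=1.0, discount=0.0):
    """Cost per token after a generational reduction and a pricing discount."""
    return baseline * (1 - discount) / reduction_factor

# NVIDIA Blackwell baseline, normalized.
nvidia_blackwell = cost_per_token(1.0)

# AMD MI355X: midpoint of the claimed 25-40% cost-per-token advantage.
amd_mi355x = cost_per_token(1.0, discount=0.325)

# NVIDIA Vera Rubin: the claimed 10x generational reduction.
nvidia_rubin = cost_per_token(1.0, reduction_factor=10)

print(f"Blackwell: {nvidia_blackwell:.3f}")  # 1.000
print(f"MI355X:    {amd_mi355x:.3f}")        # 0.675
print(f"Rubin:     {nvidia_rubin:.3f}")      # 0.100
```

The takeaway: if Rubin delivers anywhere near 10x, MI450 needs its own multi-fold generational gain over MI355X, not merely a pricing discount, to preserve AMD's inference-economics pitch.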
The $70/share question
Can MI450 match or exceed Vera Rubin on inference cost-per-token? This is the single most important competitive benchmark for H2 2026, and it determines whether the OpenAI/Meta deals fully convert or whether AMD remains a niche inference provider.