AMD's Data Center segment is the company's growth engine, combining two distinct businesses: EPYC server CPUs (the stable cash-flow anchor) and Instinct AI GPUs (the high-growth, high-variance bet). Q4 FY2025 marked a milestone — GPU revenue surpassed CPU revenue within the segment for the first time. The segment's non-GAAP operating margin reached 32.5% in Q4, though GAAP margins remain suppressed by Xilinx acquisition amortization.
The bull case rests on MI450 execution and the OpenAI/Meta mega-deals converting. Lisa Su targets over 60% Data Center revenue CAGR and $100B+ in annual Data Center revenue by 2030. The bear case is that NVIDIA's Vera Rubin platform closes AMD's inference cost advantage, ROCm never reaches training parity with CUDA, and custom ASICs squeeze AMD from below. Year-over-year segment growth has already decelerated from 94% to 32% in the span of a year.
Why this is anchored, not speculative
The Data Center segment has real, growing revenue — $16.6B in FY2025 and analyst consensus of $22.9B for FY2026. EPYC server CPUs provide a predictable base with 41% server revenue share. The speculative element (GPU share capture beyond current levels) is modeled separately.
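A quick sanity check on the growth math, using only the figures cited in this note ($16.6B FY2025 base, $22.9B FY2026 consensus, and management's $100B-by-2030 target); treating FY2025 as year zero of the five-year window is my assumption:

```python
# Sanity-check the Data Center growth figures cited in the text.
fy2025 = 16.6            # $B, reported FY2025 segment revenue
fy2026_consensus = 22.9  # $B, analyst consensus for FY2026

# Growth implied by the FY2026 consensus.
implied_growth = fy2026_consensus / fy2025 - 1
print(f"Implied FY2026 growth: {implied_growth:.1%}")  # ~38.0%

# CAGR required to reach $100B by 2030 (five years from FY2025).
required_cagr = (100 / fy2025) ** (1 / 5) - 1
print(f"CAGR needed for $100B by 2030: {required_cagr:.1%}")  # ~43.2%

# Revenue implied by the 60% CAGR target over the same window.
implied_2030 = fy2025 * 1.60 ** 5
print(f"Revenue at 60% CAGR by 2030: ${implied_2030:.0f}B")  # ~$174B
```

Note that compounding at 60% overshoots $100B by a wide margin, so the $100B figure is best read as a floor under the CAGR guidance, not its midpoint.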
What is the exact breakdown of AMD Data Center revenue between EPYC CPUs and Instinct GPUs? AMD does not disclose this.
AMD's GPU competition with NVIDIA is the central investment question. AMD's competitive strategy focuses on inference economics: 25-40% lower cost per token, more HBM per GPU enabling larger models to be served, and open-standard networking via the Ultra Ethernet Consortium (UEC). The MI355X outperforms NVIDIA's B200 by 20-30% on specific inference tasks but cannot compete with the GB200 NVL72 for training at rack scale. The MI450 (Q3 2026, TSMC 2nm), co-engineered with OpenAI, is AMD's most important product launch to date.
NVIDIA's Vera Rubin NVL72 (H2 2026) is pitched as delivering a 10x inference cost reduction versus Blackwell, which would largely eliminate AMD's cost advantage. Meanwhile, custom ASICs (Google TPU, AWS Trainium, Broadcom XPUs) squeeze AMD from below, with a claimed 40-65% TCO advantage for inference workloads. AMD must execute on MI450 while navigating this two-front competition.
The $70/share question
Can MI450 match or exceed Vera Rubin on inference cost-per-token? This is the single most important competitive benchmark for H2 2026, and it determines whether the OpenAI/Meta deals fully convert or whether AMD remains a niche inference provider.
AMD's Data Center financial profile reflects a business in transition. GAAP margins are suppressed by approximately $2.3B per year of Xilinx intangible amortization, creating a persistent gap between GAAP (21.7% for FY2025) and non-GAAP (32.5% in Q4) performance. The GPU/CPU revenue mix is shifting rapidly: Q4 FY2025 was the first quarter in which Instinct GPUs generated more revenue than EPYC CPUs within the segment. At the company level, AMD generated $5.5B of free cash flow on $34.6B of revenue, with $8.1B invested in R&D.
The key tension is between AMD's aggressive GPU pricing strategy (25-40% below NVIDIA) and its path to 57%+ gross margins. Higher GPU mix improves gross margins but aggressive pricing limits upside. The Xilinx amortization headwind fades over time, providing a natural margin tailwind even without mix improvement.
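The size of that amortization tailwind can be sketched. One illustrative assumption, since AMD does not disclose the allocation: attribute the full $2.3B annual charge to the Data Center segment and hold it flat, then see how the drag in margin points shrinks as revenue grows from the FY2025 base toward the FY2026 consensus:

```python
# Illustrative amortization drag on segment margin.
# Assumption (not disclosed by AMD): the full ~$2.3B/yr Xilinx
# amortization charge is allocated to the Data Center segment.
amortization = 2.3  # $B per year, held flat for illustration

drag_fy2025 = amortization / 16.6  # FY2025 reported revenue
drag_fy2026 = amortization / 22.9  # FY2026 consensus revenue
print(f"FY2025 drag: {drag_fy2025:.1%}")  # ~13.9 margin points
print(f"FY2026 drag: {drag_fy2026:.1%}")  # ~10.0 margin points
```

Even with a flat charge, revenue growth alone shrinks the GAAP/non-GAAP gap by roughly four margin points in a year; the actual fade is faster as the intangibles roll off schedule.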
EPYC is the cash-flow anchor of AMD's Data Center segment, generating an estimated $9.1B in FY2025 (roughly 55% of segment revenue). The EPYC share gain story is one of the most compelling secular trends in semiconductors: from under 5% to 41% revenue share in eight years, with Intel's server share now at 13-year lows. The 5th-gen Turin (Zen 5, 192 cores) accounts for roughly half of EPYC sales, with 1,600+ cloud instances deployed. The 6th-gen Venice (Zen 6, 256 cores, TSMC 2nm) launches in 2026, doubling memory bandwidth.
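Since AMD does not disclose the CPU/GPU split, the $9.1B EPYC estimate implies the full-year Instinct figure by subtraction; a sketch using only the numbers cited in this note:

```python
# Back out implied FY2025 Instinct revenue from the EPYC estimate.
# AMD does not disclose this split; $9.1B EPYC is an estimate.
segment_revenue = 16.6  # $B, FY2025 Data Center segment
epyc_estimate = 9.1     # $B, estimated EPYC revenue

instinct_implied = segment_revenue - epyc_estimate
epyc_share = epyc_estimate / segment_revenue
print(f"Implied Instinct revenue: ${instinct_implied:.1f}B")  # $7.5B
print(f"EPYC share of segment: {epyc_share:.0%}")  # ~55%
```

A roughly 55/45 full-year split toward EPYC is consistent with the Q4 crossover noted earlier: GPUs exited the year at a run rate above CPUs while still trailing on the full-year total.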
AMD targets 50%+ server revenue share by 2026-2027, which appears achievable given Intel's manufacturing struggles. EPYC share gains are more predictable and lower-risk than GPU share capture, providing a steady growth base of 2-3 percentage points per year. The server CPU TAM is also growing as AI infrastructure requires more host CPUs per GPU rack — Venice is positioned as the natural host CPU for MI450 Helios racks.