AMD Data Center (GPUs + EPYC CPUs)

$99/share
49% of AMD, anchored

$16.6B
FY2025 DC Revenue
+32% YoY, 48% of total AMD revenue

AMD's Data Center segment is the company's growth engine, combining two distinct businesses: EPYC server CPUs (the stable cash-flow anchor) and Instinct AI GPUs (the high-growth, high-variance bet). Q4 FY2025 marked a milestone — GPU revenue surpassed CPU revenue within the segment for the first time. The segment's non-GAAP operating margin reached 32.5% in Q4, though GAAP margins remain suppressed by Xilinx acquisition amortization.

32.5%
Q4 operating margin
Non-GAAP, up 300bps YoY
41%
EPYC server share
Revenue share, up from <5% eight years ago
20-30%
MI355X vs B200
Inference advantage on specific tasks
$22.9B
FY2026E consensus
+38% YoY analyst consensus

The bull case rests on MI450 execution and the OpenAI/Meta mega-deals converting. Lisa Su targets over 60% DC revenue CAGR and $100B+ annual DC revenue by 2030. The bear case is that Vera Rubin closes AMD's inference cost advantage, ROCm never reaches training parity, and custom ASICs squeeze AMD from below. Growth already decelerated from 94% to 32% in one year.
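The two halves of the 2030 target can be sanity-checked against each other. A minimal sketch using only figures from this report (the FY2025 base of $16.6B; the $100B level and 60% CAGR are management targets, not forecasts):

```python
# Sanity-check the 2030 Data Center revenue target against the FY2025 base.
# Inputs from the text: $16.6B FY2025 DC revenue, $100B+ target by 2030,
# 60% CAGR target. Nothing here is a forecast.

base_2025 = 16.6     # FY2025 DC revenue, $B
target_2030 = 100.0  # management's 2030 target, $B
years = 5

# CAGR implied by growing from the FY2025 base to the 2030 target
implied_cagr = (target_2030 / base_2025) ** (1 / years) - 1
print(f"Implied CAGR to reach $100B by 2030: {implied_cagr:.1%}")

# Revenue path at management's stated 60% CAGR target
rev_2030_at_60 = base_2025 * 1.60 ** years
print(f"FY2030 revenue at a 60% CAGR: ${rev_2030_at_60:,.0f}B")
```

On these inputs, $100B by 2030 needs roughly 43% annual growth, while a sustained 60% CAGR would land well above $100B, so the $100B figure is the more conservative of the two targets. Either way, both sit far above the decelerated 32% growth just printed.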

Why this is anchored, not speculative

The Data Center segment has real, growing revenue — $16.6B in FY2025 and analyst consensus of $22.9B for FY2026. EPYC server CPUs provide a predictable base with 41% server revenue share. The speculative element (GPU share capture beyond current levels) is modeled separately.

The key question

What is the exact breakdown of AMD Data Center revenue between EPYC CPUs and Instinct GPUs? AMD does not disclose this.
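One way to frame an answer is to back out the GPU figure from the analyst EPYC estimate cited later in this report (roughly $9.1B). A sketch, with the caveat that the EPYC number is an estimate, not an AMD disclosure:

```python
# Back out implied Instinct GPU revenue from the analyst EPYC estimate.
# AMD does not disclose this split; the EPYC figure is a third-party estimate.

dc_total = 16.6  # FY2025 Data Center revenue, $B (reported)
epyc_est = 9.1   # analyst estimate of EPYC CPU revenue, $B

gpu_est = dc_total - epyc_est
print(f"Implied Instinct GPU revenue: ${gpu_est:.1f}B "
      f"({gpu_est / dc_total:.0%} of segment)")
```

The implied full-year split still favors EPYC, which is consistent with the Q4 milestone above: GPU revenue can overtake CPU revenue in a single quarter while the annual mix remains CPU-heavier.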

Scenario Model: $99/share

~5-8%
AMD AI GPU Share
vs NVIDIA ~75%. Doubling toward 10% by mid-2026

AMD's GPU competition with NVIDIA is the central investment question. AMD's competitive strategy focuses on inference economics: 25-40% lower cost per token, more HBM per GPU enabling larger model serving, and open-standard networking via UEC. The MI355X outperforms NVIDIA's B200 by 20-30% on specific inference tasks but cannot compete with GB200 NVL72 for training at rack scale. The MI450 (Q3 2026, TSMC 2nm) is co-engineered with OpenAI and represents AMD's most important product launch.

20-30% faster
MI355X vs B200
On specific large-model inference tasks
25-40%
AMD cost advantage
Lower cost per token vs NVIDIA
40-65%
ASIC TCO advantage
Custom ASICs vs GPUs for inference
6M+ devs
CUDA ecosystem
19 years, 300+ acceleration libraries

NVIDIA's Vera Rubin NVL72 (H2 2026) promises a 10x inference cost reduction vs Blackwell, potentially eliminating AMD's cost advantage. Meanwhile, custom ASICs (Google TPU, AWS Trainium, Broadcom XPUs) squeeze AMD from below with 40-65% TCO advantage for inference workloads. AMD must execute on MI450 while navigating this two-front competition.

The $70/share question

Can MI450 match or exceed Vera Rubin on inference cost-per-token? This is the single most important competitive benchmark for H2 2026, and it determines whether the OpenAI/Meta deals fully convert or whether AMD remains a niche inference provider.
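The benchmark reduces to a cost-per-token comparison. The sketch below is purely illustrative: the system prices, lifetimes, and throughput figures are placeholder assumptions, not vendor data; only the general shape (AMD priced meaningfully below NVIDIA) reflects the text.

```python
# Illustrative cost-per-token model. All hardware inputs are placeholders.
# The point is the structure: the advantage depends on relative price
# AND relative throughput, not price alone.

def cost_per_million_tokens(system_cost_usd, lifetime_years,
                            tokens_per_second, utilization=0.6):
    """Amortized hardware cost per 1M tokens served (ignores power, networking)."""
    seconds = lifetime_years * 365 * 24 * 3600 * utilization
    total_tokens = tokens_per_second * seconds
    return system_cost_usd / total_tokens * 1e6

# Hypothetical rack-scale systems (placeholder prices and throughput)
nvidia = cost_per_million_tokens(3_000_000, 4, 400_000)
amd    = cost_per_million_tokens(2_100_000, 4, 350_000)  # ~30% cheaper, somewhat slower

print(f"NVIDIA (hypothetical): ${nvidia:.3f} per 1M tokens")
print(f"AMD    (hypothetical): ${amd:.3f} per 1M tokens")
print(f"AMD advantage: {1 - amd / nvidia:.0%}")
```

This is why Vera Rubin matters: its promised inference gains enter through the throughput term, and a large enough throughput jump erases a price discount. The crossover in cost per token, not list price, is the number to watch in H2 2026.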

Data Center Financial Profile

32.5%
DC Operating Margin
Q4 non-GAAP. Full-year GAAP 21.7% (suppressed by Xilinx amortization)

AMD's Data Center financial profile reflects a business in transition. GAAP margins are suppressed by approximately $2.3B per year in Xilinx intangible amortization, creating a persistent gap between GAAP (21.7% FY2025) and non-GAAP (32.5% Q4) performance. The GPU/CPU revenue mix is shifting rapidly — Q4 2025 was the first quarter where Instinct GPUs generated more revenue than EPYC CPUs within the segment. At the company level, AMD generated $5.5B in free cash flow on $34.6B revenue, with $8.1B invested in R&D.

$5.5B
FY2025 FCF
Record. 16% of revenue
$8.1B
R&D spend
23% of revenue
$7.4B
Net cash
Total liquid $10.6B, debt $3.2B
57%
Non-GAAP GM
Q4 2025. Full-year 52%

The key tension is between AMD's aggressive GPU pricing strategy (25-40% below NVIDIA) and its path to 57%+ gross margins. Higher GPU mix improves gross margins but aggressive pricing limits upside. The Xilinx amortization headwind fades over time, providing a natural margin tailwind even without mix improvement.
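The fading-headwind point can be made concrete: the amortization charge is roughly fixed while revenue compounds, so its drag shrinks mechanically. A sketch using the $2.3B/yr figure and the revenue numbers above (the FY2027 revenue is an illustrative extrapolation, not a forecast; the $2.3B is also a company-level figure, so the DC-specific drag is smaller):

```python
# How the roughly fixed Xilinx amortization drag shrinks as DC revenue grows.
# $2.3B/yr and the FY2025/FY2026E revenues are from the text;
# FY2027 is an illustrative extrapolation only.

amort = 2.3  # Xilinx intangible amortization, $B/yr (roughly fixed)

scenarios = [("FY2025", 16.6), ("FY2026E", 22.9), ("FY2027 (illustrative)", 30.0)]
for label, revenue in scenarios:
    drag = amort / revenue
    print(f"{label}: amortization drag = {drag:.1%} of revenue")
```

On these inputs the drag falls from about 14% of revenue to about 10% in a single year of consensus growth, which is the "natural margin tailwind" referred to above.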

41%
EPYC Server Revenue Share
Up from <5% eight years ago. Intel at 13-year lows

EPYC is the cash-flow anchor of AMD's Data Center segment, generating an estimated $9.1B in FY2025 (roughly 55% of segment revenue). The EPYC share gain story is one of the most compelling secular trends in semiconductors: from under 5% to 41% revenue share in eight years, with Intel now at 13-year lows. The 5th-gen Turin (Zen 5, 192 cores) accounts for roughly half of EPYC sales, with 1,600+ cloud instances deployed. The 6th-gen Venice (Zen 6, 256 cores, TSMC 2nm) launches in 2026, doubling memory bandwidth.

~$9.1B
Est. EPYC revenue
~55% of DC segment (analyst est.)
27.8%
Unit share
Revenue share exceeds unit share — premium pricing
1,600+
Cloud instances
EPYC-based, +50% YoY
256
Venice cores
Zen 6, TSMC 2nm, CXL 3.1, launch 2026

AMD targets 50%+ server revenue share by 2026-2027, which appears achievable given Intel's manufacturing struggles. EPYC share gains are more predictable and lower-risk than GPU share capture, providing a steady growth base of 2-3 percentage points per year. The server CPU TAM is also growing as AI infrastructure requires more host CPUs per GPU rack — Venice is positioned as the natural host CPU for MI450 Helios racks.
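The share math behind the 50%+ target is worth laying out. A sketch from the 41% base at the 2-3 pp/year pace cited above (the gain rate is the report's own base-case figure, not a forecast):

```python
# Project EPYC server revenue share from the 41% FY2025 base at the
# 2-3 pp/year gain rate cited in the text.

base_share = 41.0  # FY2025 server revenue share, %

for gain in (2.0, 3.0):
    path = {2025 + y: base_share + gain * y for y in range(1, 4)}
    line = ", ".join(f"{yr}: {s:.0f}%" for yr, s in path.items())
    print(f"At +{gain:.0f} pp/year -> {line}")
```

At the steady 2-3 pp pace, 50% arrives closer to 2028 than 2026-2027; reaching the target on the stated timeline implies an acceleration beyond the base rate, presumably driven by Intel's manufacturing struggles.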

Open questions

? Can AMD's Helios rack-scale platform win deployments beyond Meta and OpenAI, particularly at AWS, Google, and Microsoft?
? Will AMD's aggressive pricing strategy (25-40% cheaper than NVIDIA) allow gross margins to expand to the 57%+ target?
? How much of MI308 China revenue ($390M in Q4) is sustainable vs one-time catch-up shipments?