NVIDIA's Blackwell architecture is the primary driver of Data Center revenue acceleration through FY2026-FY2027. The GB200 NVL72 rack (~$3M ASP) shipped ~28,000 units in CY2025, with the upgraded GB300 NVL72 (Blackwell Ultra) entering mass production in late CY2025 and projected to ship ~60,000 racks in CY2026, more than doubling YoY. The GB300 delivers 50% more HBM3e memory (288GB vs 192GB per GPU), ~67% higher dense FP4 compute (15 PFLOPS vs 9 PFLOPS per GPU), and 2x attention-layer acceleration via doubled SFU throughput, which is critical for the reasoning/inference workloads driving current AI demand.
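The per-GPU deltas above scale directly to the rack level. A quick sketch (assuming the 72-GPU configuration implied by the NVL72 name; all per-GPU figures are from the text) makes the generational uplift concrete:

```python
# Per-GPU figures from the text; rack scaling assumes 72 GPUs per NVL72 rack.
GPUS_PER_RACK = 72

gb200 = {"hbm_gb": 192, "fp4_dense_pflops": 9}
gb300 = {"hbm_gb": 288, "fp4_dense_pflops": 15}

def rack_totals(gpu):
    """Scale per-GPU specs to a full NVL72 rack."""
    return {k: v * GPUS_PER_RACK for k, v in gpu.items()}

r200, r300 = rack_totals(gb200), rack_totals(gb300)

mem_uplift = gb300["hbm_gb"] / gb200["hbm_gb"] - 1                      # 0.50
fp4_uplift = gb300["fp4_dense_pflops"] / gb200["fp4_dense_pflops"] - 1  # ~0.667

print(f"GB300 rack HBM: {r300['hbm_gb'] / 1024:.2f} TB "
      f"(vs {r200['hbm_gb'] / 1024:.2f} TB), +{mem_uplift:.0%}")
print(f"GB300 rack dense FP4: {r300['fp4_dense_pflops'] / 1000:.2f} EFLOPS "
      f"(vs {r200['fp4_dense_pflops'] / 1000:.3f} EFLOPS), +{fp4_uplift:.1%}")
```

At rack scale the memory gap (~13.5 TB vs ~20.25 TB of HBM3e) is what matters for the long-context inference workloads the note highlights.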
Production is constrained by two key bottlenecks: TSMC CoWoS-L advanced packaging (sold out through 2026, with NVIDIA booking 50-60% of ~1.5M wafers of annual capacity) and HBM3e supply (all three vendors sold out through 2026, with ~20% price hikes). Blackwell contributed nearly 70% of Data Center compute sales in FY2026, and 9 GW of Blackwell infrastructure had been deployed by Q4 FY2026. The $1T+ backlog through 2027 and the transition to Vera Rubin in H2 CY2026 suggest sustained demand but introduce execution risk on the annual product cadence.
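The packaging constraint reduces to a simple allocation calculation using only the figures above (50-60% of ~1.5M CoWoS-L wafers per year):

```python
# CoWoS-L allocation implied by the bottleneck estimates in the text.
annual_wafers = 1_500_000
share_low, share_high = 0.50, 0.60  # NVIDIA's stated booking range

alloc_low = annual_wafers * share_low    # 750,000 wafers/year
alloc_high = annual_wafers * share_high  # 900,000 wafers/year
print(f"Implied NVIDIA CoWoS-L allocation: "
      f"{alloc_low:,.0f}-{alloc_high:,.0f} wafers/year")
```

With the remaining 40-50% split across every other CoWoS-L customer, any yield or capacity slip at TSMC flows almost directly into NVIDIA's shipment ceiling.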
Growth drivers are evidence-backed
Hyperscaler capex, sovereign AI, and the inference shift are all supported by concrete spending commitments and revenue data, not projections alone.
What is the actual ASP premium for the GB300 NVL72 vs the GB200 NVL72, given its 50% memory uplift, ~67% compute uplift, and 17% higher TDP?
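The question is open, and no premium figure appears in the source. As a sensitivity sketch only (the premium percentages below are hypothetical placeholders, not sourced data), the implied GB300 ASP and CY2026 rack revenue can be tabulated from the ~$3M GB200 base and the ~60,000-rack projection:

```python
# Hypothetical ASP-premium scenarios for GB300 NVL72.
# The premiums are illustrative placeholders, NOT sourced figures.
GB200_ASP = 3.0e6        # ~$3M per rack, from the text
GB300_RACKS_CY2026 = 60_000  # projected shipments, from the text

for premium in (0.10, 0.20, 0.30):  # placeholder scenarios
    gb300_asp = GB200_ASP * (1 + premium)
    implied_revenue = gb300_asp * GB300_RACKS_CY2026
    print(f"+{premium:.0%} premium -> ASP ${gb300_asp / 1e6:.1f}M, "
          f"implied CY2026 rack revenue ${implied_revenue / 1e9:.0f}B")
```

Even modest premium assumptions swing implied CY2026 rack revenue by tens of billions of dollars, which is why pinning down the actual ASP matters for the forecast.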