NVIDIA's AI accelerator market share peaked at ~87% by revenue in 2024 and is projected to decline to 75% by 2026 and 65-70% by 2028-2030, driven primarily by custom ASIC adoption at hyperscalers. The growth-rate divergence is stark: TrendForce projects CSP in-house ASIC shipments growing 44.6% in 2026 vs 16.1% for GPUs. ASIC-based AI servers are forecast to reach 27.8% of shipments in 2026 (up from ~7% of revenue in 2024, a different metric), rising to ~40% by 2030.
However, GPUs still command 69.7% of AI server shipments and should retain 75-81% of revenue through 2028 on the strength of higher ASPs. The unit crossover (ASIC shipments exceeding GPU shipments) may occur in 2026-2027 for specific CSPs such as Google, where 78% of AI servers are already TPU-based, but a revenue crossover is unlikely before 2030 at the earliest given a TAM expanding from $116B (2024) to $604B (2033) at a 16% CAGR (Bloomberg Intelligence). The critical framing: in a market growing that fast, NVIDIA can lose 15-20pp of share while still growing absolute revenue. IDC warns of a 15-20% share loss by 2028 from ASIC adoption, while Citi projects GPUs retaining 75% of a $380B market in 2028. Inference is the primary battleground, where ASICs claim 40-65% TCO advantages.
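The share-loss-versus-absolute-growth framing can be checked with simple arithmetic on the figures above. The TAM, share endpoints, and CAGR come from the text; applying the 16% CAGR uniformly from the 2024 base is an illustrative assumption, not a forecast:

```python
# Worked check: NVIDIA can shed 15-20pp of revenue share and still
# grow absolute revenue in an expanding market. TAM, share, and CAGR
# figures are from the text; uniform compounding is an assumption.

tam_2024 = 116e9            # AI accelerator TAM, 2024 ($)
cagr = 0.16                 # Bloomberg Intelligence CAGR
share_2024 = 0.87           # NVIDIA peak revenue share
share_2030 = 0.65           # low end of the 65-70% range

rev_2024 = tam_2024 * share_2024
tam_2030 = tam_2024 * (1 + cagr) ** 6     # compound six years forward
rev_2030 = tam_2030 * share_2030

print(f"2024 revenue at 87% share: ${rev_2024 / 1e9:.0f}B")
print(f"2030 revenue at 65% share: ${rev_2030 / 1e9:.0f}B")
```

Even at the bottom of the projected share range, the implied 2030 revenue is well above the 2024 figure, which is the point the paragraph makes.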
Competitive pressure is real but bounded
Custom ASICs and AMD offer cheaper alternatives for specific workloads, but only a handful of companies can afford multi-billion-dollar chip programs. The competitive threat is structural but limited in scope.
What is the actual ASIC revenue-share trajectory year by year through 2030? Bloomberg says 19% by 2033, Citi says 25% by 2028, and TrendForce says 27.8% of shipments in 2026; these figures use different metrics (revenue vs units), making direct comparison difficult.
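The gap between unit share and revenue share follows mechanically from the ASP difference the text cites. A sketch of the conversion, where the 27.8% unit share is TrendForce's figure from above but the 3x GPU-to-ASIC ASP ratio is purely a hypothetical illustration:

```python
# Convert ASIC unit share to revenue share under an ASP gap.
# Unit share (27.8%) is from TrendForce (above); the 3x GPU/ASIC
# ASP ratio is a hypothetical value chosen for illustration.

def asic_revenue_share(unit_share: float, gpu_asp_ratio: float) -> float:
    """Revenue share of ASICs given their unit share and the ratio
    of GPU average selling price to ASIC average selling price."""
    asic_rev = unit_share * 1.0                     # ASIC ASP normalized to 1
    gpu_rev = (1 - unit_share) * gpu_asp_ratio      # GPUs carry higher ASPs
    return asic_rev / (asic_rev + gpu_rev)

print(f"{asic_revenue_share(0.278, 3.0):.1%}")      # ~11% revenue share
```

Under that assumed ratio, 27.8% of shipments maps to only about 11% of revenue, which is why unit-share and revenue-share projections for the same year can differ so widely without contradicting each other.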