
Broadcom Custom ASICs: Primary NVIDIA Competitive Threat Vector


Broadcom is the dominant custom AI ASIC design partner, commanding ~60% of the custom AI chip market with six confirmed hyperscaler XPU customers (Google, Meta, ByteDance, Anthropic, OpenAI, plus one undisclosed). Its AI revenue climbed from $12.2B (FY2024) to $20.2B (FY2025), with Q1 FY2026 at $8.4B (+106% YoY) and Q2 guided at $10.7B, implying a ~$40B FY2026 run-rate, roughly double FY2025. CEO Hock Tan has stated there is 'line of sight to AI chip revenue in excess of $100 billion in FY2027.' Total AI backlog stands at $73B over 18 months (as of Q4 FY2025): ~$53B in custom XPU accelerators and ~$20B in AI networking silicon.
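The ~$40B run-rate arithmetic can be sanity-checked with a quick sketch. The only assumption here (ours, not Broadcom's) is that H2 quarters at least match the Q2 guide, a conservative floor given the sequential growth trend:

```python
# Sketch of the FY2026 run-rate implied by the reported/guided quarters.
# Assumption: H2 quarters at least match the Q2 guidance level.

def implied_fy_ai_revenue(q1_actual: float, q2_guide: float) -> float:
    """Conservative full-year floor: Q1 actual plus three quarters at the Q2 level ($B)."""
    return q1_actual + 3 * q2_guide

q1 = 8.4    # Q1 FY2026 AI revenue, $B (+106% YoY)
q2 = 10.7   # Q2 FY2026 AI revenue guide, $B

fy26_floor = implied_fy_ai_revenue(q1, q2)
print(f"FY2026 AI revenue floor: ${fy26_floor:.1f}B")   # ~$40B run-rate
print(f"vs FY2025 ($20.2B): {fy26_floor / 20.2:.1f}x")  # roughly 2x

# Backlog composition cross-check ($B, as of Q4 FY2025)
xpu, networking = 53.0, 20.0
assert xpu + networking == 73.0  # matches the stated $73B total AI backlog
```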

Sources:
- $8.4B Q1 FY2026 AI revenue, +106% YoY; XPU accelerators +140%: Broadcom Q1 FY2026 Earnings Release
- $10.7B Q2 FY2026 AI semiconductor revenue guide: Broadcom Q1 FY2026 Earnings Release
- $12.2B FY2024 AI revenue (+220% YoY): Broadcom earnings releases FY2024-FY2026
- $100B FY2027 AI chip revenue target ('line of sight'): Broadcom Q1 FY2026 Earnings Call

Broadcom's 3.5D XDSiP packaging, which integrates 6,000+ mm² of silicon with up to 12 HBM stacks using face-to-face die stacking, provides a structural design advantage over competitors Marvell and Alchip: 7x signal density and 10x power reduction in die-to-die interfaces vs. conventional face-to-back approaches.

The most significant recent deal is OpenAI's 'Titan' custom ASIC collaboration: 10 GW of custom accelerators on TSMC 3nm with Samsung HBM4, deployment starting H2 2026 and running through end-2029, estimated at $150-200B over multiple years.

For NVIDIA, Broadcom represents the primary channel through which custom silicon threatens GPU market share: every Broadcom XPU design win at a hyperscaler is direct GPU wallet-share displacement, particularly in inference, where ASICs offer 40-65% TCO advantages. Key mitigating factors:

1. Google concentration risk (HSBC estimates 78% of Broadcom ASIC revenue comes from Google).
2. ASICs lack GPU flexibility for rapidly evolving training workloads.
3. 2-3 year design cycles lag NVIDIA's annual GPU cadence.
4. Broadcom is absent from NVIDIA's NVLink Fusion ecosystem, instead backing the slower-moving UALink consortium.
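To make the cited 40-65% inference TCO advantage concrete, here is a toy calculation. The fleet dollar figure below is a hypothetical placeholder, not a sourced estimate; only the advantage range comes from the text:

```python
# Toy illustration of ASIC vs GPU inference TCO at fleet scale.
# The $10B fleet TCO input is a hypothetical placeholder for illustration;
# the 40-65% advantage range is the figure cited in the text above.

def asic_tco(gpu_tco: float, advantage: float) -> float:
    """ASIC total cost of ownership given a fractional advantage (0.40 = 40% cheaper)."""
    return gpu_tco * (1.0 - advantage)

gpu_fleet_tco = 10.0  # hypothetical: $10B lifetime TCO for a GPU inference fleet

for adv in (0.40, 0.65):
    saved = gpu_fleet_tco - asic_tco(gpu_fleet_tco, adv)
    print(f"{adv:.0%} TCO advantage -> ${saved:.1f}B saved on a ${gpu_fleet_tco:.0f}B fleet")
```

Savings of this magnitude are what make a multi-billion-dollar custom chip program worth funding at hyperscaler scale.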

Competitive pressure is real but bounded

Custom ASICs and AMD offer cheaper alternatives for specific workloads, but only a handful of companies can afford multi-billion-dollar chip programs. The competitive threat is structural but limited in scope.

The key question

What is Broadcom's actual per-customer AI revenue breakdown? HSBC's 78% Google concentration estimate needs verification from Broadcom's own disclosures.

Open questions

- Can OpenAI's Titan ASIC achieve the claimed 90% inference cost reduction vs GPUs at scale? If so, what fraction of OpenAI's inference fleet transitions from NVIDIA to Titan by 2028?
- Will MediaTek's reported win on Google TPU v7e/v8e SerDes design materially erode Broadcom's Google relationship, or is it limited to one component?
- How does TSMC CoWoS capacity allocation between NVIDIA (Blackwell/Vera Rubin) and Broadcom (XPUs) constrain Broadcom's growth trajectory?
- Is the $100B FY2027 AI chip revenue target achievable given TSMC packaging constraints and the 2-3 year design cycle for new customers?