
AI Semiconductor / XPUs — The Growth Engine


The key question

Is Broadcom the structural winner in custom silicon — or a highly-concentrated supplier to Google?

AI Semi Contribution: $240.8/share (80% of equity value, 5-scenario DCF model)

Broadcom's AI semiconductor business is the dominant value driver and the fastest-growing segment in the company. Custom XPU accelerators designed in deep partnership with hyperscalers (Google since 2014, Meta, ByteDance, OpenAI) plus networking silicon (Tomahawk switches, SerDes) for AI data centers are accelerating at a pace that prompted CEO Hock Tan to declare 'line of sight' to over $100B in annual AI chip revenue by 2027.

- Q1 FY2026 AI Revenue: $8.4B (+106% YoY; XPU accelerators +140% YoY)
- Q2 FY2026 Guide: $10.7B (AI networking growing to 40% of AI revenue)
- Custom ASIC Share: ~60% (vs Marvell ~35%; Alchip and others the remainder)
- DC Switch Share: ~90% (Tomahawk + Jericho families; Tomahawk 6 at 102 Tbps)

The bull case rests on ASIC platform economics: each new hyperscaler customer validates the model, and Broadcom's 3.5D XDSiP packaging advantage (6,000 mm² of silicon, 12 HBM stacks) creates a structural lead over competitors. The bear case centers on GPU recapture: NVIDIA's Vera Rubin could deliver step-function inference cost improvements that make custom ASICs less compelling. Customer concentration is the wild card: an estimated 78% of ASIC revenue comes from Google alone.

Google concentration is the single biggest risk

If HSBC's estimate is correct that 78% of ASIC revenue comes from Google, and if MediaTek's reported win on TPU v7e/v8e SerDes signals a broader diversification by Google away from Broadcom, the revenue impact would be severe. The $100B target implicitly requires all 6 customers to scale, not just Google.

Broadcom XPU Customer Overview

| Customer | Chip | Status | Order Scale | Concentration Risk |
| --- | --- | --- | --- | --- |
| Google | TPU v7p Ironwood | In production | B follow-on Q4 FY2025 | ~78% of ASIC revenue (HSBC) |
| Meta | MTIA (gen 3-5) | Shipping | Confirmed; scale undisclosed | Co-developed with Broadcom |
| Apple | "Baltra" inference server | Development | Pre-revenue; mass production TBD | Strategic but unconfirmed revenue |
| ByteDance | Custom AI accelerator | Shipping | Confirmed; scale undisclosed | Geopolitical export control risk |
| Anthropic | Via Google TPU orders | Active | B in H2 FY2025 orders | Indirect (goes through Google) |
| OpenAI | First-gen XPU | Late 2026 production | 10 GW multi-year deal | New; adds diversification |
Custom ASIC vs GPU: Key Tradeoffs

| Dimension | Broadcom XPU (ASIC) | NVIDIA GPU |
| --- | --- | --- |
| Inference TCO at hyperscaler scale | 40-65% advantage | Baseline |
| Design cycle | 2-3 years, workload-specific | Annual cadence (Hopper → Blackwell → Rubin) |
| Flexibility | Single-workload optimized | Universal; adapts to new models |
| Software ecosystem | Customer-specific | CUDA: 7.5M developers |
| Who benefits | Hyperscalers with defined workloads at scale | Enterprises, research, startups of all sizes |
| NVIDIA response | Vera Rubin targets ASIC TCO parity for inference | Ongoing competitive response |

AI Semi Scenario Analysis

| Scenario | Bull (66%) | Base (13%) | Bear (20%) |
| --- | --- | --- | --- |
| Margins | Fabless model sustains 65%+ adjusted EBITDA | Holds at industry-leading levels | Compress as NVIDIA competition intensifies |
| Customer count | 5-6 hyperscalers; concentration falls below 50% | 3-4 customers; OpenAI adds but Apple delays | Stays 2-3 hyperscalers; no meaningful new wins |
| AI revenue path | $60-90B by FY2027; backlog converts; OpenAI + Apple add | $15B → moderate growth; backlog converts partially | Google relationship deteriorates; others slow; $15B → low growth |
| DCF value | $1.58T | $456B | $150B |
| Per share | $333.6 | $96.2 | $31.6 |
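As a sanity check, the headline anchor can be approximated by probability-weighting the three scenarios shown above. A minimal sketch: the three visible scenarios give about $239/share, slightly below the quoted $240.8, which comes from the fuller 5-scenario model.

```python
# Probability-weighted per-share value from the three scenarios shown above.
scenarios = {
    "bull": (0.66, 333.6),  # (probability, $/share)
    "base": (0.13, 96.2),
    "bear": (0.20, 31.6),
}

expected_value = sum(p * v for p, v in scenarios.values())
print(f"Probability-weighted value: ${expected_value:.2f}/share")  # $239.00
```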

Sensitivity Analysis

| Bull Prob. | Bear Prob. | Implied Value | Δ from Current |
| --- | --- | --- | --- |
| 56% | 30% | $193/sh | -$48 |
| 61% | 25% | $222/sh | -$19 |
| 71% | 20% | $258/sh | +$17 |
| 51% | 35% | $175/sh | -$66 |
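The Δ column is simply each implied value minus the ~$241/share scenario-model anchor; a minimal sketch of that subtraction:

```python
# Reproduce the "Δ from Current" column of the sensitivity table:
# implied per-share value minus the ~$241/share scenario-model anchor.
CURRENT_ANCHOR = 241  # $/share

implied_values = [193, 222, 258, 175]  # $/share, one per sensitivity row
deltas = [v - CURRENT_ANCHOR for v in implied_values]
print(deltas)  # [-48, -19, 17, -66]
```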

So What?

Broadcom's custom silicon franchise is one of the most defensible positions in AI infrastructure — but it is concentrated, not diversified. The bull case at 66% probability requires that Google remain a loyal customer, that the $73B backlog convert on schedule, and that at least two new customers (OpenAI, Apple) add meaningful revenue by FY2027. The bear case requires just one of these to fail. NVIDIA's Vera Rubin response directly targets the ASIC TCO narrative: if Rubin delivers its promised 5x inference improvement over Blackwell, the ROI case for custom silicon weakens at the margin. The next 18 months of Broadcom earnings calls will reveal whether the $73B backlog is real demand or optionality.

Sources

Company Filings
Broadcom FY2025 10-K · Broadcom Q4 FY2025 Earnings Call · Broadcom Q1 FY2026 Earnings Call
Industry Analysis
HSBC Research (Google concentration) · Morgan Stanley TPU analysis · TrendForce XPU market data
Competitive
NVIDIA CES 2026 (Vera Rubin announcement) · Broadcom IR (3.5D XDSiP packaging)
The key question

Will MediaTek fully displace Broadcom on Google TPU v8e SerDes? This is the biggest concentration risk.


XPU Customer Base & Design Wins

6 XPU Customers: Google, Meta, ByteDance, Anthropic, OpenAI + Apple pending

Broadcom's custom XPU business is anchored by six hyperscaler relationships, with Google as the dominant customer since 2014. The partnership model runs deep: each design-in takes 2-3 years from architecture to volume production, creating decades-long relationships. Anthropic's commitment to deploy over 1 GW of Broadcom-designed compute in 2026 (scaling to 3 GW in 2027) and OpenAI's multi-year 10 GW deal validate the ASIC platform model beyond Google.

- Google Concentration: ~78% of ASIC revenue (HSBC estimate); extreme single-customer risk
- Anthropic Compute: 1 GW → 3 GW, 2026 to 2027 deployment via Google TPU infrastructure
- OpenAI Deal: 10 GW, multi-year; deployment starts H2 2026 and runs through 2029
- Total Est. GW (2027): 9-10 GW; at $15-20B per GW, implies $135-200B revenue potential
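The implied revenue range is straightforward multiplication of the GW estimate by the assumed revenue per GW; a minimal sketch of that arithmetic:

```python
# Revenue potential implied by estimated 2027 deployments:
# total GW range multiplied by the assumed $15-20B of revenue per GW.
gw_low, gw_high = 9, 10            # total estimated GW in 2027
per_gw_low, per_gw_high = 15, 20   # $B of revenue per GW

rev_low = gw_low * per_gw_low      # $B
rev_high = gw_high * per_gw_high   # $B
print(f"Implied revenue potential: ${rev_low}-{rev_high}B")  # $135-200B
```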

Google concentration is a double-edged sword

Google's dominance as Broadcom's largest XPU customer creates both the biggest opportunity and the biggest risk. While Anthropic's orders flow through Google TPU infrastructure (amplifying the relationship), any diversification by Google toward MediaTek or in-house alternatives would have outsized revenue impact.

ASIC vs GPU: Competitive Dynamics

ASIC Inference Share: ~20% (ASICs carving out inference; NVIDIA retains >90% of training)

The GPU vs ASIC debate is the central question for Broadcom's AI semiconductor valuation. Custom ASICs deliver step-function improvements in power efficiency and cost for well-defined workloads at hyperscale, but sacrifice the flexibility and software ecosystem that make NVIDIA GPUs the default for research and rapidly evolving AI models. The market is splitting: NVIDIA dominates training, ASICs are gaining in inference, and the ratio between training and inference spending will determine long-term wallet share.

ASIC vs GPU Trade-offs

- Custom ASIC (Broadcom): 2-10x efficiency advantage for specific workloads; lower power consumption; optimized for inference at scale; 3-5 year design cycle locks in customers.
- Merchant GPU (NVIDIA): dominant CUDA ecosystem; flexible across training and inference; faster time-to-market; Vera Rubin closing the efficiency gap; >90% training share.

The inference inflection point

As AI shifts from training-dominated spending to inference-dominated spending, the value proposition of custom ASICs strengthens. Hyperscalers running the same model billions of times per day benefit enormously from silicon optimized for that specific workload. The question is whether NVIDIA can close the efficiency gap fast enough with Vera Rubin to prevent structural ASIC adoption.

AI Networking Silicon: Tomahawk, Jericho, SerDes

DC Switch Share: ~90% (Tomahawk + Jericho families dominate data center switching)

Broadcom's networking silicon is the hidden gem within its AI semiconductor business. As AI clusters scale from thousands to millions of accelerators across multiple data centers, the networking fabric that connects them becomes equally critical. Tomahawk 6 provides the leaf/spine switching at 102 Tbps, while Jericho4 enables distributed AI computing across data centers separated by tens of miles. The networking share of AI revenue is growing from 33% toward 40%, reflecting the reality that bigger AI clusters need proportionally more networking.

- Tomahawk 6: 102.4 Tbps; world's first 100+ Tbps switch ASIC, booking at a record rate
- Jericho4: 51.2 Tbps; fabric routing for 1M+ XPU distributed AI clusters
- AI Networking Revenue: ~$2.8B in Q1 FY2026, growing to ~$4.3B in Q2 (40% of AI revenue)
- Switch Backlog: >$10B, part of the $73B total AI backlog
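The 33% → 40% networking-mix claim can be checked directly against the quarterly figures above; a quick sketch:

```python
# Networking share of AI revenue, from the quarterly figures above.
q1_networking, q1_total = 2.8, 8.4    # $B, Q1 FY2026 actuals
q2_networking, q2_total = 4.3, 10.7   # $B, Q2 FY2026 guide

print(f"Q1 share: {q1_networking / q1_total:.0%}")  # 33%
print(f"Q2 share: {q2_networking / q2_total:.0%}")  # 40%
```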

NVIDIA vertical integration is the networking threat

NVIDIA's Vera Rubin platform includes a full networking stack (NVLink 6, Spectrum-6, ConnectX-9, BlueField-4). If NVIDIA succeeds in bundling networking with GPUs, Broadcom's switch silicon could be displaced in GPU-centric clusters. The question is whether open Ethernet (Broadcom) or proprietary NVLink (NVIDIA) wins the networking architecture debate.

Custom Silicon Business Model & Economics

Custom Silicon Gross Margin: 60-65% (chip-level; system-level margins lower due to HBM pass-through)

Broadcom's custom silicon business model is built on design-in stickiness: each XPU engagement takes 2-3 years from architecture to volume production, creating relationships measured in decades (Google since 2014). Revenue comes from NRE fees during design plus production royalties on volume shipments. The fabless model means Broadcom captures the high-value design work while outsourcing capital-intensive fabrication to TSMC.

- Adj. EBITDA Margin: 67%; industry-leading, blending software and semiconductors
- FY2025 FCF: $27B; the fabless model means minimal capex vs revenue
- Semi Segment OPM: 57.6%; FY2025 blended AI + non-AI semiconductor
- Design Cycle: 2-3 years from architecture to volume production, creating deep lock-in

System-level sales: revenue growth vs margin dilution

As Broadcom moves from selling standalone ASICs to delivering full-rack solutions with third-party HBM memory, total revenue grows faster but gross margin percentage declines. The pass-through components inflate revenue without contributing margin. Investors should watch the gross margin trend closely as AI system sales become a larger mix.
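The dilution mechanic can be sketched with hypothetical unit economics. The dollar amounts below are illustrative assumptions, not Broadcom disclosures; only the 60-65% chip-level margin range comes from the section above.

```python
# Illustrative only: hypothetical numbers showing how near-zero-margin HBM
# pass-through inflates revenue while leaving gross profit dollars unchanged.
chip_revenue = 100.0      # hypothetical ASIC sale, $
chip_gross_margin = 0.65  # within the 60-65% chip-level range cited above
hbm_pass_through = 80.0   # hypothetical third-party HBM billed at ~zero margin

gross_profit = chip_revenue * chip_gross_margin   # unchanged by the pass-through
system_revenue = chip_revenue + hbm_pass_through  # inflated by the pass-through
blended_margin = gross_profit / system_revenue    # diluted vs chip-only margin
print(f"Chip-only: {chip_gross_margin:.0%}, system-level: {blended_margin:.0%}")  # 65% vs 36%
```

Same gross profit dollars on higher revenue: this is why the margin percentage, not the revenue line, is the metric to watch as system sales grow.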

AI Revenue Growth Trajectory & $100B Target

2027 AI Revenue Target: >$100B (CEO says 'significantly' above $100B; analysts see $150-200B potential)

Broadcom's AI semiconductor revenue trajectory is one of the most dramatic growth stories in semiconductor history. From effectively zero AI-specific revenue in FY2022 to a $43B annualized run-rate in Q2 FY2026, the ramp has been extraordinary. The path to the $100B+ 2027 target requires sustained acceleration from all six XPU customers, with OpenAI's late-2026 production start being the key incremental catalyst.

- FY2024 AI Revenue: ~$12.2B; first year of substantial AI contribution
- FY2025 AI Revenue: ~$20.2B (+65% YoY)
- Q2 FY2026 Guide: $10.7B quarterly (annualized ~$43B, +140% YoY)
- AI Backlog: $73B, to ship over 18 months; supply secured through 2028

From $43B run-rate to $100B+: the math

Bridging from a $43B annualized Q2 run-rate to $100B+ in calendar 2027 requires roughly 2.3x growth in 18 months. This is aggressive but supported by: (1) backlog conversion of the $73B over 18 months, (2) OpenAI production ramp starting late 2026, (3) Anthropic scaling from 1 GW to 3 GW, and (4) Google Ironwood continuing to scale. The question is whether supply (TSMC CoWoS) can keep pace with demand.
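The 2.3x-in-18-months claim, and the annual growth it implies, can be made explicit. A minimal sketch: the endpoints come from the text above, while the annualized rate is a derived figure.

```python
# Bridge from the ~$43B annualized Q2 run-rate to the $100B 2027 target.
run_rate = 43.0   # $B annualized, Q2 FY2026
target = 100.0    # $B, calendar-2027 target floor
months = 18

multiple = target / run_rate                       # ~2.33x
annualized_growth = multiple ** (12 / months) - 1  # implied annual growth rate
print(f"{multiple:.2f}x over {months} months -> {annualized_growth:.0%}/yr")  # 2.33x -> 76%/yr
```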

AI Semiconductor Risks & Bear Case

Key Risk Metric: 78% of ASIC revenue estimated to come from Google alone (HSBC)

The AI semiconductor bear case is concentrated in four risk vectors. First, MediaTek's entry into Google TPU design with a 20-30% cost advantage represents the first credible merchant ASIC competitor. Second, extreme customer concentration means a change in Google's chip strategy alone could derail the $100B target. Third, NVIDIA's Vera Rubin platform aims to close the efficiency gap that drives ASIC adoption. Fourth, hyperscalers are incentivized to commoditize ASIC design to reduce Broadcom's pricing power.

Four-pronged bear case

MediaTek merchant ASIC threat, customer concentration in Google, NVIDIA Vera Rubin closing the efficiency gap, and hyperscaler in-sourcing incentives. Any one of these materializing in isolation is manageable. If two or more converge -- for example, Google diversifying to MediaTek AND NVIDIA narrowing the ASIC advantage -- the combined impact could significantly reduce the addressable market for Broadcom's custom silicon.

Open questions

- Can Broadcom deliver the $10.7B AI Q2 guide given TSMC CoWoS capacity constraints?
- What is the actual gross-margin impact of system-level AI selling (rack vs chip)?
- Will the OpenAI Titan chip enter volume production in Dec 2026 or slip to H1 2027?
- How concentrated is AI revenue in Google specifically? HSBC suggests 78%.
- Is Apple's Baltra confirmed as a 7th XPU customer for 2027?
- What is the per-customer AI revenue breakdown?