NVLink is NVIDIA's proprietary scale-up interconnect, linking GPUs within a single domain at bandwidth far exceeding any alternative. Fifth-generation NVLink (Blackwell) delivers 1,800 GB/s per GPU and connects 72 GPUs in the GB200 NVL72 rack for 130 TB/s of aggregate bandwidth; per GPU, that is 36x the 50 GB/s of a 400 Gbps Ethernet link. Sixth-generation NVLink (Vera Rubin, H2 2026) doubles this to 3,600 GB/s per GPU and 260 TB/s per rack.
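The figures above are straightforward to verify; a minimal sketch, using only the numbers in the text (decimal TB, 8 bits per byte):

```python
# Bandwidth arithmetic behind the NVLink figures above.
NVLINK5_PER_GPU_GBS = 1800        # NVLink 5 (Blackwell), GB/s per GPU
GPUS_PER_NVL72_RACK = 72
ETHERNET_GBPS = 400               # a 400 Gbps Ethernet link

ethernet_gbs = ETHERNET_GBPS / 8  # 400 Gbps = 50 GB/s

aggregate_tbs = NVLINK5_PER_GPU_GBS * GPUS_PER_NVL72_RACK / 1000
speedup = NVLINK5_PER_GPU_GBS / ethernet_gbs

print(f"NVL72 aggregate: {aggregate_tbs:.1f} TB/s")    # 129.6 TB/s, rounded to 130 in the text
print(f"NVLink 5 vs 400 GbE, per GPU: {speedup:.0f}x")  # 36x
```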
NVLink Fusion, announced at Computex in May 2025, is strategically pivotal: it provides a licensable NVLink chiplet that lets custom ASICs (including AWS Trainium4) connect via NVLink 6, so NVIDIA earns networking revenue even from customers building competing accelerators. This directly counters the ASIC threat: TrendForce projects CSP in-house ASICs to grow 44.6% in 2026 versus 16.1% for GPUs, but NVLink Fusion means those ASICs still depend on NVIDIA's interconnect fabric. The only open-standard alternative, UALink 1.0, specifies 200 GT/s per lane (a per-lane signaling rate, not directly comparable to, and far below, NVLink 5's 1,800 GB/s of per-GPU bandwidth) and is not expected in production until late 2026-2027. Jensen Huang frames the value proposition this way: networking lifts AI factory efficiency from 65% to 85-90%, effectively making the networking investment free relative to the $50B+ infrastructure cost.
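Huang's "effectively free" claim can be made concrete with a back-of-envelope calculation. A minimal sketch, under my own simplifying assumptions that the $50B figure is the total AI factory cost and that utilization gains translate linearly into value:

```python
# Rough sketch of the "networking is effectively free" argument.
# Assumptions (mine, for illustration): value scales linearly with
# utilization; $50B is the total facility cost.
INFRA_COST_B = 50            # $50B+ infrastructure cost (from the text)
BASE_EFFICIENCY = 0.65       # AI factory efficiency without advanced networking
NETWORKED_EFFICIENCY = 0.85  # low end of the 85-90% range cited

# Extra effective compute unlocked by better networking
uplift = NETWORKED_EFFICIENCY / BASE_EFFICIENCY - 1
value_unlocked_b = INFRA_COST_B * uplift

print(f"Utilization uplift: {uplift:.1%}")            # ~30.8%
print(f"Value unlocked: ${value_unlocked_b:.1f}B")    # ~$15.4B on a $50B build
```

On these assumptions, the uplift alone is worth roughly $15B on a $50B build, which is the sense in which the networking spend pays for itself.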
Networking deepens the ecosystem moat
NVIDIA's surging networking revenue shows the company becoming a full-stack infrastructure provider. The high attach rate means that even customers exploring custom compute silicon still depend on NVIDIA for interconnect.
What fraction of NVIDIA's $31.5B FY2026 networking revenue is NVLink scale-up vs Spectrum-X vs InfiniBand? NVIDIA does not disclose this breakdown.