NVIDIA Data Center networking revenue grew 142% year over year per the FY2026 10-K MD&A, driven by the NVLink compute fabric for GB200/GB300 and growth in the Ethernet and InfiniBand platforms. The business has three pillars: (1) NVLink scale-up fabric, the primary growth driver; (2) Spectrum-X Ethernet, which surpassed $10B ARR by Q2 FY2026; and (3) Quantum InfiniBand, where NVIDIA is the sole merchant vendor. The ~90% networking attach rate to GPU systems creates structural lock-in: even custom ASIC customers buy NVIDIA networking.
NVIDIA captured as much as 25.9% of the datacenter Ethernet switch market in Q2 CY2025 (surpassing Cisco and Arista), though share fluctuates with shipment timing. The roadmap extends through Vera Rubin (NVLink 6.0 at 3,600 GB/s, Spectrum-X Photonics with co-packaged optics, BlueField-4) and Feynman (BlueField-5, Kyber copper/CPO scale-up). Key competitive dynamics: NVIDIA benefits regardless of the Ethernet vs InfiniBand outcome (it owns both), while Broadcom's 80% share of switching silicon and its roughly one-year lead with Tomahawk 6 pose the main competitive threat at the silicon level.
Networking deepens the ecosystem moat
NVIDIA's networking revenue explosion demonstrates that the company is becoming a full-stack infrastructure provider. The high attach rate means even customers exploring custom compute silicon still depend on NVIDIA for interconnect.
One open question: what fraction of NVIDIA's $31.5B FY2026 networking revenue is NVLink (scale-up) versus Spectrum-X (Ethernet) versus Quantum (InfiniBand)? NVIDIA does not disclose this breakdown.
Spectrum-X is NVIDIA's purpose-built Ethernet networking platform for AI factories, combining Spectrum-4 switches with BlueField-3 SuperNICs to deliver 1.6x better performance than off-the-shelf Ethernet. Launched in 2024, it surpassed a $10B annualized run rate by Q2 FY2026 (Aug 2025) and captured 25.9% of the datacenter Ethernet switch market in Q2 CY2025 ($2.26B revenue, +647% YoY), temporarily surpassing both Cisco and Arista. Major customers include xAI (100K+ GPU Colossus cluster achieving 95% effective throughput versus ~60% on standard Ethernet), Meta (integrated into FBOSS), Oracle (giga-scale AI factories), and OpenAI (the Stargate project).
The platform's key competitive advantage is tight GPU-to-network co-optimization: congestion control, adaptive routing, and in-network telemetry are tuned specifically for the collective communication patterns of AI training. The next-generation Spectrum-X1600 (SN6800/SN6810, 102.4-409.6 Tbps) ships in H2 2026, alongside Spectrum-X Photonics with co-packaged optics, the world's first 1.6 Tbps CPO switch. NVIDIA's bundling strategy (GPU + NVLink + switch + SuperNIC) poses a direct threat to Arista Networks, whose stock fell 6% when Meta's Spectrum-X adoption was announced. However, Broadcom's Tomahawk 6 (102.4 Tbps) shipped roughly a year ahead of Spectrum-X1600, and the overall AI Ethernet market is expanding fast enough ($32.5B in 2025, projected $110B by 2030) to support multiple winners.
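To size the throughput claim, a back-of-envelope sketch: the 95% vs ~60% effective-throughput figures and the ~100K-GPU cluster scale come from the xAI example above, while the per-GPU price used to translate recovered capacity into dollars is purely an illustrative assumption.

```python
# Back-of-envelope: GPU capacity effectively recovered by higher network
# efficiency, using the 95% (Spectrum-X) vs ~60% (standard Ethernet)
# throughput figures cited above. Cluster size mirrors the ~100K-GPU xAI
# Colossus example; the per-GPU price is an illustrative assumption.

CLUSTER_GPUS = 100_000
ASSUMED_GPU_PRICE_USD = 35_000  # hypothetical blended price per GPU

def effective_gpus(total_gpus: int, network_efficiency: float) -> float:
    """GPUs' worth of useful training throughput at a given network efficiency."""
    return total_gpus * network_efficiency

standard_eth = effective_gpus(CLUSTER_GPUS, 0.60)  # standard Ethernet figure
spectrum_x = effective_gpus(CLUSTER_GPUS, 0.95)    # Spectrum-X figure
recovered = spectrum_x - standard_eth

print(f"Effective GPUs on standard Ethernet: {standard_eth:,.0f}")
print(f"Effective GPUs on Spectrum-X:        {spectrum_x:,.0f}")
print(f"Capacity recovered: {recovered:,.0f} GPUs "
      f"(~${recovered * ASSUMED_GPU_PRICE_USD / 1e9:.1f}B of GPU capex at the assumed price)")
```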
NVLink is NVIDIA's proprietary scale-up interconnect that connects GPUs within a single domain at bandwidth far exceeding any alternative. The fifth-generation NVLink (Blackwell) delivers 1,800 GB/s per GPU, connecting 72 GPUs in the GB200 NVL72 rack for 130 TB/s aggregate bandwidth -- 36x faster than 400 Gbps Ethernet. The sixth-generation NVLink (Vera Rubin, H2 2026) doubles to 3,600 GB/s per GPU and 260 TB/s per rack.
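The bandwidth figures quoted above are internally consistent; the quick arithmetic check below uses only numbers from the text plus the GB/s-to-Gbps unit conversion.

```python
# Sanity check of the NVLink bandwidth figures above (no assumptions
# beyond the byte-to-bit unit conversion).

NVLINK5_GB_PER_S_PER_GPU = 1_800   # NVLink 5 (Blackwell), GB/s per GPU
NVL72_GPUS = 72
ETHERNET_PORT_GBPS = 400           # comparison port speed, Gbit/s

aggregate_tb_per_s = NVLINK5_GB_PER_S_PER_GPU * NVL72_GPUS / 1_000  # TB/s per rack
per_gpu_gbps = NVLINK5_GB_PER_S_PER_GPU * 8                         # GB/s -> Gbit/s
speedup_vs_ethernet = per_gpu_gbps / ETHERNET_PORT_GBPS

print(f"GB200 NVL72 aggregate: {aggregate_tb_per_s:.1f} TB/s")         # ~130 TB/s
print(f"Per GPU vs a 400G Ethernet port: {speedup_vs_ethernet:.0f}x")  # 36x

# NVLink 6 (Vera Rubin) doubles both figures; the text quotes 3,600 GB/s and ~260 TB/s.
print(f"NVLink 6: {NVLINK5_GB_PER_S_PER_GPU * 2} GB/s per GPU, "
      f"{aggregate_tb_per_s * 2:.1f} TB/s per rack")
```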
NVLink Fusion, announced at Computex in May 2025, is strategically pivotal: it provides a licensable NVLink chiplet that lets custom ASICs (including AWS Trainium4) connect via NVLink 6, ensuring NVIDIA earns networking revenue even from customers building competing accelerators. This directly counters the ASIC threat: CSP in-house ASICs are projected to grow 44.6% in 2026 vs 16.1% for GPUs (TrendForce), but NVLink Fusion means those ASICs still run on NVIDIA's interconnect fabric. The only open-standard alternative, UALink 1.0, specifies 200 GT/s per lane (a per-lane figure, versus NVLink 5's 1,800 GB/s aggregate per GPU) and won't reach production until late 2026-2027. Jensen Huang frames the value proposition this way: networking lifts AI factory efficiency from 65% to 85-90%, a gain worth more than the networking spend itself on a $50B+ build, effectively making the networking investment free.
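A rough illustration of that framing: the 65% baseline, the 85-90% improved efficiency, and the $50B+ factory cost come from the quote above, while the share of the build cost attributed to networking is an assumed figure used only for illustration.

```python
# Illustrative math behind "networking is effectively free": value of the
# efficiency uplift on a $50B AI factory vs an assumed networking share of
# the build cost. The ~12% networking share is a hypothetical assumption.

FACTORY_COST_USD = 50e9
BASE_EFFICIENCY = 0.65
IMPROVED_EFFICIENCIES = (0.85, 0.90)
ASSUMED_NETWORKING_SHARE = 0.12  # hypothetical fraction of build cost

networking_spend = FACTORY_COST_USD * ASSUMED_NETWORKING_SHARE
for eff in IMPROVED_EFFICIENCIES:
    uplift_value = FACTORY_COST_USD * (eff - BASE_EFFICIENCY)
    print(f"At {eff:.0%} efficiency: uplift worth ~${uplift_value / 1e9:.1f}B of "
          f"effective capacity vs ~${networking_spend / 1e9:.1f}B networking spend")
```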
NVIDIA is the sole merchant supplier of InfiniBand networking equipment through its Quantum switch line (acquired via Mellanox in 2020), controlling an estimated 82% of InfiniBand port shipments. The latest Quantum-X800 XDR platform delivers 144 ports at 800Gbps with sub-100ns latency and 14.4 TFLOPS of in-network compute via SHARPv4 -- a 9x improvement over the prior NDR generation. Key deployments include Microsoft Azure's 4,608-GPU GB300 NVL72 cluster for OpenAI (the world's first large-scale Blackwell Ultra InfiniBand deployment) and Oracle Cloud's Zettascale superclusters.
Despite Ethernet overtaking InfiniBand in overall AI back-end network share (Ethernet captured over two-thirds of AI cluster switch sales in 2025), InfiniBand also grew strongly: Dell'Oro reported that InfiniBand switch sales 'surged' in Q2 2025, driven by Blackwell Ultra 800Gbps demand. The InfiniBand market was valued at $25.7B in 2025, with projections of $127B by 2030 (37.6% CAGR). NVIDIA benefits regardless of the InfiniBand vs Ethernet outcome since it owns both Quantum (InfiniBand) and Spectrum-X (Ethernet). The only meaningful InfiniBand competitor is Cornelis Networks (the Intel Omni-Path spinout), which targets price-sensitive HPC workloads with its CN5000 400Gbps platform but has negligible share in hyperscale AI.
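To make the in-network compute (SHARP) benefit concrete, a simplified textbook model of allreduce traffic: in a host-based ring allreduce each GPU injects roughly 2(N-1)/N of the message size across 2(N-1) steps, while an in-switch reduction tree lets each GPU send its gradients once and receive the reduced result once. The 4,608-GPU figure matches the Azure deployment above; the model size is illustrative, and this is a back-of-envelope approximation rather than NVIDIA's exact SHARPv4 behavior.

```python
# Simplified comparison of per-GPU injected traffic for one allreduce:
# ring allreduce (reduce-scatter + allgather) vs SHARP-style in-network
# reduction, where the switch aggregates and each GPU sends its data once.

def ring_allreduce_bytes_per_gpu(message_bytes: float, n_gpus: int) -> float:
    """Bytes each GPU sends in a standard ring allreduce."""
    return 2 * (n_gpus - 1) / n_gpus * message_bytes

def in_network_reduce_bytes_per_gpu(message_bytes: float) -> float:
    """Bytes each GPU sends when the reduction happens in the switch."""
    return message_bytes

N_GPUS = 4_608             # Azure GB300 NVL72 cluster size cited above
GRADIENT_BYTES = 2 * 70e9  # illustrative: 70B parameters in FP16

ring = ring_allreduce_bytes_per_gpu(GRADIENT_BYTES, N_GPUS)
sharp_style = in_network_reduce_bytes_per_gpu(GRADIENT_BYTES)

print(f"Ring allreduce:        {ring / 1e9:.0f} GB injected per GPU")
print(f"In-network reduction:  {sharp_style / 1e9:.0f} GB injected per GPU "
      f"({ring / sharp_style:.1f}x less per-GPU traffic)")
```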