
Thermal Management

Scenario anchor: $120/share (45% of VRT)

Thermal Management is Vertiv's highest-growth segment and the primary AI infrastructure play, generating ~$3.2B in FY2025 revenue with approximately 40% growth. AI GPU racks consume 40-100+ kW per rack versus 5-15 kW for traditional compute, creating a structural 3-5x increase in cooling requirements per data center. Vertiv has rapidly expanded its liquid cooling portfolio with productized solutions: CoolLoop Trim Coolers for rear-door cooling, 240kW CoolCenter Immersion systems, and CDUs (Coolant Distribution Units) up to 600kW. The company spent approximately $1B on Q4 2025 acquisitions, including PurgeRite, to scale liquid cooling capacity. Liquid cooling demand is accelerating at 25-40% annually, driven by NVIDIA GPU-powered AI infrastructure that cannot be adequately cooled by traditional air systems alone. The key debate is the pace of liquid cooling adoption: bulls see a rapid transition driven by power density requirements, while bears argue that air cooling improvements and hybrid approaches will slow the transition.
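The 3-5x facility-level figure follows from blending AI and traditional rack densities. A minimal back-of-envelope sketch using only the ranges quoted above; the 1:1 mapping of rack kW to cooling demand and the 50% AI floor mix are simplifying assumptions, not Vertiv disclosures:

```python
# Back-of-envelope cooling demand multiplier implied by the rack-density
# figures in the text. Assumes cooling demand scales 1:1 with rack kW.

TRADITIONAL_RACK_KW = (5, 15)    # traditional compute racks, per the text
AI_RACK_KW = (40, 100)           # AI GPU racks, per the text (can exceed 100)

def cooling_multiplier(ai_kw, trad_kw, ai_fraction=0.5):
    """Blended cooling-demand multiple for a facility shifting
    `ai_fraction` of its racks from traditional to AI densities."""
    blended = ai_fraction * ai_kw + (1 - ai_fraction) * trad_kw
    return blended / trad_kw

# Midpoint case: 70 kW AI racks vs 10 kW traditional, half the floor on AI
mult = cooling_multiplier(ai_kw=70, trad_kw=10, ai_fraction=0.5)
print(f"Blended cooling demand multiple: {mult:.1f}x")  # 4.0x, inside 3-5x
```

A pure-AI facility at these midpoints would imply 7x per rack; the 3-5x range in the text is consistent with a mixed floor.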

AI GPU racks consume 40-100+ kW per rack versus 5-15 kW for traditional compute, creating a 3-5x increase in cooling requirements

Vertiv Technical Specifications / Industry Analysis
Scenario Model: $120/share
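The anchor is internally checkable: if Thermal Management is worth $120/share and represents 45% of Vertiv's value, the implied whole-company figure follows directly. A minimal sketch using only the two figures stated above:

```python
# Implied total value per share from the segment anchor in the text.
segment_value_per_share = 120.0   # Thermal Management anchor, per the text
segment_share_of_total = 0.45     # 45% of VRT, per the text

implied_total_per_share = segment_value_per_share / segment_share_of_total
print(f"Implied VRT value: ${implied_total_per_share:.0f}/share")  # ~$267
```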

Liquid Cooling Transition


Liquid cooling is transitioning from a niche technology to a mainstream requirement for AI data centers. NVIDIA's GPU-powered AI servers generate heat densities that exceed the practical limits of air cooling at scale, pushing data center operators toward direct-to-chip liquid cooling, rear-door heat exchangers, and immersion cooling. Vertiv's CEO stated that liquid cooling capacity is growing 'really, really, really rapidly,' and the company has invested $1B+ in acquisitions to scale manufacturing. The transition has three phases: Phase 1 (2024-2026) sees early adopters deploying liquid cooling for AI clusters; Phase 2 (2027-2029) sees mainstream adoption as AI racks standardize at 50-100+ kW; Phase 3 (2030+) sees liquid cooling as default for all new data center builds. The risk is timing: if air cooling improvements extend the transition, or if data center operators delay liquid cooling adoption due to complexity, Vertiv's premium valuation for this segment may not be justified.

Vertiv CEO stated liquid cooling capacity is growing 'really, really, really rapidly' in response to AI data center demand

Vertiv CEO Commentary / Earnings Call

Data Center Power Density Shift


The shift to AI workloads is fundamentally changing data center power density. Traditional compute racks operate at 5-15 kW per rack, while AI GPU racks (NVIDIA DGX, HGX) consume 40-100+ kW per rack. Next-generation GPU platforms may push rack densities to 150 kW or higher. This power density increase has a multiplicative effect on infrastructure spend: higher power per rack means proportionally more cooling capacity, more power distribution equipment, more robust UPS systems, and more sophisticated monitoring. For Vertiv, this means the infrastructure spend per data center is increasing 3-5x for AI-focused facilities, even before accounting for the increase in total data center square footage being built.
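The multiplicative argument can be made concrete with a toy per-rack spend model. The cost weights below are purely illustrative assumptions (not Vertiv or industry figures); the only claim carried over from the text is that cooling, power distribution, UPS, and monitoring spend scale roughly with rack kW:

```python
# Toy model: per-rack infrastructure spend scaling with power density,
# assuming each component's spend is roughly linear in rack kW.
# Dollar weights are illustrative assumptions, not sourced figures.

ILLUSTRATIVE_SPEND_PER_KW = {     # $ per kW of rack power (assumed)
    "cooling": 900,
    "power_distribution": 600,
    "ups": 500,
    "monitoring": 100,
}

def infra_spend_per_rack(rack_kw):
    """Total infrastructure spend for one rack at a given power density."""
    return rack_kw * sum(ILLUSTRATIVE_SPEND_PER_KW.values())

traditional = infra_spend_per_rack(10)   # 10 kW traditional rack
ai = infra_spend_per_rack(70)            # 70 kW AI rack (midpoint of 40-100+)
print(f"Per-rack spend multiple: {ai / traditional:.0f}x")  # 7x
```

Under linear scaling a pure AI rack carries ~7x the infrastructure spend of a traditional rack; the 3-5x facility-level figure in the text reflects a blended floor of AI and traditional racks.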

AI GPU racks (NVIDIA DGX/HGX) consume 40-100+ kW per rack, with next-generation platforms potentially exceeding 150 kW per rack

NVIDIA/Data Center Industry Specifications