The data center liquid cooling market is projected to reach roughly USD 5.58 billion by April 2026, with hyperscale cloud providers such as Microsoft and AWS increasingly requiring Direct‑to‑Chip (DLC) cooling for new AI‑heavy infrastructure. As GPU‑driven workloads push racks toward 60–100 kW densities, traditional air cooling is consistently hitting thermal limits, forcing enterprises to treat liquid‑cooled server configurations as the de facto standard for high‑density AI environments.
Why is liquid cooling shifting from option to operational necessity?
Liquid cooling has become an operational necessity because AI and large‑scale HPC workloads demand rack densities that exceed what air‑cooled infrastructures can reliably manage. At 60–100 kW per rack, exhaust temperatures and fan power rise steeply, making air‑only cooling both economically and physically unsustainable. Liquid‑based systems, especially Direct‑to‑Chip (DLC), remove heat directly at the source, enabling sustained high utilization without throttling or oversized cooling plants.
- Direct‑to‑Chip (DLC) cooling targets CPUs and GPUs directly, avoiding hot‑air mixing and enabling higher GPU counts per rack.
- High‑density racks powered by servers such as the Dell PowerEdge XE9680‑class and HPE DL3x0 Gen11 platforms require such targeted cooling to maintain stability.
- Energy efficiency improves via reduced fan power, smaller PDUs, and lower auxiliary cooling loads, which directly improves PUE and total cost of ownership for AI‑focused data centers.
Leading cloud providers now treat DLC as a baseline requirement for AI infrastructure, signaling that liquid cooling is no longer an add‑on but a core operational constraint for any organization scaling GPU‑centric analytics or generative AI.
How does Direct‑to‑Chip (DLC) liquid cooling work in servers?
Direct‑to‑Chip (DLC) liquid cooling routes coolant—typically water or dielectric fluid—through cold plates mounted directly onto CPUs and GPUs, extracting heat before it spreads into the chassis or room. Each server connects to a closed‑loop that feeds into a rack‑level manifold, which then routes to a facility‑wide coolant distribution unit (CDU) and heat‑rejection system, often via external chillers or dry‑coolers.
- Cold plates sit atop the processor or GPU die, with internal channels that absorb heat through conduction and transfer it to the flowing coolant.
- Coolant leaves the server, passes through the rack manifold and CDU, then returns at a lower temperature, creating a stable thermal loop.
- This approach keeps component temperatures tightly controlled even under sustained high‑frequency workloads, which is critical for AI training and inference.
DLC is especially relevant for WECENT‑delivered AI‑ready platforms like the Dell PowerEdge XE9680 and HPE DL320 Gen11, where dense multi‑socket CPUs and 8‑GPU configurations generate concentrated hotspots that air cooling cannot remove efficiently.
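As a rough illustration of the thermal loop described above, the coolant flow a cold‑plate loop needs can be estimated from the heat‑transfer relation Q = ṁ · c_p · ΔT. The following is a minimal sketch, not OEM sizing guidance; the 100 kW rack load and 10 K loop temperature rise are assumed illustrative values:

```python
# Estimate the water flow rate a DLC loop needs to carry a given heat load.
# Assumes water coolant; rack load and temperature rise are illustrative.

def required_flow_lpm(heat_load_w: float, delta_t_k: float,
                      cp_j_per_kg_k: float = 4186.0,
                      density_kg_per_l: float = 1.0) -> float:
    """Return coolant flow in liters per minute for a given heat load.

    Uses Q = m_dot * c_p * delta_T, solved for mass flow, then converts
    kg/s to L/min (water: ~1 kg per liter).
    """
    mass_flow_kg_s = heat_load_w / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l * 60.0

if __name__ == "__main__":
    # A hypothetical 100 kW rack with a 10 K supply/return temperature rise
    flow = required_flow_lpm(100_000, 10.0)
    print(f"{flow:.0f} L/min")  # roughly 143 L/min for the whole rack
```

The same relation explains why loop ΔT matters: warm‑water designs that tolerate a larger temperature rise need proportionally less flow and smaller pumps.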
What are the thermal limits of air‑cooled high‑density racks?
Air‑cooled racks are generally constrained to about 15–25 kW per rack in typical legacy and mid‑tier data centers, limited by cooling‑air delivery, fan capacity, and raised‑floor or CRAC constraints. Beyond roughly 40–50 kW/rack, standard air‑cooling schemes struggle to maintain safe inlet temperatures, leading to throttling, hotspots, and degraded reliability. In AI‑heavy environments, racks now target 60–100 kW, which is far beyond what air cooling can manage without disproportionate energy and infrastructure overhead.
- Hot‑aisle containment and high‑speed fans can extend air‑cooling to perhaps 40–50 kW, but this increases noise, static pressure, and power consumption.
- At 60–100 kW, exhaust temperatures can exceed 50–60°C, forcing oversizing of chillers and air‑handling units, which becomes economically unviable at scale.
- For AI servers such as the Dell PowerEdge XE9680 (8‑GPU, 6U air‑cooled) and HPE ProLiant DL3x0 Gen11‑class systems, liquid‑based solutions are increasingly required to avoid thermal throttling and maintain consistent performance.
For customers building AI or HPC clusters, WECENT’s engineering team can help determine the exact kW‑per‑rack threshold where air cooling ceases to be viable and where a liquid‑cooling retrofit or new fluid‑ready design becomes operationally necessary.
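The air‑side limits above follow directly from the low volumetric heat capacity of air. A minimal sketch comparing the airflow required to remove different rack loads; the 15 K server air‑temperature rise and sea‑level air properties are assumed typical values:

```python
# Compare airflow required to remove a rack heat load at various densities.
# Air properties and the 15 K delta-T across the servers are assumptions.

AIR_DENSITY = 1.2        # kg/m^3, near sea level at room temperature
AIR_CP = 1005.0          # J/(kg*K)
M3S_TO_CFM = 2118.88     # cubic meters per second -> cubic feet per minute

def airflow_cfm(heat_load_w: float, delta_t_k: float = 15.0) -> float:
    """Airflow (CFM) needed so that Q = rho * V_dot * c_p * delta_T."""
    v_dot_m3s = heat_load_w / (AIR_DENSITY * AIR_CP * delta_t_k)
    return v_dot_m3s * M3S_TO_CFM

if __name__ == "__main__":
    for kw in (25, 50, 100):
        print(f"{kw:>3} kW rack -> {airflow_cfm(kw * 1000):,.0f} CFM")
```

At 100 kW this works out to well over 11,000 CFM through a single cabinet, which illustrates why fan power and static pressure become the binding constraints long before the chillers do.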
Which server platforms are most impacted by the liquid‑cooling shift?
GPU‑dense AI and HPC servers face the sharpest thermal and efficiency pressure, including 2‑socket, multi‑GPU rack systems and 4‑socket, memory‑heavy compute platforms. The Dell PowerEdge XE9680 (8‑GPU AI training server) and HPE ProLiant DL320 Gen11 are prime examples of platforms already optimized for, or evolving toward, liquid‑cooling readiness. These systems concentrate dozens of high‑TDP cores and multiple GPUs in a small footprint, generating heat densities that air cooling alone cannot sustain at 100% utilization.
- Dell PowerEdge XE9680‑class servers (6U, 8 GPUs) are designed as AI‑training workhorses; the XE9680L variant is explicitly liquid‑cooled and can achieve densities up to around 100 kW per rack using Direct‑to‑Chip and rack‑scale cooling.
- HPE ProLiant DL3x0 Gen11‑series rack servers, including the DL360 and DL560, support closed‑loop liquid‑cooling options for high‑TDP CPUs, making them suitable anchors for hybrid liquid‑air AI clusters.
WECENT supplies and integrates these and similar platforms as part of AI‑ready, liquid‑ready server stacks, helping customers move from proof‑of‑concept to production‑grade AI infrastructure without redesigning their rack philosophy mid‑migration.
Example AI‑ready server types benefiting from liquid cooling
WECENT works with OEM‑certified liquid‑cooling partners to ensure that every Dell‑ and HPE‑based AI server shipped under our catalog meets original‑equipment thermal and reliability standards, even when liquid‑converted at the rack or facility level.
How does liquid cooling improve energy efficiency in AI data centers?
Liquid cooling improves energy efficiency by reducing or eliminating the massive fan power required in air‑cooled racks and lowering the energy that chillers and CRAC units spend moving heat. Because water has roughly 3,000 times the volumetric heat capacity of air, it can remove the same heat with far less flow and far smaller pumps, directly lowering the data center's PUE and operating‑cost profile. For AI‑heavy facilities, this can translate into 20–30% reductions in cooling‑related energy versus strictly air‑cooled designs.
- Direct‑to‑Chip systems keep CPU and GPU temperatures lower and more stable, enabling higher sustained clocks and reducing power‑hungry throttling cycles.
- High‑density racks (60–100 kW) become practical because heat is removed at the source instead of being mixed with air, enabling denser deployments without proportional increases in cooling‑plant size.
- Warm‑water or hot‑water designs can reject heat via dry‑coolers or district‑heating loops, further cutting chiller and compressor load while aligning with sustainability targets.
WECENT’s consulting engineers help customers model PUE and kW‑per‑rack scenarios before and after liquid‑cooling deployment, so integrators and brands can justify the upfront capex of liquid‑based infrastructure with clear TCO and carbon‑footprint improvements.
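The kind of before/after PUE modeling described above reduces to simple arithmetic on facility power. A minimal sketch; the 1 MW IT load, cooling loads, and the assumed 30% cooling‑energy saving are illustrative numbers, not measured data:

```python
# Rough before/after PUE comparison for an air-cooled vs liquid-cooled hall.
# All inputs are illustrative assumptions for a nominal 1 MW IT load.

def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """PUE = total facility power / IT power."""
    return (it_kw + cooling_kw + other_kw) / it_kw

if __name__ == "__main__":
    it, other = 1000.0, 100.0
    air_cooling = 400.0                  # assumed air-cooled cooling load
    liquid_cooling = air_cooling * 0.70  # assume ~30% cooling-energy savings
    print(f"air PUE:    {pue(it, air_cooling, other):.2f}")     # 1.50
    print(f"liquid PUE: {pue(it, liquid_cooling, other):.2f}")  # 1.38
```

Even this toy model shows the shape of the TCO argument: cooling energy is the largest controllable overhead, so reducing it moves PUE directly.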
Why must AI‑ready servers like the Dell PowerEdge XE9680 and HPE DL320 Gen11 adopt liquid cooling?
The Dell PowerEdge XE9680 and HPE DL320 Gen11 are designed to host 8‑GPU and multi‑CPU configurations that can easily exceed 300–400 W per socket and 300–700 W per GPU, depending on the accelerator generation. In air‑cooled mode, these platforms rely on multiple high‑speed fans and complex airflow paths, but as organizations scale racks to 60–100 kW and run AI training at 80–100% utilization, thermal limits and fan noise become unmanageable. Liquid cooling, especially DLC, becomes mandatory to avoid:
- Thermal throttling that reduces effective TFLOPS under sustained load.
- Fan‑limited air delivery, which forces lower rack densities or higher cooling‑plant power.
- Unreliable thermal behavior in mixed‑load environments where bursts from AI training strain legacy air‑cooling systems.
By aligning with OEM‑mandated DLC and liquid‑ready designs, WECENT ensures that every Dell PowerEdge XE9680‑series and HPE DL320 Gen11 deployment shipped under our brand can scale into AI‑heavy, high‑density use cases without redesigning the entire rack or cooling infrastructure later.
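The per‑rack arithmetic behind these figures is straightforward to sketch. The component wattages and server count below are assumed round numbers for illustration, not Dell or HPE specifications:

```python
# Back-of-envelope rack power budget for a multi-GPU AI server rack.
# All wattages are assumed round numbers, not vendor specifications.

def server_watts(gpus: int, gpu_w: float, cpus: int, cpu_w: float,
                 overhead_w: float) -> float:
    """Total server draw: accelerators + sockets + memory/NIC/fan overhead."""
    return gpus * gpu_w + cpus * cpu_w + overhead_w

def rack_kw(servers: int, watts_each: float) -> float:
    """Total rack IT load in kW."""
    return servers * watts_each / 1000.0

if __name__ == "__main__":
    # Hypothetical 8-GPU server: 700 W GPUs, 350 W CPUs, 1.5 kW overhead
    per_server = server_watts(gpus=8, gpu_w=700, cpus=2, cpu_w=350,
                              overhead_w=1500)
    print(f"per server:    {per_server / 1000:.1f} kW")       # 7.8 kW
    print(f"8-server rack: {rack_kw(8, per_server):.1f} kW")  # 62.4 kW
```

Even with conservative assumptions, eight such servers land a rack above 60 kW, which is exactly the regime where the article places the air‑to‑liquid crossover.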
How can WECENT help enterprises transition to liquid‑cooled AI servers?
WECENT acts as a full‑stack IT equipment supplier and authorized agent for Dell, HPE, and other leading brands, with deep expertise in AI‑ready server configurations, liquid‑cooling readiness, and custom rack integration. WECENT helps customers:
- Select the right platform (e.g., Dell XE9680 vs. XE9680L, HPE DL360 Gen11 with closed‑loop liquid‑cooling) based on target kW‑per‑rack and GPU density.
- Design fluid‑ready rack layouts that integrate manifolds, CDUs, and facility‑level cooling loops without over‑provisioning.
- Provide OEM‑certified parts and service, including liquid‑cooling kits, leak‑tested loops, and long‑term maintenance support from WECENT‑trained engineers.
Because WECENT also offers NVIDIA Tesla‑ and H‑series GPUs, Dell PowerStore, PowerFlex, and EMC‑line storage alongside servers, WECENT can deliver end‑to‑end AI‑infrastructure stacks pre‑qualified for liquid‑cooled environments, from rack‑and‑power planning to on‑site deployment and remote monitoring.
WECENT Expert Views
“Liquid cooling is no longer just a niche HPC feature; it has become the foundational thermal strategy for any organization planning AI‑heavy racks above 40 kW,” says a senior infrastructure architect at WECENT. “Direct‑to‑Chip cooling allows us to keep Dell PowerEdge XE9680‑class and HPE DL320 Gen11 platforms running at full GPU utilization without spikes in PUE or acoustic discomfort. For system integrators and brand owners, WECENT’s role is to de‑risk the transition: we ensure every liquid‑cooled rack is OEM‑compatible, leak‑tested, and reliably supported under our global warranty network.”
When should an organization move from air‑cooled to liquid‑cooled racks?
Organizations should consider moving to liquid cooling when they observe any of these triggers:
- Rack densities approaching or exceeding 40–50 kW per cabinet, especially with multiple 8‑GPU AI servers.
- Repeated reports of thermal throttling, fan errors, or elevated inlet temperatures in air‑cooled AI or HPC racks.
- Plans to scale Gen‑AI or LLM training workloads that require sustained high‑GPU utilization.
For many customers, the sweet spot is a hybrid transition: keep 1U–2U legacy servers air‑cooled while dedicating new AI racks to DLC‑ready platforms such as the Dell PowerEdge XE9680L and HPE DL3x0 Gen11. WECENT can help design a phased rollout that extends existing air‑cooled investments while reserving new rack space for liquid‑ready AI clusters.
Where should liquid cooling be implemented—chip, rack, or room level?
- Chip‑level (Direct‑to‑Chip): Best for AI‑heavy, GPU‑centric racks where per‑chip heat must be removed efficiently. This is now standard for hyperscaler AI facilities.
- Rack‑level (in‑rack loop, rear‑door heat exchangers): Useful as a bridge for existing air‑cooled data centers that want to push past 40–50 kW/rack without a full facility redesign.
- Room‑level (immersion cooling, large‑scale loops): Typically reserved for specialized HPC or hyperscale AI campuses with very high homogeneity and long‑term planning.
For most enterprise and mid‑tier customers, a chip‑ plus rack‑level approach using OEM‑certified DLC manifolds on Dell‑ and HPE‑brand servers, supported by WECENT’s integration and warranty services, delivers the best balance of performance, reliability, and cost.
How can you choose the right liquid‑cooling solution for your AI stack?
Choosing the right liquid‑cooling solution depends on rack density, budget, and facility readiness:
- If you are building new AI racks from scratch, prioritize Direct‑to‑Chip‑ready servers (e.g., Dell XE9680L, HPE DL3x0 Gen11) and partner with WECENT to design a rack‑ and CDU‑compatible loop.
- If you are retrofitting an existing air‑cooled data center, explore rack‑level loops or rear‑door heat exchangers first, then migrate the most demanding AI racks to full DLC.
- If you operate mixed‑workload environments, consider hybrid cooling—air for legacy systems and liquid for AI‑heavy racks—orchestrated by WECENT's engineering and deployment teams.
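The selection logic above can be sketched as a small decision helper. The thresholds and recommendation strings are assumptions chosen for illustration, not WECENT sizing guidance:

```python
# Toy decision helper mirroring the guidance above; thresholds are assumptions.

def recommend_cooling(rack_kw: float, new_build: bool,
                      mixed_workloads: bool = False) -> str:
    """Map rack density and deployment context to a cooling approach."""
    if mixed_workloads:
        return "hybrid: air for legacy racks, DLC for AI racks"
    if new_build and rack_kw >= 50:
        return "direct-to-chip (DLC) with rack manifolds and a CDU"
    if rack_kw >= 40:
        return "rack-level loop or rear-door heat exchanger, then migrate to DLC"
    return "air cooling remains viable"

if __name__ == "__main__":
    print(recommend_cooling(80, new_build=True))
    print(recommend_cooling(45, new_build=False))
    print(recommend_cooling(30, new_build=False, mixed_workloads=True))
```

In practice the crossover point depends on facility constraints (floor loading, chilled‑water availability, containment), so a helper like this is only a starting point for an engineering assessment.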
WECENT’s catalog and technical support ensure that every cooling solution remains brand‑authorized, leak‑tested, and maintainable under a single point of contact, minimizing operational risk as you scale AI infrastructure.
Key takeaways and actionable advice
Liquid cooling has shifted from an optional enhancement to an operational necessity for any organization planning high‑density AI workloads on modern GPU‑server platforms. Direct‑to‑Chip (DLC) cooling is now the baseline for 60–100 kW racks, enabling stable performance, lower PUE, and fewer thermal failures. To implement this shift effectively, organizations should:
- Audit current rack‑level densities and GPU utilization to identify where air‑cooling is hitting its limits.
- Select DLC‑ready or liquid‑optimized servers such as the Dell PowerEdge XE9680L and HPE DL3x0 Gen11, sourcing them through a certified supplier like WECENT to ensure warranty and support.
- Plan a hybrid transition path that preserves air‑cooled legacy racks while building new AI racks with chip‑ and rack‑level liquid cooling.
- Partner with an end‑to‑end infrastructure provider that can supply servers, GPUs, storage, and liquid‑ready cooling under one support umbrella, simplifying operations and reducing downtime.
By treating liquid cooling as a core infrastructure requirement rather than a late‑stage optimization, enterprises can scale AI with greater confidence, lower energy costs, and tighter integration between compute and cooling.
FAQs
Why is Direct‑to‑Chip cooling becoming mandatory for AI racks?
Direct‑to‑Chip cooling removes heat directly from CPUs and GPUs before it spreads into the chassis, enabling stable operation at 60–100 kW rack densities. This is essential for hyperscalers and enterprises running AI workloads that would otherwise throttle under air‑only cooling.
Can the Dell PowerEdge XE9680 and HPE DL320 Gen11 still use air cooling?
Yes; both platforms ship in air‑cooled variants, but as AI workloads scale toward higher rack densities and GPU counts, liquid‑cooling options become necessary to maintain full‑load performance and thermal stability.
How does liquid cooling reduce energy costs in AI data centers?
Liquid cooling reduces fan power and chiller load, improving PUE and lowering total energy consumption by up to 20–30% compared with equivalent air‑cooled designs, especially at 60–100 kW per rack.
What role does WECENT play in liquid‑cooled AI infrastructure?
WECENT acts as an authorized IT equipment supplier and custom integration partner, providing OEM‑certified Dell, HPE, and NVIDIA‑based servers, GPUs, and liquid‑ready cooling solutions, plus end‑to‑end deployment, maintenance, and warranty support.
When should a company start planning for liquid‑cooled racks?
Companies should start planning when rack densities approach 40–50 kW, when AI training or LLM workloads require sustained high‑GPU utilization, or when existing air‑cooled racks show thermal or fan‑related issues.