2020 Trends in Data Center Enclosures
March 2, 2020
According to Forbes, U.S. data centers use more than 90 billion kilowatt-hours of electricity each year, the output of roughly 34 giant (500-megawatt) coal-fired power plants. Globally, data centers use about 40% more electricity than the entire United Kingdom.
That’s a lot of electricity, and a lot of heat being generated in the process.
Cooling is a critical component of the data center ecosystem – and a major line item in a data center’s budget. In fact, as much as 40% of all energy used in some data centers goes directly to cooling.
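To put that share in perspective, here is a back-of-the-envelope estimate; the facility size and electricity rate are illustrative assumptions, not figures from this article:

```python
# Back-of-the-envelope estimate of annual cooling cost for a data center.
# The facility draw and electricity rate below are illustrative assumptions.
total_load_kw = 1_000    # assumed facility draw: 1 MW total
cooling_share = 0.40     # "as much as 40% of all energy" goes to cooling
price_per_kwh = 0.10     # assumed electricity rate, USD/kWh
hours_per_year = 8_760

cooling_kwh = total_load_kw * cooling_share * hours_per_year
print(f"Cooling energy: {cooling_kwh:,.0f} kWh/year")   # 3,504,000 kWh/year
print(f"Cooling cost:   ${cooling_kwh * price_per_kwh:,.0f}/year")  # $350,400/year
```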
But without proper cooling, the systems within data centers can fail. If a server runs too hot (even if not hot enough to shut itself down), the data center can experience downtime, delays, latency, and even hardware failure.
Cooling is becoming an even greater issue as packing densities and processor capacity grow. According to 451 Research (2019), 45% of the companies surveyed said they expect to see an average density of 11 kW per rack or higher at some point in 2020. That’s a significant hike since 2014, when just 18% of the survey’s respondents reported densities beyond 10 kW.
Traditional Water-based Cooling Options
For decades, data centers have used chilled water systems to deliver cold air to their racks.
Chilled water systems. This type of cooling system is commonly used in mid- to large-sized data centers; it uses chilled water to cool the air being moved by computer room air handlers (CRAHs). Its functionality is simple: chilled water flows through a cooling coil inside the unit, and fans draw air across the coil. Because the water itself can be chilled using outside air, CRAH-based systems are much more efficient in locations with colder annual temperatures. Computer room air conditioners (CRACs) work on the same principle, but instead of chilled water they use a compressor-driven refrigeration cycle, with fans drawing air across a refrigerant-filled cooling coil.
Neither of these methods is very energy-efficient, but both have remained popular because the equipment itself is relatively inexpensive.
A related configuration is the raised floor cooling system. In this option, cold air from a CRAH or CRAC is forced into the plenum beneath the data center’s raised floor. Perforated tiles allow the cold air to enter the space in front of server intakes. The air passes through the servers and, now heated, returns to the CRAC/CRAH to be cooled again.
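To see why rising rack densities strain these air-based systems, it helps to estimate the airflow needed to carry a rack’s heat away. The sketch below uses the standard sensible-heat relation (heat = mass flow × specific heat × temperature rise); the rack powers and the 12 °C intake-to-exhaust rise are illustrative assumptions:

```python
# Estimate the airflow required to carry away a rack's heat load
# via the sensible-heat relation Q = m_dot * cp * dT.
# Rack powers and temperature rise are illustrative assumptions.
AIR_DENSITY = 1.2   # kg/m^3, near sea level
AIR_CP = 1_005      # J/(kg*K), specific heat of air

def airflow_m3_per_s(rack_kw: float, delta_t_c: float) -> float:
    """Volumetric airflow needed to absorb rack_kw with a delta_t_c rise."""
    mass_flow = rack_kw * 1_000 / (AIR_CP * delta_t_c)  # kg/s
    return mass_flow / AIR_DENSITY                      # m^3/s

for rack_kw in (5, 11, 30):  # 11 kW matches the 451 Research expectation above
    flow = airflow_m3_per_s(rack_kw, delta_t_c=12)
    print(f"{rack_kw:>2} kW rack -> {flow:.2f} m^3/s (~{flow * 2119:,.0f} CFM)")
```

At 30 kW per rack, the required airflow is several times what a typical perforated tile can deliver, which is why high-density racks push operators toward liquid.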
Some of the fundamental problems with chilled water systems are their high energy costs, potential mechanical failure, significant space requirements, and the fact that these systems introduce moisture into the data center – a real issue for hardware performance.
Until recently, data centers had no other options for cooling down their racks. But with developments in liquid cooling, many data centers are beginning to try new methods for solving their ongoing – and growing – heat problems.
Today’s Liquid Cooling
Chilled water systems use liquid to absorb and carry heat out of the facility, but they still rely on air to move heat away from the hardware itself; liquid cooling takes that heat directly from the hardware with liquid, a much more efficient and effective approach. According to the Center of Expertise for Energy Efficiency in Data Centers, liquid cooling helps reduce energy usage because liquids have a much larger heat capacity than air. Air cooling requires a great deal of power and introduces both pollutants and condensation into the data center; liquid cooling is cleaner, more targeted, and more scalable.
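Just how large that capacity gap is can be shown with round textbook property values; the figures below are approximations for air and water near room temperature, not numbers from the article:

```python
# Compare how much heat a given volume of air vs. water can carry
# per 1 degC temperature rise. Property values are round textbook
# approximations for roughly 25 degC.
air_density, air_cp     = 1.2, 1_005     # kg/m^3, J/(kg*K)
water_density, water_cp = 997.0, 4_181   # kg/m^3, J/(kg*K)

air_j_per_m3   = air_density * air_cp      # J per m^3 per K
water_j_per_m3 = water_density * water_cp

print(f"Air:   {air_j_per_m3:,.0f} J/(m^3*K)")
print(f"Water: {water_j_per_m3:,.0f} J/(m^3*K)")
print(f"Water carries ~{water_j_per_m3 / air_j_per_m3:,.0f}x more heat per unit volume")
```

By this rough measure, water carries on the order of 3,400 times more heat than the same volume of air, which is the physical basis for the efficiency claims above.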
Newer Technologies
Direct-to-chip cooling. This liquid cooling method uses pipes to bring coolant to a cold plate mounted directly on a server’s processors, drawing off their heat. Because this system cools processors directly, it’s one of the most effective forms of cooling; the downside, though, is that only a portion of the server’s components are liquid-cooled, so fans are still needed for the system to operate.
Evaporative cooling. This kind of system manages temperature by exposing hot air to water, causing the water to evaporate and pull the heat out of the air. The average cost of evaporative cooling is about 25% that of traditional HVAC systems (so low, in fact, that this method is sometimes called “free cooling”). While this system is very energy-efficient (it doesn’t use CRAC or CRAH units), it requires a lot of water. The option is also called swamp cooling, and for good reason: evaporative cooling brings humidity into the data center, a potential risk to equipment performance.
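The trade-off just described (very low energy use, high water consumption) follows from the physics of evaporation: each kilogram of water that evaporates absorbs roughly 2,260 kJ of heat. A minimal sketch, with the heat load as an illustrative assumption:

```python
# Estimate water evaporated to remove a given heat load (Q = m * L_vap).
LATENT_HEAT_KJ_PER_KG = 2_260   # latent heat of vaporization of water, approx.

def water_liters_per_hour(heat_kw: float) -> float:
    """Liters of water evaporated per hour to absorb heat_kw of heat."""
    kg_per_s = heat_kw / LATENT_HEAT_KJ_PER_KG  # kW = kJ/s
    return kg_per_s * 3_600                     # 1 kg of water is ~1 liter

heat_kw = 500   # assumed data center heat load, for illustration
print(f"{heat_kw} kW load -> ~{water_liters_per_hour(heat_kw):,.0f} L/hour evaporated")
```

For a 500 kW load, that works out to roughly 800 liters of water per hour, which illustrates why water supply becomes the limiting factor.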
Immersion systems involve submerging the hardware itself in a bath of non-conductive, non-flammable dielectric fluid. Dielectric fluid absorbs heat more efficiently than air, and in two-phase systems the heated fluid boils off as vapor, condenses, and falls back into the bath to continue the cooling cycle. The downsides of immersion cooling include the open tank design (removing the coolant from the servers is labor-intensive if you decide to switch to another method), the weight of the fluid-filled tanks, their large footprint, and the serviceability of the system.
Evolving Cooling Needs…
These are some of the data and hardware advancements that are necessitating new, more efficient and effective cooling solutions:
- Accelerators. In recent years, in part because CPU performance growth has slowed, accelerator processors – mainly GPUs – are becoming more common in enterprise data centers. Accelerators are being used to enhance performance in online data mining, analytics, engineering simulation, video, live media and other latency-sensitive services
- Increased rack density. Most data centers that Uptime Institute tracks now have some racks that are over 10 kW, and 20% have at least one rack at 30 kW or higher
- Increase in the number of solid-state drives (SSDs), which can be cooled with immersion solutions
- Helium-filled HDD storage hardware. The newest generation of helium-filled HDDs must be hermetically sealed to retain the helium, which makes the drives suitable for liquid cooling
- Edge computing. Factory floors, retail sites, wireless towers and other settings require reduced latency, driving demand for data centers placed right where the data is being generated and used. In many of these scenarios, traditional cooling options aren’t available or appropriate
…and Emerging Technologies
Other future innovations rely on smart assistant/AI technologies. It’s been reported that data centers use 75% more cooling than they actually need. If true, a smart assistant that monitors heat and humidity within the cabinet and tells data center staff when, and how much, cooling is actually needed could help them save on energy costs. This kind of solution uses smart cooling and machine learning to read CPU and GPU temperatures and trigger cooling only as needed.
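As a minimal sketch of that idea, assuming hypothetical sensor and actuator hooks (read_sensor, set_cooling_duty, and alert_staff are placeholder names, not a real monitoring API):

```python
# Hypothetical sketch of a temperature-driven cooling trigger with hysteresis.
# read_sensor(), set_cooling_duty(), and alert_staff() are placeholder
# callables supplied by the caller, not any particular vendor's API.
import time

TRIGGER_C = 32.0   # boost cooling above this cabinet temperature (assumed)
RELEASE_C = 27.0   # back off below this temperature (hysteresis band, assumed)

def control_loop(read_sensor, set_cooling_duty, alert_staff, poll_s=30):
    boosted = False
    while True:
        temp_c = read_sensor()            # e.g., hottest CPU/GPU or cabinet probe
        if temp_c > TRIGGER_C and not boosted:
            set_cooling_duty(1.0)         # run full cooling only when needed
            alert_staff(f"Cabinet at {temp_c:.1f} C; cooling boosted")
            boosted = True
        elif temp_c < RELEASE_C and boosted:
            set_cooling_duty(0.3)         # drop back to a low baseline duty
            boosted = False
        time.sleep(poll_s)
```

The hysteresis band (trigger high, release low) keeps the system from rapidly cycling the cooling hardware as temperatures hover near a single threshold.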
Conclusion
It’s not uncommon for a data center’s cooling system to use as much energy as, or more than, the servers and other equipment it contains. With the surge in density and processor capacity, there’s pressure on data center managers to address the massive amount of heat being generated. Smart advancements in technology have made it possible to reduce those costs while ensuring optimum performance of the servers, and partnering with an enclosure manufacturer with multiple cooling options will enhance those savings.
With Rittal liquid cooling systems, data centers can easily reduce their energy costs. Thanks to intelligent control and the flexibility to add additional fans, partial-load efficiencies can deliver energy savings of up to 50% at the same volumetric flow and constant cooling output. With optimized operating costs, Rittal’s Liquid Cooling Packages (LCPs) precisely and effortlessly dissipate heat losses of up to 60 kW per enclosure.
https://www.rittal.us/contents/trends-in-data-center-enclosures/