
Scalability is the Last Hurdle for Liquid Cooling to Change the Game for 50 GW Worth of Data Centers

Late in 2014, Hong Kong-based startup Allied Control announced that its immersion-cooled server containers offer 1.4 MW of server capacity and deliver a power usage effectiveness (PUE) of less than 1.01 (cooling only). PUE is the ratio of a data center's total energy draw to the energy delivered to its IT equipment, and it is the de facto standard for measuring data center efficiency; the theoretical optimum is 1.0. As an example, Facebook's average facility PUE is 1.08-1.1, whereas the industry average is approximately 60% higher (see the report "Blowing Hot Air – Uncovering Opportunities to Cool the World's Data Centers" — client registration required). Allied Control also stated that the containers can handle a 500 kW heat load while allowing for a high density of server racks (W/ft²). We alluded to the potential of liquid cooling technologies for data centers early in 2014 (refer to the analyst insight "Data centers need to get over their hot air" — client registration required), but with more competitors coming to the fore, an in-depth comparison is warranted.
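
To make the metric concrete, the short sketch below computes both a cooling-only PUE and a facility PUE from the same set of hypothetical power figures (round numbers for illustration only, not Allied Control's or Facebook's measured data).

```python
# Minimal PUE sketch using hypothetical figures (not vendor data).
# PUE = total energy crossing the chosen boundary / energy delivered to IT equipment.

it_load_kw = 1000.0          # power drawn by servers, storage, and network gear
cooling_kw = 80.0            # power drawn by the cooling plant
distribution_loss_kw = 20.0  # UPS, cabling, and other electrical losses
lighting_misc_kw = 10.0      # lighting and miscellaneous building loads

cooling_pue = (it_load_kw + cooling_kw) / it_load_kw
facility_pue = (it_load_kw + cooling_kw + distribution_loss_kw + lighting_misc_kw) / it_load_kw

print(f"Cooling PUE:  {cooling_pue:.2f}")   # 1.08 -- counts cooling energy only
print(f"Facility PUE: {facility_pue:.2f}")  # 1.11 -- counts all facility overhead
```

An immersion-cooled container can thus post a cooling PUE near 1.01 while its facility PUE, once electrical losses are counted, sits noticeably higher.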

At present, there are more than 3,500 data centers in 103 countries (see Data Centers Map's study) that together demand approximately 50 GW of electricity globally (40% more than New York City on the hottest day of the year). As a result, operators are scrambling to reduce the facility PUEs of their data centers, as energy is a much higher share of operating cost (OpEx) for this building segment than for office buildings, for example. While operating, servers in data centers produce heat that must be extracted to keep them running – akin to a power plant whose reactor vessel must constantly be cooled.

Allied Control's PUE claims appear well ahead of even Google's; however, it is important to differentiate between cooling PUE and facility PUE. The two values differ in where the boundary around energy use is drawn: cooling PUE counts only cooling energy, while facility PUE also captures electrical losses in distribution cabling and even at the substation. Google, which operates some of the largest data centers, has cut its data centers' PUE drastically over time, from 1.21 to 1.12 (for facilities larger than 1 MW), keeping well below the industry average of 1.8-1.89 (see figure 3, "Power Usage Effectiveness (PUE) is Trending down Year On Year" — client registration required). Google achieves these low PUEs with strategies such as hot-aisle containment, air-based free cooling, and direct evaporative cooling. On the other hand, small-scale data centers (smaller than 1 MW) often use refrigerant-based computer room air conditioning (CRAC) systems (see the report "Blowing Hot Air – Uncovering Opportunities to Cool the World's Data Centers" — client registration required), which drive PUEs much higher. These systems are also costly and support only around 100 W/ft² on average, and a further issue with such direct-expansion (DX) units is their limited scalability.

Over the past three months, we profiled companies that provide liquid-based cooling systems with claims of reduced PUE and cost, as well as increased capacity, density, and scalability. Rather than using air as the heat transfer medium, these systems use fluid to extract heat from servers via heat-exchanging plates, or submerge all electronic components (server motherboards) in a dielectric fluid. Fluid cooling works for a manufacturing facility or a power station, so on the surface the approach seems reasonable for a data center. The performance of the liquid cooling products, however, is all over the map, as we show in the table below.

[Table (Insight_1_18_15): comparison of the profiled liquid cooling providers' claimed cooling PUE, maximum coolant temperature, capacity, density, pricing, and usable floor-space increase]

1 Compared to air-based CRAC systems
2 Optimistic claim per our estimate
3 Based on case studies
4 Theoretical

First, the companies' heat transfer media (dielectric fluids) have different characteristics. Iceotope (client registration required) relies on 3M's dielectric fluid, branded "Novec," to submerge the motherboards, whereas others use proprietary dielectric fluids. When we spoke with executives at these startups, companies like LiquidCool Solutions (client registration required) and Green Revolution Cooling (GRC) (client registration required) claimed that their proprietary fluids perform better than Novec; however, they could not substantiate these claims. "Max. Coolant Temperature" is the maximum temperature of the cooling loop into which server heat is dumped, and it varies widely among these companies. The 8 °C gap (between 45 °C and 53 °C) is an important difference because systems that tolerate a higher maximum coolant temperature require less cooling energy input, which directly lowers cooling PUE. Despite having the highest maximum coolant temperature, however, GRC reports the highest PUE among these competitors. It is important to note that Iceotope is one of the few that has actually installed pilot projects and calculated its PUE from a case study, whereas the other figures are company claims or theoretical values. Although Allied Control uses the same dielectric fluid as Iceotope, it claims a lower cooling PUE; it still has to prove that claim with a case study or pilot project. On server density, Iceotope boasts the highest figure; however, its two major problems are its still-unstructured pricing and its limited ability to scale to larger data centers. In our interview, Iceotope stated that it targets small-scale data centers, so scalability is not its primary focus. Among the remaining competitors, GRC offers the highest density with competitive pricing. Another advantage of GRC is its high capacity and a plausible "Usable Floor-Space Increase" claim, whereas Clustered Systems' claim is overly optimistic per our estimate; because liquid-based cooling systems do not require ducting and fans inside the data center, they free up usable floor space.
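
To illustrate why that temperature headroom matters, the rough sketch below estimates how much of the year heat could be rejected to ambient air without chillers under the two coolant limits cited above; the warm-climate temperature profile and the 10 °C dry-cooler approach temperature are our own illustrative assumptions, not figures from the companies profiled.

```python
import random

# Hedged sketch: a higher maximum coolant temperature allows free cooling for
# more hours of the year, cutting cooling energy and hence cooling PUE.
# The ambient profile and 10 C approach temperature are illustrative assumptions.

random.seed(0)
ambient_c = [random.gauss(25, 10) for _ in range(8760)]  # hypothetical warm-climate hourly temps
APPROACH_C = 10.0  # assumed dry-cooler approach: coolant stays this far above ambient

def free_cooling_share(max_coolant_c: float) -> float:
    """Fraction of hours when heat can be rejected to ambient without chillers."""
    hours = sum(1 for t in ambient_c if t + APPROACH_C <= max_coolant_c)
    return hours / len(ambient_c)

for limit_c in (45, 53):  # the two maximum coolant temperatures cited above
    print(f"Max coolant {limit_c} C -> free cooling for {free_cooling_share(limit_c):.0%} of the year")
```

The remaining hours would need chiller assistance, which is where the extra 8 °C of headroom ultimately shows up in cooling energy and cooling PUE.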

According to one survey, existing data centers are equally divided by facility size ("Data Center Facilities Are Equally Divided by Facility Size" — client registration required); however, we are seeing the most growth in the mega-scale segment, for example Google's recent decision to build a massive 120 MW data center (client registration required). Developers and owner-operators weigh many factors in siting a new facility, such as climate, electricity supply security, and the availability of human capital. A prevailing trend is that large data centers are often built in cool climates, such as Google's new data center in the north of the Netherlands; facilities in especially hot climates, by contrast, cannot depend on free (or evaporative) cooling alone and need mechanical cooling. Singapore, for instance, is a high-interest area for data center developers. This is why liquid cooling holds strong potential: it can help ease the energy demand of data centers reaching into new areas with inhospitable climates that necessitate mechanical cooling. Clients are advised to monitor liquid-based cooling technologies, and in particular the progress of Green Revolution Cooling; the company offers a scalable product with high density and competitive pricing, and intends to expand to regions with high data center growth. Lux will continue to monitor developments in this market, with a keen focus on scalability.