I’ve just finished writing a report on data center cooling (which should be published later in the quarter), and one of the recommendations was that data center operators should set the temperature to at least 77F (25C), per the recommendations of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), as a way to reduce energy consumption in the data center. So I was interested to note that ASHRAE has just changed that recommendation to 80.6F (27C) measured at the inlet of the equipment in the rack (coverage here, here, and here).
But before you go and adjust the thermostat in your data center, there are a number of potential impacts that need to be understood:
- Fans in the rack equipment will have to work harder to keep internal temperatures in range, so they will be noisier. More importantly, harder-working fans move some of the power load from the HVAC equipment to the data center power distribution network. That’s fine if you have ample headroom in power delivery to the racks, but if you are operating at the edge of your power delivery capacity, you might start tripping breakers.
- The hot-aisle will get hotter. If you assume a temperature difference between the hot- and cold-aisles of 55F (roughly 31C), then the hot-aisle is going to be a toasty 135F (57C), not what you would call a pleasant working environment if you have to work at the back of an equipment rack (the sketch after this list works through the arithmetic).
- The recommendation is for the inlet temperature of the IT equipment, and that will vary depending on where the equipment sits in the rack. So wandering around the cold-aisle with a thermometer isn’t good enough; you need to place several temperature sensors in the racks to understand the temperature gradient from the bottom to the top of the rack.
- Increasing data center temperatures allows for a more humid environment, which may lead to water condensing in the computer room air conditioners, reducing their efficiency.
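To make the hot-aisle arithmetic above concrete, here is a minimal Python sketch that adds an assumed hot/cold-aisle delta to the new 80.6F (27C) inlet recommendation and flags rack inlet sensors that read above the recommendation. The sensor readings and rack positions are purely hypothetical placeholders for your own measurements.

```python
# Back-of-the-envelope check: hot-aisle temperature and rack inlet readings.
# All sensor values below are hypothetical; substitute your own measurements.

ASHRAE_INLET_MAX_F = 80.6   # new recommended maximum inlet temperature (27C)
HOT_COLD_DELTA_F = 55.0     # assumed hot/cold-aisle temperature difference

def f_to_c(temp_f: float) -> float:
    """Convert a Fahrenheit temperature to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

# Hot-aisle temperature if the cold aisle sits right at the recommendation.
hot_aisle_f = ASHRAE_INLET_MAX_F + HOT_COLD_DELTA_F
print(f"Hot aisle: {hot_aisle_f:.1f}F ({f_to_c(hot_aisle_f):.1f}C)")

# Hypothetical inlet readings at different heights in one rack (bottom to top).
inlet_readings_f = {"U4": 74.5, "U20": 78.2, "U38": 82.1}

for position, reading_f in inlet_readings_f.items():
    status = "OVER" if reading_f > ASHRAE_INLET_MAX_F else "ok"
    print(f"{position}: {reading_f:.1f}F ({f_to_c(reading_f):.1f}C) {status}")
```

Note how the reading at the top of the (hypothetical) rack exceeds the recommendation even though the bottom of the rack is comfortably below it; that gradient is exactly why a single thermometer reading in the cold-aisle isn’t enough.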
Perhaps the most important concern for IT will be the perception that hotter equipment fails more frequently than cooler equipment. On the surface, this is correct: if you let equipment overheat, it’s more likely to fail. But the ASHRAE recommendation isn’t intended to make the equipment run hotter. The increase in cold-aisle temperature will be counteracted by the fans in the equipment, so there should be no net increase in internal temperatures.
But the question you have to ask at the end of the day is: how much difference will this make to overall data center energy consumption? The answer depends on what your current operating temperature is. For example, if you are already at the old ASHRAE recommendation of ~77F (25C), then the answer is “not much”. But if you are operating at temperatures substantially below the new recommendation (which is surprisingly common), then you really need to take a hard look at how you operate your data center, because modern IT equipment is much more tolerant of high temperatures than we give it credit for. Consider the following table, which lists the operating temperature range for a sample of common IT equipment:
| Device | Temperature range |
| --- | --- |
| Dell PowerEdge R805 | 50F-95F (10C-35C) |
| Cisco Nexus 5000 | 32F-104F (0C-40C) |
| Sun SPARC Enterprise M9000 | 41F-89.6F (5C-32C) |
| NetApp FAS6000 | 50F-104F (10C-40C) |
Add to that the fact that mean time between failure (MTBF) numbers are calculated at the high end of the range (i.e., 95F or 35C for the PowerEdge R805), and it’s clear that 80.6F isn’t extreme as far as the equipment is concerned.
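As a quick illustration of that headroom, here is a small Python sketch that compares a measured inlet temperature against each device’s stated operating range from the table above and against the new ASHRAE recommendation. The operating ranges are copied from the table; the measured reading is hypothetical.

```python
# Compare a (hypothetical) measured inlet temperature against the vendor
# operating ranges from the table above and the new ASHRAE recommendation.

ASHRAE_INLET_MAX_F = 80.6

# Operating ranges in Fahrenheit, taken from the table above.
OPERATING_RANGES_F = {
    "Dell PowerEdge R805": (50.0, 95.0),
    "Cisco Nexus 5000": (32.0, 104.0),
    "Sun SPARC Enterprise M9000": (41.0, 89.6),
    "NetApp FAS6000": (50.0, 104.0),
}

measured_inlet_f = 80.6  # hypothetical reading at the rack inlet

for device, (low_f, high_f) in OPERATING_RANGES_F.items():
    within = low_f <= measured_inlet_f <= high_f
    headroom_f = high_f - measured_inlet_f
    print(f"{device}: within spec={within}, headroom to spec max={headroom_f:.1f}F")
```

Even with the inlet sitting right at the new 80.6F recommendation, every device in the sample still has several degrees (or more) of headroom before it reaches the top of its stated operating range.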
Posted by: Nik Simpson