ASHRAE technical committee recommendations have been out for some time, with the top of the recommended envelope still at 80.6 degrees F (27 degrees C). Equipment, whether modern or made 20 years ago, can operate at this temperature continuously, yielding considerable savings through reduced cooling. Some operators keep a buffer as part of their uptime protection strategy; being more risk averse, they feel that running cooler gives them a margin of safety against overheating and equipment shutdown or failure.

Although a lot depends on the facility and its data center rooms, the rate of air changes and heat exchange often provides little real safety margin for the equipment. If power is cut to the cooling equipment and/or fans, the room temperature can spiral upward quickly, far exceeding normal operating conditions within seconds and causing shutdowns within minutes.

Studying the data center cooling scheme and its management can help immensely. Adding redundancy to the cooling equipment can be a much bigger saver than attempting to precool a data center. Once the cooling system's redundancy can react during outages, from partial equipment failures to full power loss, the data center can raise temperatures and ease its energy budget. Monitoring the IT and cooling equipment with sensors creates a better picture of how the data center is performing and allows the facility to operate at those higher temperatures in a predictable fashion, eliminating the mythical need for a temperature buffer.
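The speed of the temperature rise during a cooling outage can be illustrated with a back-of-envelope calculation. The sketch below uses purely hypothetical figures (a 500 kW IT load in a 1,000 m³ room) and treats only the room air as thermal mass, ignoring equipment and building fabric, which slow the real rise; it is an illustration of the worst case, not a model of any particular facility.

```python
# Back-of-envelope estimate of air-temperature rise when cooling stops.
# The load and room volume below are illustrative assumptions, not measurements.

AIR_DENSITY = 1.2         # kg/m^3, at roughly room conditions
AIR_SPECIFIC_HEAT = 1005  # J/(kg*K), specific heat of air at constant pressure

def temp_rise_rate(it_load_w: float, room_volume_m3: float) -> float:
    """Kelvin per second of air-temperature rise with zero cooling,
    assuming all IT heat goes into the room air alone."""
    air_mass_kg = room_volume_m3 * AIR_DENSITY
    return it_load_w / (air_mass_kg * AIR_SPECIFIC_HEAT)

# Hypothetical example: 500 kW load, 1,000 m^3 room
rate = temp_rise_rate(500_000, 1_000)
print(f"{rate:.2f} K/s, i.e. about {rate * 60:.0f} K per minute")
```

Even with real-world thermal mass damping the rise, numbers of this magnitude show why a few degrees of setpoint buffer buys only seconds, while cooling redundancy buys resilience.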