After the relatively quick and inexpensive methods of reducing energy consumption have been exhausted, there are many other cost-saving measures to consider. In this section, we discuss a few of the most promising opportunities.
Calculate performance metrics. Two metrics that are widely used to characterize a data center’s power usage are power usage effectiveness (PUE) and data center infrastructure efficiency (DCiE). Both were developed by the Green Grid; detail on Green Grid’s method of calculation is available in the white paper The Green Grid Data Center Power Efficiency Metrics: PUE and DCiE. In general, DCiE is the ratio of IT equipment power to the total power of the data center, and PUE is its reciprocal (the ratio of total data center power to IT equipment power). Several other variations of PUE have been proposed to help compare data center performance during specific stages of operation, but none have yet been widely adopted.
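As a quick illustration of these formulas, consider a facility whose meters show total and IT power draws; the numbers below are hypothetical examples, not figures from the Green Grid white paper:

    # Minimal sketch of the PUE and DCiE calculations described above.
    # The power readings are hypothetical examples, not measured values.
    total_facility_power_kw = 1500.0  # all power entering the data center (IT, cooling, lighting, losses)
    it_equipment_power_kw = 1000.0    # power delivered to servers, storage, and network gear

    pue = total_facility_power_kw / it_equipment_power_kw   # 1.5
    dcie = it_equipment_power_kw / total_facility_power_kw  # ~0.67, i.e., 67 percent

    print(f"PUE:  {pue:.2f}")
    print(f"DCiE: {dcie:.0%}")

A PUE of 1.5 means that for every watt delivered to IT equipment, another half watt goes to cooling, power distribution, and other overhead.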
Install infrastructure-management software. Data center infrastructure-management (DCIM) software tools can simplify the benchmarking and commissioning process, and they can continuously monitor system performance via network sensors. Their real-time benchmarking can notify users of any system failures and validate efficiency improvements. Additionally, the data the software collects will help improve the effectiveness of other measures, including airflow management.
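The details vary from one DCIM product to another, but the underlying monitoring check is conceptually simple. The sketch below is a generic illustration only; the thresholds, sensor names, and readings are hypothetical and do not represent any particular DCIM tool:

    # Generic illustration of a DCIM-style monitoring check (hypothetical values).
    # Real DCIM tools gather these readings from networked power and temperature sensors.
    ALERT_PUE = 2.0          # flag degraded efficiency above this point (example threshold)
    ALERT_INLET_TEMP_C = 27  # example upper limit for server inlet temperature

    def check_readings(total_kw, it_kw, inlet_temps_c):
        alerts = []
        pue = total_kw / it_kw
        if pue > ALERT_PUE:
            alerts.append(f"PUE {pue:.2f} exceeds threshold {ALERT_PUE}")
        for rack, temp in inlet_temps_c.items():
            if temp > ALERT_INLET_TEMP_C:
                alerts.append(f"Rack {rack} inlet at {temp} C exceeds {ALERT_INLET_TEMP_C} C")
        return alerts

    print(check_readings(1450.0, 900.0, {"A1": 24.0, "B3": 29.5}))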
Efficient IT Equipment
Buy energy-efficient servers. Standards for server efficiency are still in their infancy, so it can be difficult to know before you buy which servers will use less energy. Many blade servers (thin, minimalist hardware) use low-power processors to save on energy and hardware requirements. Microsoft engineers report that low-power processors offer the best computing performance per dollar per watt. The Standard Performance Evaluation Corporation (SPEC) website contains a list of Published SPEC Benchmark Results covering energy use across a wide range of server models. Also, look to professional consortiums or trade organizations for recommendations and comparisons of server models according to energy use. For example, the Green Grid’s Library and Tools web page provides content and tools to assist in benchmarking, evaluating, and comparing equipment and facility performance.
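One way to act on the Microsoft engineers’ observation is to normalize a published benchmark score by both purchase price and power draw when comparing candidate servers. The numbers below are made-up placeholders; substitute scores from the SPEC results page and your own vendor quotes:

    # Rough comparison of servers by computing performance per dollar per watt.
    # All figures are illustrative placeholders, not published SPEC results.
    servers = [
        {"model": "Server A (low-power CPU)", "score": 180.0, "price_usd": 4000, "watts": 150},
        {"model": "Server B (standard CPU)",  "score": 260.0, "price_usd": 7500, "watts": 350},
    ]

    for s in servers:
        s["perf_per_dollar_per_watt"] = s["score"] / (s["price_usd"] * s["watts"])
        print(f'{s["model"]}: {s["perf_per_dollar_per_watt"]:.2e}')

    best = max(servers, key=lambda s: s["perf_per_dollar_per_watt"])
    print("Best value:", best["model"])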
Spin fewer disks. Implementing a massive array of idle disks (MAID) system can save up to 85 percent of storage power and cooling costs, according to manufacturers’ claims. Typically, data is stored on hard disk drives (HDDs) that must remain spinning (and therefore constantly consume energy) for their information to be retrieved. MAID systems, though, catalog information according to how often it is retrieved and place seldom-accessed data on disks that are spun down or left idle until the data is needed. One disadvantage to this approach is a drop in system speed, because the HDDs must spin up again before their data is accessible. The lower operating costs and reduced energy consumption of MAID systems come at the expense of higher latency and limited redundancy: If a MAID system is used for backup data storage, the data it contains may not be instantaneously available when another server fails, because the disk will first have to come up to speed before it can be used.
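The cataloging step at the heart of a MAID system can be thought of as a simple placement rule: data that has not been touched recently migrates to disks that are allowed to spin down. The sketch below is a conceptual illustration only; the 30-day threshold and the function are hypothetical, not any vendor’s actual controller logic:

    # Conceptual sketch of MAID-style data placement (not a real controller's logic).
    # Seldom-accessed data is assigned to disks that may spin down until it is needed.
    from datetime import datetime, timedelta

    SPIN_DOWN_AFTER = timedelta(days=30)  # hypothetical access-frequency threshold

    def choose_tier(last_access, now):
        """Return 'active' for always-spinning disks, 'idle' for disks allowed to spin down."""
        return "idle" if now - last_access > SPIN_DOWN_AFTER else "active"

    now = datetime(2015, 6, 1)
    print(choose_tier(datetime(2015, 5, 30), now))  # 'active': accessed two days ago
    print(choose_tier(datetime(2015, 1, 10), now))  # 'idle': untouched for months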
Replace spinning disks with solid-state drives. Although experts disagree about the exact amount of energy savings associated with flash-based solid-state drives (SSDs), there’s general consensus that SSD energy consumption is typically less than that of HDDs. SSDs also offer a faster read time, delivering energy savings for storage systems without the latency and redundancy limitations of MAID storage.
Separate hot and cold air streams. Most data centers suffer from poor airflow management, which has two detrimental effects. First, the mixed air recirculates around and above servers, warming as it rises, making the servers that are higher up on the racks less reliable. Second, data center operators must set supply-air temperatures lower and airflows higher to compensate for air mixing, thus wasting energy.
Setting up servers in alternating hot and cold aisles is one of the most effective ways to manage airflow (Figure 4). This allows delivery of cold air to the fronts of the servers, while waste heat is concentrated and collected behind the racks. As part of this configuration, operators can close off gaps within and between server racks to help minimize the flow and mixing of air between hot and cold aisles. LBNL researchers found that, with hot-cold isolation, air-conditioner fans could maintain proper temperature while operating at a lower speed, resulting in 75 percent energy savings for the fans alone (the calculation following Figure 4 illustrates why).
Figure 4: How to set up a hot aisle–cold aisle configuration
The hot aisle–cold aisle concept confines hot and cold air to separate aisles. Limiting the mixing of hot and cold air means that less energy is required to provide cooling to the servers.
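The size of the fan savings that LBNL observed is consistent with the fan affinity laws, under which fan power falls roughly with the cube of fan speed. The back-of-the-envelope calculation below is illustrative only and is not LBNL’s measurement methodology:

    # Why modest fan-speed reductions yield large energy savings: fan power
    # varies roughly with the cube of fan speed (fan affinity laws).
    reduced_speed_fraction = 0.63  # fans slowed to about 63 percent of full speed

    relative_power = reduced_speed_fraction ** 3
    savings = 1 - relative_power

    print(f"Fan power at reduced speed: {relative_power:.0%} of baseline")
    print(f"Fan energy savings: {savings:.0%}")  # roughly 75 percent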
Reduce bypass airflow losses. In data centers that have poor airflow-management strategies or significant air leakage, so-called bypass airflow can occur when cold conditioned air is cycled back to the computer room air conditioner before it’s had a chance to cool any equipment, resulting in wasted energy. In fact, an LBNL study found that up to 60 percent of the total cold air supply can be lost via bypass airflow. The main causes of bypass-airflow losses are unsealed cable cutout openings and poorly placed perforated tiles in hot aisles. This type of problem can be easily eliminated by identifying bypasses during a study of the cooling system’s airflow patterns. A white paper by Schneider Electric, How to Fix Hot Spots in the Data Center (PDF), provides guidance and links to additional resources related to airflow analysis and troubleshooting for cooling systems.
Bring in more fresh air. When the outdoor temperature and humidity are mild, economizers can save energy by bringing in cool outside air rather than using refrigeration equipment to cool the building’s return air. Economizers have two benefits: They have lower capital costs than many conventional systems, and they reduce energy consumption by making use of “free” cooling when ambient outside temperatures are sufficiently low. In northern climates, this may be the case the majority of the year. It’s important to put economizers on a recurring maintenance schedule to ensure that they remain fully operational. Pay attention, for example, to the dampers: If they are stuck open, they can vastly increase HVAC energy consumption and the problem can go unnoticed for a long time.
Use evaporative cooling. When conditions are right, water can be evaporatively cooled enough in a cooling tower that the building’s compressor and refrigerant loops can be bypassed. In northern climates, the opportunity for free cooling with a tower/coil approach (sometimes called a water-side economizer) can exceed 75 percent of the total annual operating hours. In southern climates, free cooling may only be available during 20 percent of operating hours. While this type of economizer is operating, the free cooling that it can provide can reduce the energy consumption of a chilled-water plant by up to 75 percent.
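A rough sense of what those figures mean over a year can be had by combining the free-cooling hours with the 75 percent plant-energy reduction. The plant load and electricity rate below are hypothetical; only the hour fractions and the 75 percent figure come from the discussion above:

    # Rough annual-savings illustration for a water-side economizer.
    # Plant load and electric rate are hypothetical; climate fractions are from the text above.
    chiller_plant_kw = 200       # hypothetical average chilled-water plant load
    hours_per_year = 8760
    electric_rate = 0.10         # $/kWh, hypothetical

    for climate, free_cooling_fraction in [("northern", 0.75), ("southern", 0.20)]:
        free_hours = free_cooling_fraction * hours_per_year
        kwh_saved = chiller_plant_kw * 0.75 * free_hours  # plant energy cut ~75% while economizing
        print(f"{climate}: ~{kwh_saved:,.0f} kWh per year, ~${kwh_saved * electric_rate:,.0f} per year")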
Upgrade your chiller. Many general best practices for chilled-water systems (such as those using centrifugal and screw chillers) also apply to cooling systems for data centers, including using variable-speed drives on chillers and pumps and optimizing chilled-water temperatures. If your facility isn’t already taking advantage of these techniques, consult with an HVAC expert—you may find that highly cost-effective savings are available.
Install ultrasonic humidification. Ultrasonic humidifiers use less energy than other humidification technologies because they don’t boil the water or lose hot water when flushing the reservoir. Additionally, the cool mist they emit absorbs energy from the supply air and causes a secondary cooling effect. This is particularly effective in a data center application with concurrent humidification and cooling requirements. One US Department of Energy case study demonstrated that when a data center’s humidification system was retrofitted with an ultrasonic humidifier, it reduced humidification energy use by 96 percent. However, ultrasonic humidification systems in data centers generally require very pure water, so it’s important to factor in the cost and energy consumption of a high-quality water filtration system, such as one that uses reverse osmosis. If a water filter isn’t used, a thin layer of minerals can build up on server components and short out the electronics.
Cool server cabinets directly. Although some facility managers have a reflexive aversion to having water anywhere near their computer rooms, some cooling systems bring the water very close—all the way to the base of the server cabinets or racks. This practice allows the cabinets to be cooled much more directly and efficiently than they would be if the entire room were cooled (Figure 5). Many vendors offer direct-cooled server racks, and several industry observers have speculated that the future of heat management in data centers will be dominated by direct-cooling techniques.
Figure 5: Direct-cooled server racks increase cooling efficiency
Rather than cooling the servers indirectly by cooling the entire room, this direct design circulates cool liquid below the server cabinet, allowing heat to be rejected and the servers to be cooled much more efficiently.
Give the servers a bath. Another direct-cooling approach is liquid-immersion server cooling. It’s a relatively new technique, but one that’s quickly gaining interest thanks to early demonstrations that have yielded exceptional energy savings and system performance.
One approach being promoted by Green Revolution Cooling (GRC) is to submerge high-performance blade servers in a vat of inexpensive nonconductive mineral oil that is held at a specific temperature and slowly circulated around the servers. This technique saves energy for two reasons: First, the mineral oil’s heat capacity is more than 1,000 times greater than that of air, resulting in improved heat transfer. Second, the oil pulls heat directly off the electrical components that need to be cooled, instead of just removing heat from the air around the server. These factors greatly improve cooling efficiency and allow the working fluid to operate at a warmer temperature than would otherwise be possible with air cooling. A 2014 Submersion Cooling Evaluation (PDF) from Pacific Gas and Electric Co.’s Emerging Technologies Program found that GRC’s system was able to yield more than 80 percent savings in energy consumption and peak demand for cooling.
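To see where the heat-capacity advantage comes from, compare the two fluids on a volumetric basis. The property values below are typical handbook figures, not GRC’s specifications:

    # Volumetric heat capacity: how much heat a given volume of fluid absorbs
    # per degree of temperature rise. Values are typical handbook figures.
    oil_volumetric_kj_per_m3_k = 850 * 1.9  # mineral oil: ~850 kg/m^3 x ~1.9 kJ/(kg*K) ~= 1,600 kJ/(m^3*K)
    air_volumetric_kj_per_m3_k = 1.2 * 1.0  # air: ~1.2 kg/m^3 x ~1.0 kJ/(kg*K) ~= 1.2 kJ/(m^3*K)

    ratio = oil_volumetric_kj_per_m3_k / air_volumetric_kj_per_m3_k
    print(f"Mineral oil absorbs roughly {ratio:,.0f} times more heat per unit volume than air")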
In addition to energy savings, GRC’s approach may also offer a number of non-energy benefits. For example, the enhanced heat transfer means that the computing power density of the servers can be increased beyond densities that are possible with current air-based cooling strategies. And existing servers can be run faster without any negative impact. Both of these benefits might appeal to data center operators. In new construction, there are even more potential benefits from using a liquid-submersion cooling system because a wide range of cooling-related infrastructure and design elements (such as raised floors) can be eliminated.
Let servers heat the building. The National Renewable Energy Laboratory (NREL) uses a warm-water liquid heat-recovery approach to cool its Peregrine supercomputer and simultaneously heat the building. Sealed dry-disconnect heat pipes circulate water past the cores, then capture and reuse the recovered waste heat as the primary heat source for the data center, offices, and laboratory space. The recovered waste heat is also used to condition ventilation make-up air. NREL estimates that its liquid-based cooling technique saves $1 million annually in avoided energy costs.