By Jonathan Marshall
When robber Willie Sutton wanted some quick cash, he went where the money was—banks. By the same principle, though more benignly, when utilities want to save energy, they head to where the demand is — including big data centers.
Data centers cram energy-hogging computers into tight cages like hens in an Arkansas chicken factory. By one estimate, data centers across the country consumed more than 85 billion kilowatt-hours of electricity in 2010. Since about 20 percent of them are located in the Pacific region—many in Silicon Valley and other parts of Northern California—PG&E has long promoted the latest energy-efficient practices in the industry as part of its energy-efficiency mission.
Usually overlooked, however, is the contribution of data centers to peak energy demand. Utilities and state policy makers care about peak demand because it’s expensive and sometimes dirty to provide. Serving peaks requires reserving mostly idle generation capacity for those special times of day—or even special times of the year—when customers all want it at the same time. The alternative would be to accept “brownouts” when the system hits maximum capacity.
To avoid spending more and more on peak generation capacity, utilities like PG&E promote “demand response” programs, offering financial incentives to motivate customers to reduce peak demand, or shift it to non-peak times. Tens of thousands of PG&E’s residential customers take part, along with many larger commercial and industrial customers.
Can data centers, which consume about 500 megawatts of peak power in PG&E’s service area, reduce their loads when called upon? Or, with their 24/7 operating requirements, are they too inflexible to respond? That’s a critical question several researchers from Lawrence Berkeley National Laboratory’s Demand Response Research Center set out to start answering last year, with help and funding from PG&E’s Demand Response Emerging Technologies program, as well as the California Energy Commission and San Diego Gas & Electric.
The researchers conducted field tests at four data centers: Lawrence Berkeley National Laboratory, NetApp, Inc., the San Diego Supercomputer Center, and the University of California, Berkeley. Using sophisticated workload management software, environmental monitoring sensors and other tools primarily used for energy efficiency, they found promising opportunities to reduce loads from servers and storage devices (by deferring non-critical jobs like data backups) and from site infrastructure (by shifting cooling out of peak demand periods), with minimal or no impact on operations. In some cases, local energy demand was cut by shifting computing jobs to out-of-region data centers.
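The job-deferral idea can be illustrated with a minimal scheduling sketch. This is a hypothetical example, not the researchers' actual software: it assumes a peak window of 2 p.m. to 7 p.m. and represents each job as a (name, critical) pair, deferring non-critical work such as backups until the window ends.

```python
from datetime import time

# Hypothetical peak window (2 p.m. to 7 p.m.); in practice the
# utility's demand response program signals the actual event window.
PEAK_START, PEAK_END = time(14, 0), time(19, 0)

def in_peak(now):
    """Return True if the given time falls inside the peak window."""
    return PEAK_START <= now < PEAK_END

def schedule(jobs, now):
    """Split jobs into run-now and deferred lists.

    Each job is a (name, critical) tuple. During a peak event,
    non-critical jobs (e.g. data backups) are held until the
    window ends; outside the window, everything runs.
    """
    if not in_peak(now):
        return list(jobs), []
    run_now = [j for j in jobs if j[1]]       # critical work still runs
    deferred = [j for j in jobs if not j[1]]  # backups etc. wait
    return run_now, deferred
```

For example, calling `schedule([("serve-web", True), ("nightly-backup", False)], time(15, 0))` during a peak event keeps the web-serving job running while holding the backup until off-peak hours.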
“We have seen 20 percent to 30 percent reductions in peak energy use with data centers,” said Girish Ghatikar, one of the lab’s researchers. “We think 7 percent to 10 percent might be a realistic industry average. In an area like PG&E’s territory with many data centers, the impact could be large.”
The first phase of tests on California’s data centers used manual controls to manipulate energy demand. Now that the potential has been demonstrated, “vendors are asking me how they can automate the process,” Ghatikar added. The researchers’ goal is to keep technology costs low by repurposing equipment that data centers already use to maximize energy efficiency. “If technology vendors can make it work for demand response, that would be of great value both for reducing data center operating costs, and improving electric utility and grid reliability,” he said.
Email Jonathan Marshall at firstname.lastname@example.org.