The cooling infrastructure isn't the only place where Google has a sharp focus on efficiency. The servers in its data centers are custom-built and stripped down to only what they need, so unnecessary items such as peripheral connectors and video cards are not included. The fans run only fast enough to keep the machines from overheating, rather than at full speed around the clock. And, in a departure from industry norms, the servers carry backup batteries directly on the racks, which not only cuts power demand but also eliminates two AC/DC conversion stages as power comes into the servers, reducing power loss.
That means the servers are not supported by the usual UPS structure. But, as Kava points out, a UPS is not meant to ride out long-term outages, so anything beyond what the on-board battery can cover should fall to a generator anyway.
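The power saved by removing conversion stages compounds multiplicatively: each stage wastes a slice of whatever power reaches it. A minimal sketch, using hypothetical 95-percent-efficient stages (the article does not give the actual stage efficiencies):

```python
def delivered_fraction(stage_efficiencies):
    """Fraction of input power that survives a chain of power-conversion stages.

    Losses compound: the output of each stage feeds the next, so the
    delivered fraction is the product of the individual efficiencies.
    """
    frac = 1.0
    for eff in stage_efficiencies:
        frac *= eff
    return frac

# Hypothetical 95%-efficient stages: a path with two extra AC/DC conversions
# versus a direct path with only the server power supply.
with_extra_stages = delivered_fraction([0.95, 0.95, 0.95])
direct_path = delivered_fraction([0.95])
print(round(with_extra_stages, 4))  # 0.8574
print(round(direct_path, 4))        # 0.95
```

Under these assumed numbers, cutting the two extra conversions recovers roughly nine percentage points of delivered power per server, which scales across every rack in the facility.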
"Even if you've got 10 minutes of battery, you're probably not going to fix the problem within 10 minutes," he says.
All of Google's efficiency efforts mean that its data centers run at a power usage effectiveness (PUE) of 1.12. In other words, for every watt Google uses to power IT equipment, it uses an additional 0.12 watts to power the facility housing that equipment. According to a 2012 study from the Uptime Institute, the industry average PUE is 1.88, which works out to 880 additional watts of overhead power for every kilowatt used to power IT equipment. (To account for the on-board batteries taking the place of a UPS, Google counts only the servers, storage, and networking equipment as IT power.)
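The PUE arithmetic above is straightforward to express directly. A short sketch, using the two PUE figures cited in the article:

```python
def overhead_watts(pue: float, it_watts: float) -> float:
    """Overhead (non-IT) power implied by a PUE figure.

    PUE = total facility power / IT power,
    so overhead = IT power * (PUE - 1).
    """
    return it_watts * (pue - 1.0)

# Google's reported PUE of 1.12 vs. the 2012 Uptime Institute industry
# average of 1.88, per kilowatt of IT load:
print(round(overhead_watts(1.12, 1000)))  # 120
print(round(overhead_watts(1.88, 1000)))  # 880
```

At the industry-average PUE, the overhead is more than seven times Google's per kilowatt of IT load.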
This focus on efficiency becomes even more pronounced when you consider that, at Google's scale, multiple megawatts are being consumed, which means that efficiency improvements, however small, add up quickly.
"If you make a very, very minor efficiency improvement across a data center that has 1 MW of critical load, it might take a very long time for that efficiency change to pay for itself," says Kava. "If you have that same efficiency change over a data center that maybe has 10 MW of critical load, well, then, the payback can be much faster."
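Kava's payback argument is simple arithmetic: savings scale linearly with critical load while the retrofit cost stays fixed. A minimal sketch with hypothetical figures (the capital cost, PUE reduction, and electricity price below are illustrative assumptions, not from the article):

```python
HOURS_PER_YEAR = 8760

def payback_years(capital_cost: float, pue_reduction: float,
                  critical_load_mw: float, price_per_kwh: float = 0.07) -> float:
    """Simple payback period for an efficiency retrofit.

    Power saved scales with critical load: saved kW = load in MW * 1000 * PUE
    reduction, valued at the electricity price over a full year of operation.
    """
    saved_kw = critical_load_mw * 1000 * pue_reduction
    annual_savings = saved_kw * HOURS_PER_YEAR * price_per_kwh
    return capital_cost / annual_savings

# The same hypothetical $100,000 improvement shaving 0.01 off PUE:
print(round(payback_years(100_000, 0.01, 1), 1))   # 16.3 years at 1 MW
print(round(payback_years(100_000, 0.01, 10), 1))  # 1.6 years at 10 MW
```

Because savings are linear in load, moving from 1 MW to 10 MW of critical load shortens the payback by exactly a factor of ten, which is the point Kava is making.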
Custom-built Servers, Onboard Batteries Help Google Meet Its IT Energy Efficiency Goals