Data centers suck power like nobody's business, so a 30% bump in energy efficiency is automatically intriguing from a cost-savings standpoint. But the value of 48V racks goes well beyond the virtue of going green: those energy savings have profound ramifications for how data centers can accommodate the skyrocketing demands of cloud computing.
Commonly in data centers, power hits the rack at 48V, gets stepped down to 12V, and then undergoes another conversion to 1V (the digital electronics in the servers stacked in those racks, like most digital electronics, operate at 1V). The proposal is to build a rack that runs power at 48V all the way to the motherboard, where it is stepped down to 1V in a single conversion.
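Why eliminating a step matters: conversion stages compound, so the overall efficiency of a power chain is the product of each stage's efficiency. A minimal sketch of that arithmetic (the per-stage values below are illustrative assumptions, not figures from the article):

```python
import math

# Cascaded conversion: overall efficiency is the product of the stages.
# Per-stage values are illustrative assumptions for a 48V->12V->1V chain.
stages_conventional = [0.96, 0.93, 0.87]   # three-step silicon chain (assumed)
stages_single = [0.905]                    # one-step 48V->1V stage (assumed)

overall_conventional = math.prod(stages_conventional)  # ~0.777
overall_single = math.prod(stages_single)              # 0.905

print(f"multi-stage: {overall_conventional:.1%}")
print(f"single-stage: {overall_single:.1%}")
```

Even when each individual stage is fairly efficient, multiplying three of them together drags the total down; collapsing the chain to one stage removes that compounding.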
Dropping the voltage in the rack in a single step is a relatively new capability, predicated on advances made in the last few years in manufacturing gallium nitride (GaN) semiconductors.
"The reason we stop at 12V is because of the limitations of silicon," specifically in power MOSFETs, explains Alex Lidow, CEO of Efficient Power Conversion (EPC), a company that specializes in GaN circuitry. "The speed of your device -- how fast it can switch -- determines how far it can reach in terms of input voltage to output voltage. Because of the much higher switching speeds of gallium nitride you can efficiently go from 48V all the way down to 1V all in one stage."
EPC supplies Texas Instruments Inc. (NYSE: TXN) with GaN transistors that TI incorporates in its LMG5200 modules designed for 48V to 1V conversion. TI claims its modules operate at 90% to 91% efficiency, Lidow notes.
In comparison, the efficiency of multi-stage voltage conversion with silicon MOSFETs maxes out somewhere in the 77% to 78% range. When you go from silicon-based conversion to GaN-based conversion, "you cut your power losses in half, and you improve your server power efficiency by 10% with just that one act," Lidow says.
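Lidow's "cut your power losses in half" claim checks out with simple arithmetic on the quoted efficiency ranges. A quick sketch (using the midpoints of the figures above):

```python
# Back-of-the-envelope check of the quoted efficiency figures.
silicon_eff = 0.775   # midpoint of the 77-78% multi-stage silicon range
gan_eff = 0.905       # midpoint of TI's quoted 90-91% for single-stage GaN

silicon_loss = 1 - silicon_eff   # ~22.5% of input power lost as heat
gan_loss = 1 - gan_eff           # ~9.5% lost

# Losses drop to roughly 42% of the silicon figure -- more than halved.
print(f"loss ratio: {gan_loss / silicon_loss:.2f}")
# Efficiency improves by ~13 percentage points, in the ballpark of
# Lidow's "improve your server power efficiency by 10%".
print(f"gain: {(gan_eff - silicon_eff) * 100:.0f} points")
```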
Both Google and Facebook have been building 48V racks for a couple of years. Google says it has saved millions of dollars and millions of kilowatt hours.
Urs Hölzle, senior vice president of technical infrastructure at Google, spoke at the Open Compute Summit earlier this year, and testified that the reduction of conversion steps has resulted in a 30% improvement in energy efficiency. The figure includes not just the savings from minimizing power losses, but other downstream energy costs. For example, with that much less heat being dissipated, cooling and ventilation requirements -- and costs -- can be reduced.
To get an idea of how important the savings are to Google, consider that it finally joined the Open Compute Project (OCP) earlier this year. The current reference design for OCP racks specifies a 12V supply, but even Google benefits from economies of scale, and its initial donation to OCP was a specification for an open 48V rack. (See Google Joins Facebook-Backed Open Compute Project)
To get an idea of how important the savings are to Facebook, consider that it has collaborated with Google on the new version of the OCP Open Rack spec, Open Rack 2.0, based on a 48V design. They recently delivered it to OCP.
Beyond the direct energy savings, minimizing power lost as heat is becoming more crucial as data center clients come to rely on ever more compute-intensive applications.
Lidow explains: "There's a big change in the dynamics of servers as we go to cloud computing, deep learning, and artificial intelligence. Those are creating greater inside-the-server-farm activity; the in-and-out-of-the-server-farm activity is growing at a modest growth pace, but not as extraordinarily fast. You are getting inputs into the data center that are causing the data center to hum with activity, and then it sends something out. The ins and outs aren't growing as much, but inside it is. Server farms have to be more and more dense, and have to be extremely colocated within the data center. So power density becomes the true physical limit -- busing power to them and getting the heat out."
Compute efficiency is dependent on the distance between compute elements, including the proximity of one rack of servers to the next, but excessive heat compromises electronics. How close a data center operator can pack in racks of servers is limited, therefore, by environmental conditions that include heat dissipation of the equipment involved and the ability to bleed that heat out of the center.
Moving to 48V racks helps. Google hasn't claimed a cause-and-effect relationship, but it is noteworthy that the company says it runs shallower racks and shallower aisles than the rest of the industry. The shallower rack is part of the Open Rack 2.0 spec.
There is yet another beneficial consequence of moving to a 48V rack.
"When you're running current around at low voltage, you need very thick copper, and it takes up a lot of space," Lidow explains. "If you run it around at 48V, it basically has one-sixteenth the amount of losses, so the size of the wire is therefore one-sixteenth the size. You have these servers dominated by power, and a lot of that copper just falls away when you go to 48V."
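The one-sixteenth figure follows directly from Ohm's law: delivering the same power at four times the voltage requires a quarter of the current, and resistive loss scales with the square of the current. A minimal sketch (the load power and bus resistance values are illustrative, not from the article):

```python
# Resistive (I^2 * R) losses in a power bus delivering the same load power
# at 12V vs 48V. Power and resistance values are illustrative assumptions.
power = 1000.0        # watts delivered to the load
resistance = 0.001    # ohms of bus resistance (same conductor in both cases)

def bus_loss(voltage):
    current = power / voltage          # I = P / V
    return current ** 2 * resistance   # P_loss = I^2 * R

# Quadrupling the voltage (12V -> 48V) cuts current 4x and losses ~16x,
# which is why the copper can shrink so dramatically at 48V.
print(bus_loss(12.0) / bus_loss(48.0))  # ~16
```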
— Brian Santo, Senior Editor, Components, T&M, Light Reading