Data Center

Facebook Releases Data Center Tech

SAN JOSE, Calif. -- OCP U.S. Summit 2015 -- Network operators looking to buy or build technology Facebook uses to run its massive data centers will get their wish.

Facebook designs its own switches, racks and networking software to meet the hyperscale needs of its global service. Now, Facebook says it's opening up key networking and data center technologies, including its Wedge switch, board management software, and a server it's calling Yosemite.

Why is Facebook making all this technology public, rather than keeping it as proprietary jewels? In a word, collaboration. Facebook wants "to work with not just the best minds under one roof, but the best minds in the world -- and that's where the Open Compute Project [OCP] comes in," the company said in a post on its blog.

The OCP, an independent nonprofit launched by Facebook in 2011, works in conjunction with thousands of participants and 200 companies to develop open source data center hardware designs.

Facebook uses OCP designs to power its services. Its new Altoona, Iowa, data center, which went online in November, is 100% OCP gear, Facebook engineering VP Jay Parikh said in a presentation here.

Facebook's OCP bet has big stakes. The company's data centers support 1.39 billion users on Facebook itself, with 500 million using Facebook Messages and 300 million users on Instagram.

With that kind of workload, efficiency was essential. "We had this approach of working on efficiency from the get-go," Parikh said.

Facebook has saved over $2 billion over the past three years using OCP technology. The power saved equals the annual consumption of 80,000 homes, with reduced carbon emissions equivalent to taking 95,000 cars off the road each year, Parikh said.

Disaggregation has been key to improved efficiency. "Break down building blocks to small components and use the components to build rapidly what you need in the business," Parikh said.

For example, the Facebook News Feed -- the rapidly updating stream of friends' activity that every Facebook user sees when they log in -- is complicated and resource intensive.

Multifeed is a distributed backend system involved in News Feed. When a person visits their Facebook feed, "Multifeed looks up the user's friends, finds all their recent actions, and decides what should be rendered," the company said in a blog post.

Previously, Facebook put Multifeed components on a single server. But as the algorithms deciding what content to put in the feed grew more sophisticated and the content got richer, keeping those components together on one server stopped working. Now, by splitting the components, Facebook can optimize threading and memory management. As the Facebook product evolves, it can adjust the ratio of server types without wasting computing resources, Parikh said.
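The aggregation pattern Facebook describes -- look up a user's friends, gather their recent actions, and rank what to render -- can be sketched in a few lines. This is a hypothetical illustration of the pattern only, not Facebook's code; all names and data below are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Action:
    friend: str
    kind: str
    timestamp: int
    score: float  # relevance score assigned by a ranking model

# Toy in-memory stores standing in for the distributed backend servers.
FRIENDS = {"alice": ["bob", "carol"]}
RECENT_ACTIONS = {
    "bob": [Action("bob", "photo", 100, 0.9)],
    "carol": [Action("carol", "status", 110, 0.6),
              Action("carol", "link", 90, 0.8)],
}

def build_feed(user: str, limit: int = 10) -> list:
    """Fan out to each friend's recent actions, then rank the merged set."""
    candidates = []
    for friend in FRIENDS.get(user, []):
        candidates.extend(RECENT_ACTIONS.get(friend, []))
    # Rank by relevance score; break ties with recency.
    candidates.sort(key=lambda a: (a.score, a.timestamp), reverse=True)
    return candidates[:limit]

feed = build_feed("alice")
print([a.kind for a in feed])  # → ['photo', 'link', 'status']
```

In a real deployment the lookup, gather, and rank stages would run on separate machines, which is exactly what makes the disaggregation Parikh describes possible: each stage can be scaled and tuned independently.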

And now Facebook is opening key technologies to the public.

Specifically, Facebook wants to contribute the specs of its top-of-rack Wedge switch to the OCP. The OCP will need to decide whether to accept the specs. (See Facebook in Production Testing of Open 'Wedge' Switch.)

Network operators that don't want to build Wedge switches for themselves can buy the kit off-the-shelf. Accton Technology Corp. plans to sell Wedge switches in the first half of the year. And Cumulus Networks and Big Switch Networks will support the hardware with their SDN software.

Facebook also released OpenBMC, open low-level board management software to speed up feature development for BMC chips. Wedge will be the first hardware supporting OpenBMC, followed by Facebook's 6-pack switch.

And Facebook introduced the FBOSS Agent, opening the central library of its FBOSS Wedge software. The agent is built on the Broadcom Corp. (Nasdaq: BRCM) OpenNSL Library to program the Broadcom ASIC inside Wedge.


Additionally, Facebook introduced Yosemite, a system-on-a-chip compute server to dramatically increase speed and serve Facebook traffic more efficiently. It "supports four independent servers at a performance-per-watt superior to traditional data center servers for heavily parallelizable workloads," Facebook said.

Facebook is working with Intel Corp. (Nasdaq: INTC) and Mellanox Technologies Ltd. (Nasdaq: MLNX) on Yosemite. Intel is providing the new Xeon D-1500 processor, while Mellanox provides NICs.

Facebook's homebrew hardware and software drive staggering amounts of traffic, and will help service providers meet their customers' demands. It'll be interesting to see what other hardware and software Facebook develops and makes available, and how much competitive pressure that will put on networking and IT vendors.

— Mitch Wagner, West Coast Bureau Chief, Light Reading. Got a tip about SDN or NFV? Send it to [email protected]

Mitch Wagner 3/13/2015 | 11:36:10 AM
Re: So the magic potion is being distributed .... Yeah, if one of your big marketing pitches is that Facebook won't share, it's awkward if Facebook turns around and shares a short time later!
Mitch Wagner 3/13/2015 | 11:19:52 AM
Re: Not all of the secret sauce... Not many want to roll their own data centers, but more might be willing to pay someone else to roll data centers using Facebook technology. 
jbtombes 3/12/2015 | 9:04:32 AM
Re: So the magic potion is being distributed .... The tide is shifting. Google becoming an MVNO - albeit a small one - another bit of news from MWC. Facebook pushing Internet.org. And in addition to moving toward independence from carriers, FB is here breaking free from traditional technology vendors - or at least forging brand new kinds of relationships with them. 
nasimson 3/11/2015 | 11:42:52 PM
Daring move Didn't expect facebook would reveal its secret sauce to the world. Appears that Zuckerberg thinks BIG in every domain. Facebook has no competition near it, at least for now, so it can afford such daring moves.
mhhf1ve 3/11/2015 | 10:07:57 PM
Not all of the secret sauce... Facebook can do this because it's not their main business to create hardware/software stacks -- that's just their biggest expenditure. So it doesn't disrupt facebook's business at all if the talent to support their infrastructure gets widespread and cheaper. Google does similar things, too, but you'll notice none of FB or Google's proprietary algorithms are open source. 

It'll be interesting to see how Cisco and the like respond to these open projects that directly target their main business. Not too many corporations will want to "roll their own" datacenters like Facebook and Google, but someday it might be much much easier to do so with open source packages -- and then it'll be a matter of paying for support services. So I'm guessing Cisco might become more like IBM with consulting and IT services becoming more important. Or I suppose there might be some kind or Oracle analogy in here....
somanvenugopal 3/11/2015 | 8:58:36 AM
So the magic potion is being distributed .... So will this have a grave impact on players like Ericsson, who quite recently at MWC launched similar solutions in partnership with Intel, where the key argument was that players like FB, Google and so on were reluctant to share their secrets with the larger world!!

Will be an interesting watch of how it unfolds ! Looks like a planned move ........