100G Ethernet

Facebook: Yes, We Need 100-GigE

It's become cliché to say that companies like Facebook would use 100-Gbit/s Ethernet right now if they had it. But it helps when someone from Facebook actually shows up and hammers on that point.

Facebook network engineer Donn Lee did that yesterday, pleading his case at a technology seminar on 40- and 100-Gbit/s Ethernet, hosted in Santa Clara, Calif., by The Ethernet Alliance.

Representatives from Google (Nasdaq: GOOG) and the Amsterdam Internet Exchange B.V. (AMS-IX) gave similar pleas, but Lee's presentation included some particularly sobering numbers. He said it's reasonable to think Facebook will need its data center backbone fabric to grow to 64 Tbit/s total capacity by the end of next year.

How to build such a thing? Lee said his ideal Ethernet box would have 16-Tbit/s switching capacity and 80 100-Gbit/s Ethernet ports or 800 10-Gbit/s Ethernet ports.

No such box exists commercially, and Lee is reluctant to go build his own.

That leaves him with an unpleasant alternative. Lee drew up a diagram of what Facebook's future data center fabric -- that is, the interconnection of its switch/routers -- would look like if he had to use today's equipment and 10-Gbit/s Ethernet. Instead of the familiar criss-crossing mesh diagram, he got a solid wall of black, signifying just how many connections he'd need.
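To get a rough feel for those numbers, here is a back-of-the-envelope sketch in Python (the figures and helper names are mine, for illustration; this is not from Lee's slides). It counts how many physical links a 64-Tbit/s fabric implies at 10 Gbit/s versus 100 Gbit/s, and how quickly the cable count grows once those links are spread across a full mesh of switch/routers.

    # Back-of-the-envelope sketch -- my arithmetic, not Lee's slides.
    # Assumption: the backbone fabric must carry 64 Tbit/s of aggregate
    # capacity, built from either 10GE or 100GE links.
    import math

    TARGET_GBPS = 64_000  # 64 Tbit/s, Lee's end-of-2010 estimate

    def links_needed(target_gbps, link_gbps):
        """Parallel links required to reach the target at a given link speed."""
        return math.ceil(target_gbps / link_gbps)

    print(links_needed(TARGET_GBPS, 10))    # 6,400 links at 10 Gbit/s
    print(links_needed(TARGET_GBPS, 100))   #   640 links at 100 Gbit/s

    # Spreading those links across a full mesh of n switch/routers means a
    # separate bundle for every pair of boxes, n*(n-1)/2 bundles in all,
    # which is why the 10GE-only diagram fills in to a solid wall of black.
    for n in (8, 16, 32):
        print(n, "boxes ->", n * (n - 1) // 2, "switch-to-switch bundles")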

"I would say anybody in the top 25 Websites easily has this problem," he said later. (Lee didn't say anything about how long it would take just to plug in all those fibers. Maybe that job could be created by funds from the U.S. Recovery Act.)

Lee also showed charts illustrating the disconnect between Facebook's wish list and the market. Facebook needed 512 10-Gbit/s Ethernet ports per chassis in 2007 and is likely to need 1,500 in 2010. No chassis offers more than 200 ports, he said.
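That gap is easy to put a number on. Here is a quick check (my arithmetic, using Lee's figures and assuming the 1,500 ports would have to sit behind a single logical switch built from today's densest chassis):

    # Quick gap check, using the figures above -- my arithmetic, not Lee's.
    import math

    ports_needed_2010 = 1500  # Lee's projected 10GE port requirement
    densest_chassis = 200     # roughly the biggest 10GE chassis on the market

    # At least 8 chassis, before burning any ports on the links needed
    # to stitch those chassis together into one logical switch.
    print(math.ceil(ports_needed_2010 / densest_chassis))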

Even though Lee is a veteran of Google and Cisco Systems Inc. (Nasdaq: CSCO), you might wonder if he's just one renegade engineer who doesn't represent the Facebook norm. Not really. It turns out Facebook has only five network engineers -- although Lee said that's a 20 percent increase from the spring of 2008 (which means they had approximately 4.165 engineers at that time).

Even though 100-Gbit/s development started four years ago, Lee thinks it came too late, and that's got him worried about the next generation. He's pulling for 400-Gbit/s Ethernet discussions to start right away.

"Let's start the work that doesn't require money, now," he said. "If we have the standard, we can build the product later. I don't mind using an old standard."

He might get his wish. The Optoelectronics Industry Development Association (OIDA) is already organizing meetings with an aim toward getting federal money for terabit Ethernet research, said John D'Ambrosia, a Force10 Networks Inc. scientist who helped organize yesterday's event.

Of course, money is a major obstacle to the next wave of Ethernet.

During an open commenting and Q&A session, multiple audience members pointed out that optical component margins are too thin to support advanced research at many companies, and that carriers are watching their big, expensive networks get used to make money for over-the-top services. "There's no revenue in all that bandwidth increase," one audience member commented, citing the carrier case in particular.

— Craig Matsumoto, West Coast Editor, Light Reading

Pete Baldwin 12/5/2012 | 3:56:22 PM
re: Facebook: Yes, We Need 100-GigE

 

Lee presented graphs of how Facebook traffic can surge at certain times. The audience got to guess what the causes were.

Here are a few of his examples (purely from memory, so they don't have five-9s accuracy).  Take your best guesses. No fair if you were at the meeting, or if you work for a social networking site!

1. Nov. 1, 2008, most of the morning.

2. Feb. 1, 2009, sporadic spikes throughout the evening (Eastern time)

3. June 25, 2009, one sudden, enormous burst, trailing off hyperbolically

4. Entire week of Aug. 31, 2009 -- traffic consistently higher than the previous week, for 24 hours every day (but retaining the same shape, just shifted upwards)

ninjaturtle 12/5/2012 | 3:56:21 PM
re: Facebook: Yes, We Need 100-GigE

INFN to the rescue... sorry, couldn't resist. Google already is a customer. Is it feasible to "Kevin Bacon" that to INFN also???? Ka-ching!
TrojanReal 12/5/2012 | 3:56:20 PM
re: Facebook: Yes, We Need 100-GigE

" Lee drew up a diagram of what Facebook's future data center fabric -- that is, the interconnection of its switch/routers -- would look like if he had to use today's equipment and 10-Gbit/s Ethernet. Instead of the familiar criss-crossing mesh diagram, he got a solid wall of black, signifying just how many connections he'd need."

Can someone clarify whether the problem described above is primarily due to having fixed optical connections between sites (as opposed to dynamic optical connections with timescales of nsec/usec duration at 10 Gbit/s, as in optical packet or burst switching)? Or is it due to each connection requiring 100G of bandwidth?

Thanks
TR

chaz6 12/5/2012 | 3:56:20 PM
re: Facebook: Yes, We Need 100-GigE

I do wonder why they desperately need to use Ethernet. Take, for example, the Mellanox IS5600; it has 648 ports of 40 Gbit/s with a switching capacity of 51.8 Tbit/s, far exceeding the number bandied about by Facebook.

savy.tech 12/5/2012 | 3:56:19 PM
re: Facebook: Yes, We Need 100-GigE

Answer to your puzzle

------------------------

"June 25, 2009, one sudden, enormous burst, trailing off hyperbolically"

The death of Michael Jackson caused traffic spikes on most websites, including Twitter and other news sites.

abashford 12/5/2012 | 3:56:16 PM
re: Facebook: Yes, We Need 100-GigE

"No such box exists commercially, and Lee is reluctant to go build his own."


They could get a lot more interest from manufacturers if they could indicate to the market that more than 2 of these 'boxes' are required. :)


I have no knowledge of Facebook's architecture; does anyone know how many major data centres they maintain? Is it 2, or are they more distributed? Thanks.

Pete Baldwin 12/5/2012 | 3:56:13 PM
re: Facebook: Yes, We Need 100-GigE

abashford -- He'd need 16 of them, if I understand correctly. And more in the future, of course -- but not thousands of them, admittedly.


During the Q&A, someone actually brought up your point about volumes, and that's when Lee made the "top 25 web sites" remark, IIRC.  He believes there's a hungry high-end market.


(Google apparently made similar comments earlier in the day, but I wasn't present for that session.  People wanted to bring him into the Facebook discussion, but he'd already left.)

Pete Baldwin 12/5/2012 | 3:56:13 PM
re: Facebook: Yes, We Need 100-GigE

TrojanReal -- It's due to needing 100 Gbit/s bandwidth.  He also said there are spots where they'd be link-aggregating 100GEs, if they could.

abashford 12/5/2012 | 3:56:12 PM
re: Facebook: Yes, We Need 100-GigE

Sounds like a large investment for a potentially small market.  But then, that is what people were saying about the CRS-1 before it sold in the Nx100's.  If you are the only option, it could be a lucrative market.


It would seem more efficient to actually come up with a cost-effective and simple way to solve the meshing of multiple smaller devices to look like one large one.  That way, you can invest in the development of a device that has a much larger target market.  I have yet to see anyone pull this off however...


Was there any discussion of potential technologies to do this?


 

Pete Baldwin 12/5/2012 | 3:56:12 PM
re: Facebook: Yes, We Need 100-GigE

Ah - the "20 percent increase" cited should be 25 percent.  That's my math mistake, not Lee's (the parenthetical was added in edit).


I'm losing my math chops.  I blame Twitter.
