Light Reading

Facebook: Yes, We Need 100-GigE

News Analysis
Craig Matsumoto
9/16/2009

It's become cliché to say that companies like Facebook would use 100-Gbit/s Ethernet right now if they had it. But it helps when someone from Facebook actually shows up and hammers on that point.

Facebook network engineer Donn Lee did that yesterday, pleading his case at a technology seminar on 40- and 100-Gbit/s Ethernet, hosted in Santa Clara, Calif., by The Ethernet Alliance.

Representatives from Google (Nasdaq: GOOG) and the Amsterdam Internet Exchange B.V. (AMS-IX) gave similar pleas, but Lee's presentation included some particularly sobering numbers. He said it's reasonable to think Facebook will need its data center backbone fabric to grow to 64 Tbit/s total capacity by the end of next year.

How to build such a thing? Lee said his ideal Ethernet box would have 16-Tbit/s switching capacity and 80 100-Gbit/s Ethernet ports or 800 10-Gbit/s Ethernet ports.
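
A quick back-of-envelope check of those numbers (my arithmetic, assuming the switching capacity counts each port in both directions): 80 ports at 100 Gbit/s and 800 ports at 10 Gbit/s both work out to the same 16 Tbit/s.

    # Back-of-envelope check of Lee's "ideal box" spec. Assumes the 16-Tbit/s
    # switching figure counts each port full duplex (both directions).
    ports_100g, ports_10g = 80, 800

    capacity_100g = ports_100g * 100e9 * 2   # bit/s, full duplex
    capacity_10g = ports_10g * 10e9 * 2

    print(capacity_100g / 1e12)   # 16.0 (Tbit/s)
    print(capacity_10g / 1e12)    # 16.0 (Tbit/s)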

No such box exists commercially, and Lee is reluctant to go build his own.

That leaves him with an unpleasant alternative. Lee drew up a diagram of what Facebook's future data center fabric -- that is, the interconnection of its switch/routers -- would look like if he had to use today's equipment and 10-Gbit/s Ethernet. Instead of the familiar criss-crossing mesh diagram, he got a solid wall of black, signifying just how many connections he'd need.
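
To get a feel for why the picture fills in, here's a rough sketch in Python (the switch count and per-trunk bandwidth are hypothetical numbers chosen for scale, not Facebook's disclosed design): a full mesh of N switches needs N*(N-1)/2 trunks, and a trunk that needs 400 Gbit/s of capacity is four parallel links at 100GbE but forty at 10GbE.

    # Rough illustration of the "wall of black": link counts for a hypothetical
    # full mesh of fabric switches. The inputs are assumptions for scale only.
    def mesh_links(switches, gbps_per_trunk, gbps_per_link):
        trunks = switches * (switches - 1) // 2                 # full-mesh trunk count
        links_per_trunk = -(-gbps_per_trunk // gbps_per_link)   # ceiling division
        return trunks * links_per_trunk

    print(mesh_links(16, 400, 100))   # 480 links if each trunk is 4 x 100GbE
    print(mesh_links(16, 400, 10))    # 4800 links if each trunk is 40 x 10GbE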

"I would say anybody in the top 25 Websites easily has this problem," he said later. (Lee didn't say anything about how long it would take just to plug in all those fibers. Maybe that job could be created by funds from the U.S. Recovery Act.)

Lee also presented charts showing the disconnect between Facebook's wish list and the market. Facebook needed 512 10-Gbit/s Ethernet ports per chassis in 2007 and is likely to need 1,500 in 2010. No chassis offers more than 200 ports, he said.
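
Taken at face value, that gap means stacking chassis just to terminate the ports. A minimal sketch of the arithmetic (mine, not Lee's), which ignores the ports burned on interconnecting the chassis:

    import math

    # Chassis needed just to supply the 10GbE ports Lee cited for 2010,
    # using the per-chassis ceiling he quoted for current products.
    ports_needed_2010 = 1500
    ports_per_chassis = 200

    print(math.ceil(ports_needed_2010 / ports_per_chassis))   # 8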

Even though Lee is a veteran of Google and Cisco Systems Inc. (Nasdaq: CSCO), you might wonder if he's just one renegade engineer who doesn't represent the Facebook norm. Not really. It turns out Facebook has only five network engineers -- although Lee said that's a 20 percent increase from the spring of 2008 (which would mean they had approximately 4.17 engineers at that time).

Even though 100-Gbit/s development started four years ago, Lee thinks it came too late, and that's got him worried about the next generation. He's pulling for 400-Gbit/s Ethernet discussions to start right away.

"Let's start the work that doesn't require money, now," he said. "If we have the standard, we can build the product later. I don't mind using an old standard."

He might get his wish. The Optoelectronics Industry Development Association (OIDA) is already organizing meetings with an aim toward getting federal money for terabit Ethernet research, said John D'Ambrosia, a Force10 Networks Inc. scientist who helped organize yesterday's event.

Of course, money is a major obstacle to the next wave of Ethernet.

During an open comment and Q&A session, multiple audience members pointed out that optical component margins are too thin to fund advanced research at many companies, and that carriers are watching over-the-top services make money off their big, expensive networks. "There's no revenue in all that bandwidth increase," one audience member commented, citing the carrier case in particular.

— Craig Matsumoto, West Coast Editor, Light Reading

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 3:56:22 PM
re: Facebook: Yes, We Need 100-GigE

Lee presented graphs of how Facebook traffic can surge at certain times. The audience got to guess what the causes were.

Here are a few of his examples (purely from memory, so they don't have five-9s accuracy).  Take your best guesses. No fair if you were at the meeting, or if you work for a social networking site!

1. Nov. 1, 2008, most of the morning.

2. Feb. 1, 2009, sporadic spikes throughout the evening (Eastern time)

3. June 25, 2009, one sudden, enormous burst, trailing off hyperbolically

4. Entire week of Aug. 31, 2009 -- traffic consistently higher than the previous week, for 24 hours every day (but retaining the same shape, just shifted upwards)

ninjaturtle, User Rank: Light Beer
12/5/2012 | 3:56:21 PM
re: Facebook: Yes, We Need 100-GigE
INFN to the rescue...sorry couldn't resist. Google already is a customer. Is it feasible to "Kevin Bacon" that to INFN also???? Ka-ching!
TrojanReal, User Rank: Light Beer
12/5/2012 | 3:56:20 PM
re: Facebook: Yes, We Need 100-GigE

" Lee drew up a diagram of what Facebook's future data center fabric -- that is, the interconnection of its switch/routers -- would look like if he had to use today's equipment and 10-Gbit/s Ethernet. Instead of the familiar criss-crossing mesh diagram, he got a solid wall of black, signifying just how many connections he'd need."

Can someone clarify whether the problem described above is primarily due to having fixed optical connections between sites (as opposed to dynamic optical connections on nanosecond or microsecond timescales at 10 Gbit/s, as in optical packet or burst switching)? Or is it due to each connection requiring 100G of bandwidth?

Thanks
TR

chaz6, User Rank: Light Beer
12/5/2012 | 3:56:20 PM
re: Facebook: Yes, We Need 100-GigE

I do wonder why they desperately need to use Ethernet. Take for example the Mellanox IS5600; it has 648 ports of 40 Gbit/s with a switching capacity of 51.8 Tbit/s, far exceeding the numbers bandied about by Facebook.

savy.tech, User Rank: Light Beer
12/5/2012 | 3:56:19 PM
re: Facebook: Yes, We Need 100-GigE

Answer to your puzzle
------------------------

"June 25, 2009, one sudden, enormous burst, trailing off hyperbolically"

The death of Michael Jackson caused traffic spikes on most websites, including Twitter and other news sites.


abashford, User Rank: Light Beer
12/5/2012 | 3:56:16 PM
re: Facebook: Yes, We Need 100-GigE

"No such box exists commercially, and Lee is reluctant to go build his own."


They could get a lot more interest from manufacturers if they could indicate to the market that more than 2 of these 'boxes' are required. :)


I have no knowledge of Facebook's architecture; does anyone know how many major data centres they maintain? Is it two, or are they more distributed? Thanks.

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 3:56:13 PM
re: Facebook: Yes, We Need 100-GigE

abashford -- He'd need 16 of them, if I understand correctly. And more in the future, of course -- but not thousands of them, admittedly.


During the Q&A, someone actually brought up your point about volumes, and that's when Lee made the "top 25 web sites" remark, IIRC.  He believes there's a hungry high-end market.


(Google apparently made similar comments earlier in the day, but I wasn't present for that session.  People wanted to bring him into the Facebook discussion, but he'd already left.)

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 3:56:13 PM
re: Facebook: Yes, We Need 100-GigE

TrojanReal -- It's due to needing 100 Gbit/s bandwidth.  He also said there are spots where they'd be link-aggregating 100GEs, if they could.

abashford, User Rank: Light Beer
12/5/2012 | 3:56:12 PM
re: Facebook: Yes, We Need 100-GigE

Sounds like a large investment for a potentially small market.  But then, that is what people were saying about the CRS-1 before it sold in the Nx100's.  If you are the only option, it could be a lucrative market.


It would seem more efficient to come up with a cost-effective and simple way to mesh multiple smaller devices so they look like one large one. That way, you can invest in the development of a device that has a much larger target market. I have yet to see anyone pull this off, however...


Was there any discussion of potential technologies to do this?


 

Pete Baldwin, User Rank: Light Beer
12/5/2012 | 3:56:12 PM
re: Facebook: Yes, We Need 100-GigE

Ah - the "20 percent increase" cited should be 25 percent.  That's my math mistake, not Lee's (the parenthetical was added in edit).


I'm losing my math chops.  I blame Twitter.
