Google Wants Variable-Rate Ethernet

It's a fringe requirement, but we're guessing we'll hear more about it

Craig Matsumoto, Editor-in-Chief, Light Reading

March 27, 2013


The Institute of Electrical and Electronics Engineers Inc. (IEEE) isn't working on a variable-speed Ethernet standard yet, but Google is pushing for one.

Variable-speed Ethernet was a point of focus for Bikash Koley, Google's principal architect and manager of network architecture, during a panel session on the last day of OFC/NFOEC last week. Instead of running a connection at 100 Gbit/s or 400 Gbit/s, the two standard choices, he'd like to pick arbitrary speeds. And he offered a bit of a carrot to the component and systems vendors in the audience: "The reason we keep forcing the industry to reduce cost of goods is because we cannot use them efficiently," Koley said.

The technology on the optical side is actually ready for what he's asking: variable-speed transceivers and flexible-grid ROADMs already exist. What's missing is on the packet side: a media access control (MAC) layer capable of dealing with a variable-bit-rate physical (PHY) layer. "We have an adaptive programmable PHY layer ready to go, but we don't have the glue to the packet layer," he said.

Fans of a flexible-rate physical layer are still considered a fringe camp, just like -- well, like fans of the TV show Fringe, maybe. But this is the second time in six months that the issue has come up at a conference. The first was in conversation at Light Reading's Ethernet Expo, where Ron Johnson, a director of product management at Cisco Systems Inc., brought up the idea. He was disappointed that the IEEE stuck with the usual single-rate format when it came to 400 Gbit/s. (See Why a 400G Standard Might Draw Complaints.)

Right now, if Google wanted, say, 135 Gbit/s between two data centers, it would have to provision a 200 Gbit/s pipe, or perhaps 150 Gbit/s by link-aggregating 15 10-Gbit/s lines. Either way, Koley has two problems with that kind of over-provisioning.
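The stranded-capacity arithmetic behind Koley's complaint is simple enough to sketch. The snippet below is a toy illustration (not from the article, and not any Google tooling): it compares the two provisioning options for a hypothetical 135 Gbit/s demand, rounding up to a single standard pipe versus aggregating lower-rate links.

```python
# Toy illustration: stranded capacity when a desired rate must be
# rounded up to a standard link speed or built from aggregated links.

def overprovision(demand_gbps: float, pipe_gbps: float):
    """Return (stranded capacity in Gbit/s, utilization fraction)."""
    stranded = pipe_gbps - demand_gbps
    return stranded, demand_gbps / pipe_gbps

demand = 135  # the article's example: 135 Gbit/s between data centers

# Option 1: round up to a single 200 Gbit/s pipe.
stranded, util = overprovision(demand, 200)
print(f"200G pipe:  {stranded} Gbit/s stranded, {util:.0%} utilized")

# Option 2: link-aggregate 15 x 10 Gbit/s lines (150 Gbit/s total).
stranded, util = overprovision(demand, 15 * 10)
print(f"15x10G LAG: {stranded} Gbit/s stranded, {util:.0%} utilized")
```

Either way a meaningful slice of the most expensive links in the network sits idle; a variable-rate PHY would let the pipe match the demand exactly.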
First, Koley said, it's an inefficient use of what turns out to be the most expensive part of his network: the links connecting data centers across thousands of kilometers. Google is loath to leave capacity on those pipes unused.

Second, higher-speed links generally have to travel shorter distances before the signal needs regenerating. "It's not about just the capacity. If I have to go and put a regen [i.e., optical regeneration] every 600 km, that fundamentally is a no-go," he said.

It was an interesting perspective at a conference otherwise obsessed with shrinking the power, size and cost of optical modules. "Those [features] have actually very small incremental value. The real value you have in a network of our scale is in optimizing the resource you have," Koley said.

Granted, not everybody is Google. But Google has a way of forcing a conversation.

— Craig Matsumoto, Managing Editor, Light Reading

About the Author(s)

Craig Matsumoto

Editor-in-Chief, Light Reading

Yes, THAT Craig Matsumoto – who used to be at Light Reading from 2002 until 2013 and then went away and did other stuff and now HE'S BACK! As Editor-in-Chief. Go Craig!!

