
Google: 100 Gbit/s? How 'Bout 8 TBs Per Second?

NEW YORK -- Packet-Optical Transport Evolution -- Google is seeing its machine-to-machine traffic rise, and that's leading the company to reconsider the optical network that connects its massive data centers.

Senior network architect Bikash Koley described the company's thinking during yesterday's afternoon keynote.

The machine traffic comes from Google's obsession with providing the same user experience to anyone in the world. A search in Japan should be just as fast as one in the United States. But to produce search results, Google has to scan CPUs and storage units that are scattered throughout the world. Ideally Google's applications see all that computing as one pool, a massive cloud -- but the machines have to send a lot of messages to get the information they want.

On top of that, Google picks data center locations that have green options for electricity, such as hydroelectric power. And because it's got so many CPUs that need so much cooling, it prefers cold locations, Finland being a recent choice. In other words, these data centers aren't near any convenient network hubs.

So, the traffic from one data center to another has to span long distances and doesn't need any add/drop capabilities in the middle. "In many ways it pretty much looks like a submarine network -- you're going 5,000 km and doing nothing else," Koley said.

But the entire network can't be built like that, because Google's traffic to users -- answers to a search query, for instance -- must go through add/drop nodes.

So, Google is trying to decide how best to carry these two types of traffic. Would it be better to carry all the traffic together, putting the machine traffic on wavelengths that bypass the local stops? Should the machine traffic have its own network, a kind of optical express?

Studying the costs alone, Koley noted that the all-in-one network makes sense as long as the machine-to-machine traffic and the user traffic are roughly equal in volume.

Were the machine traffic to double -- which is entirely possible, Koley stressed -- it would become cheaper to build separate networks, one of which would carry the machine traffic along those uninterrupted, submarine-like distances.
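The crossover Koley describes can be sketched with a toy cost model. Everything below is an illustrative assumption -- the cost coefficients, the fixed cost of standing up a second line system, and the function names are hypothetical, not figures from the talk. The only point it demonstrates is the shape of the trade-off: a shared network pays add/drop costs on all traffic, while a separate point-to-point "express" network avoids them but duplicates infrastructure.

```python
# Toy cost model for the integrated-vs-separate trade-off.
# All coefficients are made-up illustrative numbers, not data from the talk.

def integrated_cost(user_tb, machine_tb, line_cost=1.0, adddrop_cost=0.5):
    """One shared network: every terabit pays for line capacity,
    and all wavelengths traverse the add/drop sites."""
    total = user_tb + machine_tb
    return total * (line_cost + adddrop_cost)

def separate_cost(user_tb, machine_tb, line_cost=1.0, adddrop_cost=0.5,
                  fixed_express_cost=0.9):
    """Two networks: user traffic keeps its add/drop nodes; machine
    traffic rides a point-to-point express network with no add/drop,
    at the fixed cost of building a second line system."""
    user = user_tb * (line_cost + adddrop_cost)
    express = machine_tb * line_cost + fixed_express_cost
    return user + express

for ratio in (1, 2, 4):  # machine traffic as a multiple of user traffic
    user, machine = 1.0, float(ratio)
    a = integrated_cost(user, machine)
    b = separate_cost(user, machine)
    winner = "separate" if b < a else "integrated"
    print(f"machine/user = {ratio}: integrated {a:.2f}, "
          f"separate {b:.2f} -> {winner} wins")
```

With these assumed numbers the integrated network wins when the two traffic types are equal, and the separate express network wins once machine traffic doubles -- the same qualitative crossover Koley sketched.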

Google still hasn't decided which architecture to use, Koley said. (He didn't specify what the company is doing now.)

Regardless of how the network looks, Google is dead set on one thing: it wants label-switched routing and DWDM capabilities to be combined into one box. It doesn't matter if that's a label-switched router (LSR) with DWDM added, or a DWDM box with Layer 3 knowledge added, Koley said. (He also stressed that the LSR doesn't have to be a full-blown router.)

Of course, Koley also sneaked an obligatory "We want 100 Gbit/s" plea into his talk. "If somebody can give me a cost-effective 8-Tbit/s pipe, I can fill it up," he said, referring to 80 wavelengths of 100 Gbit/s each.

— Craig Matsumoto, West Coast Editor, Light Reading

Pete Baldwin 12/5/2012 | 4:35:24 PM
re: Google: 100 Gbit/s? How 'Bout 8 TBs Per Second?

Headline aside, what was interesting about this talk was the detail about different network options.  Koley talks really fast - it was a lot like being back in school, trying to keep up.  He started out with the obligatory big numbers, showing how much data Google shovels around and how quickly, then got into some really interesting network details.  Good talk.
