
Rules Change for Hyperscale Data Centers

Inside hyperscale data centers there is an insatiable appetite for the fastest equipment and connections possible, which means literal forklift upgrades every three years. It also means ordering equipment in quantities so large that the usual market dynamics no longer apply. Unconventional technologies, such as flexible Ethernet and on-board optics, become attractive.

Oh, and there seems to be little interest in white boxes at the hyperscale level.

Ever try to get a tour of a hyperscale data center? It's easier to get a prom date with Mila Kunis. So it's a big deal to have someone on hand who runs several score of those things to explain how they work. Two speakers fit that description at Light Reading's "Next-Generation Network Components" event last week: Brad Booth, principal engineer at Microsoft Corp. (Nasdaq: MSFT) Azure Networking, and Ihab Tarazi, CTO of Equinix Inc. (Nasdaq: EQIX).

Microsoft Azure currently runs about 100 data centers globally, equipped with more than 1.4 million servers (and counting). Most of those connections run at 10G and 40G today, with a ramp to 25G and 100G on the way, Booth said.

Cloud computing is growing at such eye-popping rates that Microsoft Azure will install the fastest equipment and connectivity it can get the moment it can get it, Booth explained.

"We're planning to go 50G to each server. Our core will be 100G. That leaves little difference between data center and core. That's why we're looking at 400G," he said.

He noted that the IEEE committee developing the standard for 400G recently announced a ten-month slip in its schedule, so that the standard is now due in December of 2017.

"People ask me, are you interested in 400G? Yeah, I'd buy it today. 1.6 terabits? I'd buy it today."

Ordinarily a vendor brings a product to market, and sales are slow at first as early adopters test it out. If the product succeeds, the sales chart takes on the classic hockey-stick shape -- relatively flat and then turning sharply upward.

When Microsoft Azure makes a major upgrade, it happens immediately. "We walk in and say we need tens of thousands of these things this week. It changes the economics," Booth noted. "We're at the hockey stick on day one."

Hyperscale data centers are huge, measuring in multiples of sports arenas, so finding space for another rack is not an issue. The problem is that there isn't enough real estate on server faceplates. Vendors are working feverishly to make interconnect smaller and denser, but it's a tough slog, and they still need to leave space for airflow vents.
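To get a feel for the squeeze, consider a rough faceplate calculation. The dimensions below are generic QSFP28 figures used purely for illustration, not any vendor's spec.

```python
# Rough illustration of the faceplate bottleneck, using generic QSFP28
# dimensions as assumptions -- not any particular vendor's spec.
FACEPLATE_WIDTH_MM = 440       # approx. usable width of a 19-inch 1RU faceplate
CAGE_WIDTH_MM = 18.35          # nominal QSFP28 cage width
ROWS = 2                       # stacked cages, two rows high

ports = ROWS * int(FACEPLATE_WIDTH_MM // CAGE_WIDTH_MM)
print(f"QSFP28 cages that fit geometrically: {ports}")            # 46
print(f"Front-panel bandwidth at 100G per port: {ports * 100}G")
# And that ceiling ignores airflow vents, management ports and LEDs,
# which is exactly the real estate problem Booth describes.
```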

Microsoft is a member of the Consortium for On-Board Optics (COBO), along with Broadcom, Juniper, Cisco, Finisar, Intel and others. The idea is to move the optical interconnect off the faceplate and onto the board itself, which also shortens the distance between connections, so cheaper copper cabling remains practical.

With standard pluggable interconnect, if a connector goes bad, you just swap it out. With on-board optics, though, a failed component is inaccessible.

Booth is undeterred. He has a forklift and no fear of moving entire racks when necessary.

He's also an advocate of flexible Ethernet.

To get to 40G, it is possible to combine four 10G links. But there is still no support for freely mixing and matching rates to reach whatever increment you want. If you have a link that supports 150G, you don't have the option of filling it with a 100G and a 50G.

"I don't want to be constrained to 100G," Booth said. "You go 80 kilometers, 100 kilometers? That gets expensive. If I can get 1G more out of them, it's worth it."

Equinix specializes in interconnection and colocation. The company has 105 data centers and intends to grow that number to 150. Tarazi boasted of more than 1,000 peering agreements, including the most connections to the biggest hyperscale data center companies.

"We connect with AWS, Azure, Cisco, IBM. You can send 80% of your traffic on one connection." The approach is new in the last 18 months, he explained.

The benefits? "We improve performance," Tarazi said. Customers build part of their networks inside Equinix data centers to establish Internet connectivity without ever traversing the public Internet. He said the approach can reduce a customer opex by 25%. Latency reduction can be up to 40%. "That's enormous," he said. "Now video conferencing works."

The company plans to have 12 to 20 switches in every data center. To scale that up, it went fully SDN, using Tail-f (now part of Cisco).

"My personal perspective? This cannot be done without SDN," Tarazi said.

It's not just SDN but NFV as well. "With Azure, the cloud automatically activates capacity with no human touching it. NFV is activating capacity on demand," he noted.
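Tarazi didn't describe Equinix's internals, but the general shape of SDN-driven provisioning looks something like the sketch below: the same configuration pushed to a fleet of switches over NETCONF, with no human touching it. Hostnames, credentials and the payload are invented for illustration; this is not Equinix's system or the Tail-f/Cisco API.

```python
# Generic sketch of SDN-style provisioning over NETCONF (using the ncclient
# library). Hostnames, credentials and the payload are illustrative only --
# this is not Equinix's system or the Tail-f/Cisco NSO API.
from ncclient import manager

SWITCHES = [f"dc1-leaf{i:02d}.example.net" for i in range(1, 13)]  # 12 per data center

PORT_CONFIG = """
<config>
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>eth1/1</name>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

def activate(host):
    """Push the same interface config to one switch and commit it."""
    with manager.connect(host=host, port=830, username="automation",
                         password="secret", hostkey_verify=False) as conn:
        conn.edit_config(target="candidate", config=PORT_CONFIG)
        conn.commit()

for host in SWITCHES:
    activate(host)   # capacity activated with no human touching it
```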

Asked about white boxes, he said, "We haven't gone into white boxes yet. Going to Juniper or Cisco, they can give us better chips. We don't need to go to white boxes."

— Brian Santo, Senior Editor, Components, T&M, Light Reading
