ANAHEIM, Calif. -- OSA Executive Forum -- Google wants faster, cheaper data center components and it wants them yesterday. According to the executive in charge of architecting the company's data centers, the industry isn't serving up innovation anywhere near fast enough.
While other hyperscale data center operators are considering jumping directly from 100G to 400G, Google (Nasdaq: GOOG) has already decided to take the intermediate step to 200G, because 400G technology won't meet Google's cost goals soon enough, according to Hong Liu, senior principal engineer of Google Technical Infrastructure.
There's no way to tell whether Google is resigned to building its strategy around 200G or whether it's a source of outright irritation, but it was clear Liu wasn't happy about the current pace of innovation, whether for 200G or 400G.
So Google is challenging the optical community to optimize for high bandwidth, good linearity and suitability for WDM integration, she told the audience at the OSA Executive Forum, which is being held in conjunction with OFC here this week.
Liu's specific challenges focused on the basics: lasers. The practical options are DFBs, VCSELs and silicon photonics, and while each has advantages, they all also have drawbacks.
DFB lasers have long reach, to about 10km, but cost more, partly because of their packaging, and require a high driving voltage. VCSELs offer lower cost and low power consumption, but also lower bandwidth and a reach of only about 100m. VCSELs do, however, allow non-hermetic packaging, which is simpler to manufacture.
Silicon photonics offers high bandwidth and a 2km reach, promises low cost, has easier packaging and can be stacked on top of CMOS. But it is by no means as easy to produce as CMOS; if the manufacturing process isn't controlled carefully, the chips just don't work, she said. The devices also consume a lot of power.
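The tradeoffs Liu outlined can be summarized in a quick sketch. The figures below simply restate what was reported above; the field names and helper function are illustrative, not any Google specification:

```python
# Laser-technology tradeoffs as described by Liu (illustrative summary).
# Figures restate the talk; field names are our own.
LASER_OPTIONS = {
    "DFB": {
        "reach_m": 10_000,        # ~10km reach
        "cost": "high",           # driven partly by packaging
        "note": "high driving voltage",
    },
    "VCSEL": {
        "reach_m": 100,           # ~100m reach
        "cost": "low",
        "note": "low power, lower bandwidth; non-hermetic packaging OK",
    },
    "silicon photonics": {
        "reach_m": 2_000,         # ~2km reach
        "cost": "promises low",
        "note": "stacks on CMOS, but unforgiving manufacturing; high power",
    },
}

def longest_reach() -> str:
    """Return the technology with the longest stated reach."""
    return max(LASER_OPTIONS, key=lambda k: LASER_OPTIONS[k]["reach_m"])
```

On these numbers, `longest_reach()` picks the DFB, which is exactly why Google's wish list starts from the DFB and asks for VCSEL-like cost and power.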
Google wants a device that combines the best characteristics of the three, Liu said: a DFB laser in an inexpensive package with low power usage and long reach.
Data centers are developing at an accelerating rate, she observed. The 10G era lasted seven years; the industry got to the end of 40G in less than four years; and the 100G generation the industry is moving to right now is likely to endure for less than three years, "and then we will have to move on," she said.
And lest anyone think of taking Google's demands lightly, Liu reminded the audience that Google was right the last time it challenged the industry.
"Listen to Google. We asked for QSFP28 100G CWDM five years ago and we met with resistance."
Liu's prepared remarks focused on what Google wants from the industry; the 200G strategy cropped up during the Q&A session that followed, when she was asked whether Google expected to go from 100G to 200G or to 400G.
The technology has to be there first, and it isn't there for 400G, she said, invoking history. Some people still think that the pit stop at 40G wasn't necessary, but in Google's estimation, it was.
"People were saying 'Let's do 100G,' but the electrical I/O wasn't ready. Doing 4x25 was expensive. So we added 40G. There's no significant advantage of going to 400G instead of 200G; it's too expensive. Also, to do 4x100, the ICs and components aren't there. It's not as efficient as going to 200G."
She also rejected 8x50 and 16x25 as impractical.
One thing to worry about, she noted: getting to 400G via a 2x200G implementation might require an entirely new package.
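The arithmetic behind these options is simple lane multiplication. A minimal sketch of the 400G configurations mentioned, with viability notes paraphrasing Liu's comments (the names and structure here are ours, for illustration only):

```python
# Aggregate rate = lanes x per-lane rate (Gbit/s).
# These are the 400G lane configurations discussed in the article;
# the comments paraphrase Liu's assessments, not a formal analysis.
CONFIGS_400G = [
    (4, 100),   # 4x100: the ICs and components aren't there yet
    (8, 50),    # 8x50: rejected as impractical
    (16, 25),   # 16x25: rejected as impractical
    (2, 200),   # 2x200: might require an entirely new package
]

def aggregate(lanes: int, per_lane_gbps: int) -> int:
    """Total throughput of a multi-lane configuration in Gbit/s."""
    return lanes * per_lane_gbps
```

Every option lands at the same 400 Gbit/s aggregate; the argument is entirely about which lane count and per-lane rate the component ecosystem can deliver cheaply, which is why Google sees 2x100 at 200G as the nearer-term step.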
— Brian Santo, Senior Editor, Components, T&M, Light Reading