ANAHEIM, Calif. -- OSA Executive Forum/OFC 2016 -- Just as the data center interconnect (DCI) system market was taking off, one of the biggest potential buyers of dedicated DCI boxes, Microsoft, has wiped out a whole chunk of that market courtesy of an alternative R&D collaboration with optical components vendor Inphi Corp.
Speaking here at the OSA Executive Forum Monday afternoon, Tom Issenhuth, Optical Network Architect, Azure Networking, at Microsoft Corp. (Nasdaq: MSFT), laid out the hyperscale data center operator's DCI technology needs. All well and good -- the audience of about 250 attendees was suitably rapt.
But then he dropped the bombshell: For Microsoft's "sweet spot" of less than 80km inter-data center links, it has worked directly with Inphi to develop a new module that plugs directly into data center switches, obviating the need to deploy a dedicated DCI box.
The product, developed by Inphi and as-yet-unidentified partners, is a 100Gbit/s QSFP28 DWDM module that slots straight into data center switches from the likes of Arista Networks Inc. and Cisco Systems Inc. (Nasdaq: CSCO). Inphi and Microsoft plan to demonstrate the product, and how it changes the DCI landscape, during the OFC show here in Anaheim -- more details are set to become available Tuesday.
But the immediate implications of Microsoft's move are substantial.
For Microsoft, it cuts costs from both a capex and an opex perspective: one less box in the chain means less power, less space and fewer components.
For Inphi, a company that generated revenues of $246.6 million in 2015, the news that it will be delivering a new product to an enormous customer, starting in the second half of this year, looks set to move the needle on both its revenues and its share price, which closed Monday down 0.1% at $29.17.
And of course Microsoft might not be the only company that wants this product -- it is not exclusive to Microsoft. Light Reading asked Issenhuth whether Microsoft had held discussions with the likes of Google (Nasdaq: GOOG) and Facebook about the development: He said the companies chat about general market developments but that this particular move was one Microsoft was leading on its own. But "if it works for us, it should work for them too."
And then, of course, there's the impact on the systems vendors such as ADVA Optical Networking, Ciena Corp. (NYSE: CIEN), Coriant, Fujitsu Network Communications Inc., Infinera Corp. (Nasdaq: INFN) and Juniper Networks Inc. (NYSE: JNPR) (courtesy of its acquisition of BTI Systems), that have been developing and building dedicated DCI boxes to sell to the likes of Microsoft as well as the telcos. (See Juniper Flies Into DCI With BTI Acquisition.)
"This has potentially huge implications for the purpose-built DCI box vendors," stated Heavy Reading senior analyst Sterling Perrin. "They have built their products primarily for the Webscale companies, but Microsoft is essentially saying they don't need them, at least over these distances [80km or less]. And this could mushroom to Google and Facebook -- they could do things differently of course but even if this was just Microsoft, that's still a big hit to the DCI box market, which is still in its infancy. This looks like a big blow," added Perrin.
Look out for more details on this development on Tuesday.
— Ray Le Maistre, Editor-in-Chief, Light Reading
How many great companies have been reduced to a generic chipset and some lines of code, and soon forgotten?
If I understand it correctly, this is a racking issue: there will still be control, monitoring and management that have to occur. That may be reduced to a simple "card present" and "power is on," since data centers have 7x24 staff who can simply walk over and check the card, or pop a new one in and see what happens. CRC ARQ is probably not occurring in the optical module.
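(For what it's worth, that kind of stripped-down check is easy to script. Below is a minimal sketch, not tied to any vendor named in the article, that polls a switch port's operational status over SNMP using the standard IF-MIB ifOperStatus object -- roughly the "power is on / link is up" test the commenter describes. The switch address, community string and ifIndex are hypothetical placeholders, and a real deployment would obviously pull far richer telemetry from the module.)

    # Minimal monitoring sketch: is the DCI port up? Host and ifIndex are hypothetical.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    SWITCH = "192.0.2.1"   # hypothetical switch management address
    IF_INDEX = 49          # hypothetical ifIndex of the QSFP28 port
    OID = "1.3.6.1.2.1.2.2.1.8.%d" % IF_INDEX  # IF-MIB::ifOperStatus

    def port_is_up():
        """Return True if the port reports ifOperStatus == up(1)."""
        err_ind, err_stat, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),   # SNMP v2c, read-only
            UdpTransportTarget((SWITCH, 161)),
            ContextData(),
            ObjectType(ObjectIdentity(OID)),
        ))
        if err_ind or err_stat:
            return False                  # treat SNMP errors as "not up"
        return int(var_binds[0][1]) == 1  # 1 = up, 2 = down

    if __name__ == "__main__":
        state = "up" if port_is_up() else "down -- reseat or swap the module"
        print("DCI port is", state)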
As stated below, the second question is how far data centers are from the networks they connect to. I'm guessing not very far -- just off the highway, railroad or pipeline right of way with the buried fibre. For large enough data centers, it seems entirely likely that the long-haul provider would be more than happy to drop IP routers and DWDM gear it manages into the bunker as part of the deal. Or the data centers sit in the same industrial parks as the long-haul carrier. There may be a second part of the deal involving long-haul providers that we haven't heard about yet.
It might be interesting to look at how much of the server market is rented space or rented servers vs. companies running their own bunkers with their own servers.
Who can forget the C48EF and 801, or the Hayes AT command set?
What about MBI or Wang word processors?