
Data Center Drives 400G

Hyperscale and large data centers will need 400G connections to support rapidly growing server input/output (I/O), as the combination of higher server density and the shift to 25Gbit/s network interfaces pushes server I/O bandwidth per rack to 5 Tbit/s. Cloud service providers are already planning 400G deployments and expect widespread use of 400G within data centers by 2018. Many component vendors have introduced first-generation 400G components and are working on optimized solutions for introduction during 2017 and 2018.
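The 5 Tbit/s per-rack figure follows from simple multiplication. The sketch below shows one way the numbers can work out; the server count and NIC configuration are illustrative assumptions, not figures from the report:

```python
# Rough arithmetic behind a 5 Tbit/s-per-rack server I/O figure.
# The server and NIC counts below are illustrative assumptions only.

def rack_io_gbps(servers_per_rack, nics_per_server, nic_gbps):
    """Aggregate server I/O bandwidth per rack in Gbit/s."""
    return servers_per_rack * nics_per_server * nic_gbps

# e.g. a dense rack of 100 servers, each with dual 25Gbit/s interfaces:
total = rack_io_gbps(servers_per_rack=100, nics_per_server=2, nic_gbps=25)
print(total)  # 5000 Gbit/s, i.e. 5 Tbit/s
```

At that aggregate, even a dozen 400G uplinks per rack would be oversubscribed, which is why top-of-rack and spine links are the first pressure point.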

Cloud services accessed by individuals and businesses from mobile devices and fixed-line connections are driving a huge growth in global IP traffic. These services include online storage, social media, big data processing, Internet of Things (IoT) services and audio/video content delivery. Many of these services are hosted on hyperscale data centers operated by leading service providers, including Facebook, Google and Microsoft. The shift toward virtualization using software-defined networking (SDN) and network functions virtualization (NFV) is also driving growth in East-West traffic between servers.

The cost of 100G connections has come down significantly during the last two years as volume has grown and component vendors have developed cost-effective solutions, including 25Gbit/s lasers and serial interfaces. These developments have also created a new opportunity for service providers to upgrade network interfaces on servers from 10 Gbit/s to 25 Gbit/s. Vendors are now introducing the first solutions based on 50Gbit/s lanes using 25Gbit/s lasers and serial interfaces with PAM4 modulation or 50Gbit/s lasers and serial interfaces with NRZ. Several vendors are already working on single lambda 100G solutions.
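The two 50Gbit/s-lane options described above differ only in how many bits each symbol carries: NRZ encodes 1 bit per symbol, while PAM4 uses four amplitude levels to encode 2. A minimal sketch of that line-rate arithmetic:

```python
import math

# Line rate per lane = symbol rate (Gbaud) x bits per symbol.
# NRZ has 2 levels (1 bit/symbol); PAM4 has 4 levels (2 bits/symbol).

def lane_rate_gbps(gbaud, levels):
    """Per-lane line rate in Gbit/s for a multi-level signaling scheme."""
    return gbaud * math.log2(levels)

print(lane_rate_gbps(25, 4))  # 25Gbaud PAM4 -> 50.0 Gbit/s
print(lane_rate_gbps(50, 2))  # 50Gbaud NRZ  -> 50.0 Gbit/s
```

Either route yields a 50G lane, so eight lanes reach 400 Gbit/s; the PAM4 path reuses 25Gbit/s-class lasers and serial interfaces at the cost of a tighter signal-to-noise budget.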

Key specifications for 400G are becoming clear as the IEEE and other organizations move forward with standards for 400G. Optical port types for 100m, 500m, 2km and 10km have been agreed by the IEEE P802.3bs 400Gbit/s Ethernet Task Force, and several industry groups are developing 400G optical module form factors. Data center interconnects (DCI) are point-to-point links between data centers ranging from a few kilometers to several thousand kilometers. Many use flexi-rate coherent connections supporting 100 Gbit/s to 400 Gbit/s over different distances, depending on the modulation used. For short DCI links of less than 80km, direct-detect solutions with PAM4 modulation can also be used.
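The flexi-rate trade-off comes from changing the coherent modulation format at a fixed symbol rate: denser constellations carry more bits per symbol but reach shorter distances. A minimal sketch, assuming an illustrative 32Gbaud symbol rate typical of this generation of coherent DSPs (raw rates shown; payload rates after overhead are lower):

```python
import math

# Dual-polarization coherent rate = symbol rate x bits/symbol x 2 polarizations.
# The 32Gbaud figure is an illustrative assumption, not from the report.

def coherent_rate_gbps(gbaud, constellation_points, polarizations=2):
    """Raw coherent line rate in Gbit/s before FEC/framing overhead."""
    return gbaud * math.log2(constellation_points) * polarizations

print(coherent_rate_gbps(32, 4))   # DP-QPSK  -> 128 Gbit/s raw (~100G class)
print(coherent_rate_gbps(32, 16))  # DP-16QAM -> 256 Gbit/s raw (~200G class)
```

The same transceiver can thus be provisioned for long reach at a lower rate or short reach at a higher rate, which is what makes a single module usable across metro and long-haul DCI.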

Heavy Reading's new report, 400G Components Come Out of the Shadows, identifies key 400G technology and component developments, reviewing vendors and the latest product introductions and announcements. The report profiles 19 vendors with 400G components for connections within the data center and between data centers. The report includes not only information on the vendors and components but also insights into how the overall 400G market and ecosystem is developing.

Physical layer devices supporting 400G applications have been announced by Applied Micro, Broadcom, Credo, GigPeak, Inphi, Macom and MultiPhy. Many of these devices support PAM4 modulation using DSP or analog implementations. MaxLinear and several of the other companies have introduced trans-impedance amplifiers (TIAs) and laser drivers. Finisar, Molex, NeoPhotonics and TE Connectivity have developed optical modules or active optical cables for 400G applications within data centers and enterprise networks. MoSys has introduced a programmable search engine for applications up to 800 Gbit/s, and Xilinx and Altera have introduced, or are developing, multiple FPGA devices and intellectual property that can be used to implement 400G connections within the data center.

400G flexi-rate coherent optical modules are available from Acacia, and ClariPhy is developing a third-generation DSP-based coherent transceiver device for 400G DCI and metro applications. Fujitsu Optical Components and NeoPhotonics have demonstrated optical components for 400G DCI, metro and long-haul applications. Microsemi is shipping a 400G multiservice processor, and both Altera and Xilinx have multiple FPGA devices and intellectual property for DCI and metro applications.

The rapidly growing I/O bandwidth to servers is forcing service providers to look at 400G connections within and between data centers. Now that much of the development work for 100G is complete, component vendors are introducing solutions for 400G. The initial 400G deployments are using these first-generation components, and all of the vendors are developing second-generation devices ready for widespread deployment in 2018 and 2019.

— Simon Stanley, Analyst at Large, Heavy Reading
