400Gbit/s bandwidth granularity is not only feasible, but already useful today.

Eduard Beier

August 21, 2013

The 400Gbit/s Demonstrator

A collaboration between research institutions and industrial partners demonstrated at ISC'13 that 400Gbit/s bandwidth granularity is not only feasible, but already useful today.

For demonstration purposes, a number of innovative technologies, such as a 400Gbit/s DWDM Super Channel, high-speed flash memory and a distributed parallel file system, are used in combination.

The project is set up to use real data and realistic applications instead of yet another "hero experiment" with test generators and analyzers. Initial performance tests were performed to check the operational condition of all components working together in the demonstrator. Synthetic data and load were put on the connectivity and IT components and showed good overall operational condition.

Then two applications were activated on the demonstrator:

  • Climate Research, with centralized computing accessing distributed data

  • Turbine Development, with multi-stage processing dynamically shifting big data in both directions between Munich and Dresden (620km of standard fiber) over a single 400Gbit/s Super Channel wavelength.

The research community is represented by: the German Aerospace Center (DLR); the Leibniz Supercomputing Centre (LRZ); the Max Planck Institute for Meteorology (MPI-M) of the Max Planck Society and the computing center of the Max Planck Society in Garching (RZG); the computing center of the Technical University of Freiberg (RZ TUBAF); and the Center for Information Services and High Performance Computing (ZIH) at Dresden University of Technology.

Industrial partners are: Alcatel-Lucent (NYSE: ALU), Barracuda Networks Inc., Bull SA, ClusterVision, EMC, IBM Corp. (NYSE: IBM), Mellanox Technologies Ltd. (Nasdaq: MLNX), Deutsche Telekom AG (NYSE: DT), and T-Systems International GmbH.

The fundamental network structure of the Demonstrator is shown in the following figure:

Figure 1: Network Structure

The compute clusters on both sides are clients of the distributed parallel file system, which consists of 12 servers with three flash memory boards in each server. Both clusters are able to read and write at 400 Gbit/s on that file system.
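
The article does not describe the benchmark tooling behind these read/write tests, but the principle is easy to reproduce. The Python sketch below is a minimal, hypothetical example: several processes write large sequential streams to an assumed parallel file system mount (/gpfs/demo is a placeholder) and the aggregate rate is reported in Gbit/s; on the demonstrator, such streams would run from many client nodes in parallel.

# Minimal throughput sketch (assumptions: a parallel file system mounted at
# /gpfs/demo, a placeholder path, with enough free space for the test files).
import os
import time
from multiprocessing import Pool

MOUNT = "/gpfs/demo"          # placeholder mount point of the shared file system
BLOCK = 8 * 1024 * 1024       # 8 MiB per write call
BLOCKS_PER_WORKER = 512       # 4 GiB written per worker
WORKERS = 16                  # parallel write streams on this client node

def write_stream(worker_id: int) -> int:
    """Write one large file in big sequential blocks; return bytes written."""
    path = os.path.join(MOUNT, f"stream_{worker_id}.dat")
    buf = os.urandom(BLOCK)
    with open(path, "wb", buffering=0) as f:
        for _ in range(BLOCKS_PER_WORKER):
            f.write(buf)
        os.fsync(f.fileno())   # make sure the data has really left this node
    return BLOCK * BLOCKS_PER_WORKER

if __name__ == "__main__":
    start = time.time()
    with Pool(WORKERS) as pool:
        total = sum(pool.map(write_stream, range(WORKERS)))
    elapsed = time.time() - start
    print(f"wrote {total / 1e9:.1f} GB in {elapsed:.1f} s "
          f"-> {total * 8 / elapsed / 1e9:.1f} Gbit/s")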

Standard Sandy Bridge servers with at least five PCIe 3.0 x8 slots and 128GB of DRAM are used. The design goal was about 18 GBytes/s of sustained duplex data rate per node (Ethernet: 5 GBytes/s, memory: 6 GBytes/s, fabric: 7 GBytes/s). The fabric data rate has not yet been tested; the other rates have been confirmed by tests.
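
As a rough cross-check of these design figures against the 400Gbit/s target, the following back-of-the-envelope sketch uses the per-node rates quoted above and the 12 file system servers (decimal units assumed):

# Back-of-the-envelope check of the per-node design rates quoted above
# (assumption: 12 file system servers, decimal units).
ETHERNET = 5    # GBytes/s per node towards the network
MEMORY = 6      # GBytes/s per node towards the flash memory boards
FABRIC = 7      # GBytes/s per node towards the cluster fabric
SERVERS = 12

per_node = ETHERNET + MEMORY + FABRIC        # 18 GBytes/s design goal
target_gbytes = 400 / 8                      # 400 Gbit/s = 50 GBytes/s

print(f"per-node sustained duplex rate: {per_node} GBytes/s")
print(f"aggregate Ethernet capacity: {ETHERNET * SERVERS} GBytes/s "
      f"vs. {target_gbytes:.0f} GBytes/s needed for 400 Gbit/s")
print(f"aggregate memory capacity: {MEMORY * SERVERS} GBytes/s")

With roughly 60 GBytes/s of aggregate Ethernet capacity and 72 GBytes/s on the memory side against the 50 GBytes/s needed for 400 Gbit/s, the design leaves a visible margin on both paths.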

For performance margin reasons, the setup is moderately overbooked between memory and network. Below are the total theoretical performance numbers.

Figure 2: Theoretical Throughput

The predecessor project in 2010 (the 100Gbit/s Testbed Dresden-Freiberg) required arrays of about 800 spinning disks on each side to form a 100Gbit/s data path. Because that setup would not scale to 400 Gbit/s, the flash memory board with 2 GBytes/s of I/O throughput is one of the enablers of this 400Gbit/s project.
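
To put that scaling argument in numbers, here is a small, illustrative calculation (assuming linear scaling of the 2010 spinning-disk setup and the per-board rate quoted above):

# Illustrative scaling comparison (assumptions: linear scaling of the 2010
# spinning-disk setup; 3 flash boards per server across 12 servers).
DISKS_FOR_100G = 800        # spinning disks per side in the 2010 testbed
FLASH_BOARD = 2             # GBytes/s of I/O throughput per flash board
BOARDS = 3 * 12             # boards per server times number of servers

disks_for_400g = DISKS_FOR_100G * 4          # ~3200 disks per side
flash_total = FLASH_BOARD * BOARDS           # 72 GBytes/s aggregate
needed = 400 / 8                             # 50 GBytes/s for 400 Gbit/s

print(f"spinning disks per side at 400 Gbit/s (linear scaling): ~{disks_for_400g}")
print(f"flash boards per side: {BOARDS}, aggregate {flash_total} GBytes/s "
      f"vs. {needed:.0f} GBytes/s needed")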

For the ISC configuration (shown in the figure below), the following systems were added: a 10GbE cluster in Dresden; a 2x100GbE link to Freiberg (not connected during ISC); in Garching, the SuperMUC at LRZ (one of the fastest supercomputers in Europe) and a 480 GBytes cluster (> 2,500 cores) at RZG; and a commercial cloud service from T-Systems (not connected during ISC).

Figure 3: The Big Picture

The project also links to SDN and NFV, in particular because it is very active in the definition of the NSI (Network Service Interface). (For further information, see the link below to additional project information.)

Because the "distributed high-speed GPFS" approach is fairly universal in nature (e.g. for HPC data center backup and HPC workload distribution), the setup will be tested for commercial applicability during the post-ISC phase. The ability to use network functions such as encryption, firewalling and data compression is definitely a must in a commercial case.

Network appliances for 40 Gbit/s and 100 Gbit/s are either not available or, in many cases, not affordable. Therefore we are going to test virtualized network functions on standard server hardware (see the figure below); that additional "module," which is based on the same server hardware as the other servers, sits between each server and the router on each side.

Figure 4: NFV Module
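
One open question for the post-ISC phase is whether such virtualized functions can keep up with the data rates involved. The sketch below is a hypothetical single-core feasibility test, not part of the project's tooling: it measures raw AES-CTR encryption throughput on a commodity server using the third-party Python 'cryptography' package (the cipher choice and buffer sizes are assumptions for illustration).

# Hypothetical feasibility check: single-core AES-CTR encryption throughput
# on a standard server (requires the third-party 'cryptography' package).
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)                  # AES-256 key
nonce = os.urandom(16)                # CTR nonce
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

chunk = os.urandom(16 * 1024 * 1024)  # 16 MiB per call
rounds = 64                           # 1 GiB encrypted in total

start = time.time()
for _ in range(rounds):
    encryptor.update(chunk)
elapsed = time.time() - start

print(f"single-core AES-CTR throughput: "
      f"{len(chunk) * rounds * 8 / elapsed / 1e9:.1f} Gbit/s")

Scaling such a figure across the cores of one NFV module gives a first indication of whether software-based encryption at a few hundred Gbit/s is realistic on this class of hardware.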

Please feel free to ask questions about any technical aspect of the project. The project team, which consists of some of the best specialists in their respective fields, will be happy to answer. For additional project information: http://tu-dresden.de/zih/400gigabit.

Figure 5: The Partners

— Eduard Beier, process systems engineer, T-Systems International


About the Author(s)

Eduard Beier

Eduard Beier is an experienced communications networking engineer at T-Systems International, where he works as part of the Solution Design team with a focus on Network & Communication Voice & Security. He has worked at T-Systems, part of Deutsche Telekom, since 1995, on projects as diverse as Dante's GEANT, CERN and the 100G Testbed Dresden-Freiberg (2010).
