
The 400Gbit/s Demonstrator

Eduard Beier
8/21/2013

A collaboration between research institutions and industrial partners demonstrated at the ISC'13 that 400Gbit/s bandwidth granularity is not only feasible, but already useful today.

For demonstration purposes, a number of innovative technologies, such as a 400Gbit/s DWDM Super Channel, high-speed flash memory and a distributed parallel file system, are used in combination.

The project is set up with real data and realistic applications instead of yet another "hero experiment" with test generators and analyzers. Initial performance tests were performed to check the operational condition of all components working together in the demonstrator. Synthetic data and load were put on the connectivity and IT components and showed good overall operational condition.

Then two applications were activated on the demonstrator:

  • Climate Research with centralized computing accessing distributed data

  • Turbine Development with multi-stage processing, dynamically shifting big data back and forth between Munich and Dresden (620km of standard fiber) over a single 400Gbit/s Super Channel (see the latency sketch below).
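
Moving big data in both directions over that distance is not just a bandwidth problem but also a latency problem. As a rough illustration (not a project measurement), the following Python sketch estimates the propagation delay and the amount of data in flight on a 620km, 400Gbit/s path, assuming a typical group index of about 1.47 for standard single-mode fiber:

```python
# Back-of-the-envelope only; group index and path length are assumptions.
C_VACUUM = 299_792_458            # speed of light in vacuum, m/s
GROUP_INDEX = 1.47                # typical for standard single-mode fiber
PATH_KM = 620                     # Munich <-> Dresden fiber route
LINE_RATE_BPS = 400e9             # 400 Gbit/s Super Channel

one_way_s = PATH_KM * 1_000 / (C_VACUUM / GROUP_INDEX)
rtt_s = 2 * one_way_s
bdp_bytes = LINE_RATE_BPS * rtt_s / 8      # bandwidth-delay product

print(f"one-way latency        : {one_way_s * 1e3:.1f} ms")    # ~3.0 ms
print(f"round-trip time        : {rtt_s * 1e3:.1f} ms")        # ~6.1 ms
print(f"bandwidth-delay product: {bdp_bytes / 1e9:.2f} GBytes in flight")
```

Roughly 0.3 GBytes are "on the wire" at any time, which is why large windows or many parallel streams are needed to keep such a path full.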

Involved from the research community are: the German Aerospace Center (DLR); the Leibniz Supercomputing Centre (LRZ); the Max Planck Institute for Meteorology (MPI-M) of the Max Planck Society and the computing center of the Max Planck Society in Garching (RZG); the computer center of the Technical University of Freiberg (RZ TUBAF); and the Center for Information Services and High Performance Computing of Dresden University of Technology (ZIH).

Industrial partners are: Alcatel-Lucent (NYSE: ALU), Barracuda Networks Inc., Bull SA, Cluster Vision, EMC2, IBM Corp. (NYSE: IBM), Mellanox Technologies Ltd. (Nasdaq: MLNX), Deutsche Telekom AG (NYSE: DT), and T-Systems International GmbH.

The fundamental network structure of the Demonstrator is shown in the picture below:

    Network Structure

The compute clusters on both sides are clients of the distributed parallel file system, which consists of 12 servers with three flash memory boards in each server. Both clusters are able to read from and write to that file system at 400 Gbit/s.

Standard Sandy Bridge servers with at least five PCIe3.0x8 slots and 128GB DRAM are used. The design goal was about 18 GByte/s of sustained duplex data rate per node (Ethernet: 5 GByte/s, memory: 6 GByte/s, fabric: 7 GByte/s). The fabric data rate has not yet been tested; the other rates have been confirmed by tests.

    For performance margin reasons the setup is moderately overbooked between memory and network. Below are the total theoretical performance numbers.

    Theoretical Throughput
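
As a rough cross-check of those numbers, the following Python sketch aggregates the per-node design rates quoted above across the 12 file-system servers and compares them with the 400Gbit/s wide-area path; the reading of where the overbooking sits is an interpretation of these figures, not a project statement:

```python
# Cross-check of the design figures quoted in the text (12 servers,
# per-node rates in GByte/s); the overbooking reading is an interpretation.
SERVERS = 12
PER_NODE = {"ethernet": 5, "memory": 6, "fabric": 7}   # GByte/s per node

per_node_total = sum(PER_NODE.values())                # 18 GByte/s design goal
aggregate = {name: rate * SERVERS for name, rate in PER_NODE.items()}
wan_gbyte_s = 400 / 8                                  # 400 Gbit/s as GByte/s

print(f"per-node total      : {per_node_total} GByte/s")
for name, rate in aggregate.items():
    print(f"aggregate {name:<10}: {rate} GByte/s")
print(f"400 Gbit/s WAN path : {wan_gbyte_s:.0f} GByte/s")
# Memory aggregate (72) exceeds Ethernet aggregate (60), which in turn exceeds
# the 50 GByte/s wide-area path -- one reading of the "moderate overbooking"
# between memory and network mentioned above.
```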

The predecessor project in 2010 (100Gbit/s Testbed Dresden Freiberg) required arrays of about 800 spinning disks on each side to form a 100Gbit/s data path. Because that approach does not scale to 400 Gbit/s, the flash memory board with 2 GByte/s of I/O throughput is one of the enablers of this 400Gbit/s project.
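
Assuming roughly linear scaling from the 2010 testbed (an assumption, not a measured figure), a short sketch makes the motivation for flash concrete; the per-side disk count and the 2 GByte/s per-board figure are taken from the text above:

```python
# Scaling comparison; assumes the 2010 disk count scales linearly with rate.
DISKS_PER_SIDE_100G = 800            # from the 2010 100Gbit/s testbed
disks_per_side_400g = DISKS_PER_SIDE_100G * 4           # ~3200 spinning disks

flash_boards_per_side = 12 * 3        # 12 servers, 3 flash boards each
flash_gbyte_s = flash_boards_per_side * 2               # 2 GByte/s per board
flash_gbit_s = flash_gbyte_s * 8                        # 576 Gbit/s > 400

print(f"spinning disks needed per side: ~{disks_per_side_400g}")
print(f"flash boards per side         : {flash_boards_per_side}")
print(f"aggregate flash throughput    : {flash_gbit_s} Gbit/s")
```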

For the ISC configuration (in the picture below), the following systems were added: a 10GbE cluster in Dresden; a 2x100GbE link to Freiberg (not connected during ISC); in Garching, the SuperMUC at LRZ (one of the fastest supercomputers in Europe) and a 480 GBytes cluster (>2,500 cores) at RZG; and a commercial cloud service from T-Systems (not connected during ISC).

    The Big Picture

The project links to SDN and NFV -- in particular, it is very active in the definition of NSI (Network Service Interface). (For further information, see the link to additional project information below.)

Because the "distributed high-speed GPFS" approach is fairly universal (e.g. for HPC data-center backup and HPC workload distribution), the setup will be tested for commercial applicability during the post-ISC phase. The ability to use network functions such as encryption, firewalling and data compression is a must for commercial use.

Network appliances for 40Gbit/s and 100Gbit/s are in many cases either not available or not affordable. Therefore we are going to test virtualized network functions on standard server hardware (see picture below); this additional "module," based on the same server hardware as the other servers, sits between the servers and the router on each side.

    NFV Module
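
To give a feel for the sizing of such a module, here is a hypothetical back-of-the-envelope sketch of how many standard servers might be needed to run one virtualized function (encryption) at line rate. The per-core crypto throughput is an assumed figure for AES-NI-class hardware, not a project measurement; the 5 GByte/s per-server Ethernet rate is the per-node design figure quoted earlier:

```python
# Hypothetical sizing helper; per-core crypto throughput is an assumption.
def nfv_servers_needed(line_rate_gbit_s, nic_gbyte_s, cores, crypto_gbyte_s_per_core):
    """How many NFV module servers are needed to carry one function at line rate."""
    line_gbyte_s = line_rate_gbit_s / 8
    per_server = min(nic_gbyte_s, cores * crypto_gbyte_s_per_core)
    return int(-(-line_gbyte_s // per_server))          # ceiling division

# Assumed parameters: 16 cores per server, ~1.5 GByte/s of AES per core;
# 5 GByte/s of Ethernet per server is the per-node design figure from the text.
print(nfv_servers_needed(400, nic_gbyte_s=5, cores=16, crypto_gbyte_s_per_core=1.5))
# -> 10 servers, limited by the NIC rather than by the CPU in this scenario
```

Under these assumptions the NIC, not the CPU, is the limiting factor, which fits the idea of simply reusing the same server hardware as the other nodes.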

Please feel free to ask questions about any technical aspect of the project. The project team, which consists of some of the best specialists in their respective fields, will be happy to answer. For additional project information: http://tu-dresden.de/zih/400gigabit.

    The Partners

    — Eduard Beier, process systems engineer, T-Systems International

RolfSperber
    9/26/2013 | 10:11:00 AM
    Re: SDN and NFV
Ray, we are at a very early stage, but abstracting from the hardware (and in consequence IOS) layer, utilizing a common framework (see the NSI WG in OGF) and at the same time allowing for the docking of virtualized network functions will work for multi-vendor, multi-carrier and multi-domain. Still, it's a long way to go!
Ray@LR
    9/26/2013 | 7:51:13 AM
    Re: SDN and NFV
    Rolf

    You say it is not restricted to a single domain, but is it applicable in networks that traverse multiple infrastructures run and managed by multiple network operators?
RolfSperber
    9/25/2013 | 8:43:49 AM
    Re: Is this another route in for NFV?
Requirements in industry will not be so different from those in R&D. Looking at the plans in the context of Horizon 2020, Public Private Partnership is a target of European efforts. Taking into account the cost of infrastructure and the prevailing attitude of paying as little as possible for utilization of infrastructure, more sophisticated multiplexing methods, and this includes NFV, are inevitable.

    For network operators this scheme means significantly reduced time to market.
RolfSperber
    9/25/2013 | 8:36:55 AM
    SDN and NFV
In this project we will be going a step further. Our plan is to create an environment that allows for creating a virtual network based on the requirements of either applications or carrier-provided network functionality. We will not be restricted to a single domain and connectivity; our target is a network created from building blocks out of a repository. These building blocks can be connectivity with certain quality parameters or virtualized network functions such as firewall functionality, compression, encryption, acceleration.
Eddie_
    8/21/2013 | 3:00:47 PM
    Re: Is this another route in for NFV?
    NFV simply scales better than HW based approaches (if NFV tests in September show good results).

    A possible 2013 roadmap for the project:
    • fully SDN controlled network
    • 200GBit/s datapath
    • NFV in the data path 

A possible 2014 roadmap for the project:
    • scale up to a 1TBit/s data path

How else could you do that?
Ray@LR
    8/21/2013 | 1:01:52 PM
    Is this another route in for NFV?
Interesting that functions virtualization takes the place of appliances that are either too expensive to deploy or have not yet been created... is NFV going to help the R&D sector more than production network operations in the early years?