
The 400Gbit/s Demonstrator

Eduard Beier
8/21/2013

A collaboration between research institutions and industrial partners demonstrated at ISC'13 that 400Gbit/s bandwidth granularity is not only feasible, but already useful today.

For the demonstration, a number of innovative technologies are used in combination: a 400Gbit/s DWDM Super Channel, high-speed flash memory and a distributed parallel file system.

The project is set up to use real data and realistic applications instead of being yet another "hero experiment" with test generators and analyzers. Initial performance tests checked that all components of the demonstrator work together: synthetic data and load were put on the connectivity and IT components and showed the setup to be in good operational condition.

Then two applications were activated on the demonstrator:

  • Climate research, with centralized computing accessing distributed data

  • Turbine development, with multi-stage processing that dynamically shifts big data between Munich and Dresden (620km of standard fiber) over a single 400Gbit/s Super Channel wavelength (a back-of-the-envelope view of that rate follows below).
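
To put a single 400Gbit/s wavelength into perspective, here is a back-of-the-envelope calculation of how quickly bulk data can be shifted between the two sites at line rate. The 100 TB dataset size is a purely illustrative assumption, not a figure from the project.

```python
# Back-of-the-envelope transfer time over a single 400Gbit/s Super Channel wavelength.
# The dataset size is an illustrative assumption, not a project figure.
LINK_RATE_GBIT_S = 400
DATASET_TB = 100  # hypothetical big-data volume to shift between Munich and Dresden

dataset_bits = DATASET_TB * 1e12 * 8
seconds = dataset_bits / (LINK_RATE_GBIT_S * 1e9)
print(f"{DATASET_TB} TB at {LINK_RATE_GBIT_S} Gbit/s takes about {seconds / 60:.0f} minutes")
# -> roughly 33 minutes at line rate, ignoring protocol overhead
```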

    From the research community, the partners involved are: the German Aerospace Center (DLR); the Leibniz Supercomputing Centre (LRZ); the Max Planck Institute for Meteorology (MPI-M) of the Max Planck Society and the computing center of the Max Planck Society in Garching (RZG); the computing center of TU Bergakademie Freiberg (RZ TUBAF); and the Center for Information Services and High Performance Computing at Dresden University of Technology (ZIH).

    Industrial partners are: Alcatel-Lucent (NYSE: ALU), Barracuda Networks Inc., Bull SA, Cluster Vision, EMC2, IBM Corp. (NYSE: IBM), Mellanox Technologies Ltd. (Nasdaq: MLNX), Deutsche Telekom AG (NYSE: DT), and T-Systems International GmbH.

    The fundamental network structure of the Demonstrator is shown in the following picture:

    [Figure: Network Structure]

    The compute clusters on both sides are clients of the distributed parallel file system, which consists of 12 servers with three flash memory boards in each server. Both clusters are able to read and write at 400 Gbit/s on that file system.

    Standard Sandy Bridge servers with at least five PCIe 3.0 x8 slots and 128GB of DRAM are used. The design goal was about 18 GBytes/s of sustained duplex data rate per node (Ethernet: 5 GBytes/s, Memory: 6 GBytes/s, Fabric: 7 GBytes/s). The fabric data rate has not yet been tested; the other rates have been confirmed by tests.
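
A rough sanity check of that per-node budget against the nominal bandwidth of a PCIe 3.0 x8 slot (about 7.9 GBytes/s per direction, a generic figure rather than a project measurement) is sketched below; presumably the three 2 GBytes/s flash boards per server account for the 6 GBytes/s memory figure.

```python
# Rough per-node bandwidth budget check against nominal PCIe 3.0 x8 slot bandwidth.
# The PCIe figure (~985 MB/s per lane x 8 lanes, per direction) is a generic number,
# not a measurement from the demonstrator.
PCIE3_X8_GBYTES_S = 7.9

budget = {"Ethernet": 5, "Memory": 6, "Fabric": 7}  # GBytes/s, from the design goal
total = sum(budget.values())

print(f"Per-node design goal: {total} GBytes/s duplex")
for role, rate in budget.items():
    print(f"  {role}: {rate} GBytes/s ({rate / PCIE3_X8_GBYTES_S:.0%} of one PCIe 3.0 x8 slot)")
```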

    For performance margin reasons the setup is moderately overbooked between memory and network. Below are the total theoretical performance numbers.

    [Figure: Theoretical Throughput]

    The predecessor project in 2010 (the 100Gbit/s Testbed Dresden-Freiberg) required arrays of about 800 spinning disks on each side to form a 100Gbit/s data path. Because that approach does not scale to 400 Gbit/s, the flash memory board with 2 GBytes/s of I/O throughput is one of the enablers of this 400Gbit/s project.
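
Using only figures quoted in this article, a quick sketch of the aggregate arithmetic (ignoring file system and protocol overhead) shows the margin the flash boards provide over the 400Gbit/s target:

```python
# Aggregate flash I/O of the file system servers vs. the 400Gbit/s link target.
# All figures are taken from the article; overheads are ignored.
SERVERS = 12
BOARDS_PER_SERVER = 3
BOARD_GBYTES_S = 2          # I/O throughput per flash memory board

aggregate = SERVERS * BOARDS_PER_SERVER * BOARD_GBYTES_S  # 72 GBytes/s
target = 400 / 8                                          # 400 Gbit/s = 50 GBytes/s

print(f"Aggregate flash I/O: {aggregate} GBytes/s vs. link target: {target:.0f} GBytes/s")
print(f"Headroom: {aggregate / target:.2f}x")             # about 1.4x over line rate
```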

    For the ISC configuration (shown in the picture below), the following systems were added: a 10GbE cluster in Dresden; a 2x100GbE link to Freiberg (not connected during ISC); in Garching, the SuperMUC at LRZ (one of the fastest supercomputers in Europe) and a 480 GBytes cluster (more than 2,500 cores) at RZG; and a commercial cloud service from T-Systems (not connected during ISC).

    [Figure: The Big Picture]

    The project links to SDN and NFV, in particular because it is very active in the definition of NSI (Network Service Interface). (For further information, see the link to additional project information below.)

    Because the "distributed high-speed GPFS" approach is fairly universal (e.g. for HPC datacenter backup and HPC workload distribution), the setup will be tested for commercial applicability during the post-ISC phase. In a commercial scenario, the ability to use network functions such as encryption, firewalling and data compression is a must.

    Network appliances for 40 Gbit/s and 100 Gbit/s are either not available or, in many cases, not affordable. Therefore we are going to test virtualized network functions on standard server hardware (see the picture below); this additional "module," which is based on the same server hardware as the other servers, sits between each server and the router on each side.

    [Figure: NFV Module]
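
As an illustration of the kind of question such tests have to answer, the sketch below measures the single-core throughput of one software network function (compression via zlib). It is a generic, simplified example rather than the project's actual test harness; encryption or packet filtering would be benchmarked in a similar way, and the interesting part is how such functions scale across the cores and NICs of a standard server.

```python
# Illustrative single-core throughput measurement for one software network function
# (compression). This is a generic sketch, not the demonstrator's test setup.
import os
import time
import zlib

CHUNK = os.urandom(1 << 20) * 16   # 16 MiB of (incompressible) test data
ROUNDS = 32

start = time.perf_counter()
for _ in range(ROUNDS):
    zlib.compress(CHUNK, 1)        # level 1 = fastest compression
elapsed = time.perf_counter() - start

gbytes = ROUNDS * len(CHUNK) / 1e9
print(f"zlib level 1 on one core: {gbytes / elapsed:.2f} GBytes/s")
```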

    Please feel free to ask questions about any technical aspect of the project. The project team, which consists of some of the best specialists in their respective fields, will be happy to answer. For additional project information: http://tu-dresden.de/zih/400gigabit.

    [Figure: The Partners]

    — Eduard Beier, process systems engineer, T-Systems International

    Comments (6)
    RolfSperber,
    User Rank: Light Beer
    9/26/2013 | 10:11:00 AM
    Re: SDN and NFV
    Ray, we are at a very early stage, but abstracting from the hardware (and, in consequence, the IOS) layer, utilizing a common framework (see the NSI WG in the OGF) and at the same time allowing for the docking of virtualized network functions will work for multi-vendor, multi-carrier and multi-domain scenarios. Still, it's a long way to go!
    Ray@LR,
    User Rank: Blogger
    9/26/2013 | 7:51:13 AM
    Re: SDN and NFV
    Rolf

    You say it is not restricted to a single domain, but is it applicable in networks that traverse multiple infrastructures run and managed by multiple network operators?
    RolfSperber,
    User Rank: Light Beer
    9/25/2013 | 8:43:49 AM
    Re: Is this another route in for NFV?
    Requirements in industry will not be so different from those in R&D. Looking at the plans in the context of Horizon 2020, public-private partnership is a target of European efforts. Taking into account the cost of infrastructure and the prevailing attitude of paying as little as possible for its utilization, more sophisticated multiplexing methods, and this includes NFV, are inevitable.

    For network operators this scheme means significantly reduced time to market.
    RolfSperber,
    User Rank: Light Beer
    9/25/2013 | 8:36:55 AM
    SDN and NFV
    In this project we will be going a step further. Our plan is to create an environment that allows for creating a virtual network based on the requirements of either applications or carrier-provided network functionality. We will not be restricted to a single domain and connectivity; our target is a network created from building blocks out of a repository. These building blocks can be connectivity with certain quality parameters or virtualized network functions such as firewall functionality, compression, encryption and acceleration.

    Eddie_,
    User Rank: Blogger
    8/21/2013 | 3:00:47 PM
    Re: Is this another route in for NFV?
    NFV simply scales better than HW-based approaches (if the NFV tests in September show good results).

    A possible 2013 roadmap for the project:
    • fully SDN-controlled network
    • 200Gbit/s data path
    • NFV in the data path

    A possible 2014 roadmap for the project:
    • scale up to a 1Tbit/s data path

    How else could you do that?

    Ray@LR,
    User Rank: Blogger
    8/21/2013 | 1:01:52 PM
    Is this another route in for NFV?
    Interesting that functions virtualization takes the place of appliances that are either too expensive to deploy or have not yet been created... is NFV going to help the R&D sector more than production network operations in the early years?