
The 400Gbit/s Demonstrator

Eduard Beier
8/21/2013

A collaboration between research institutions and industrial partners demonstrated at ISC'13 that 400Gbit/s bandwidth granularity is not only feasible but already useful today.

For demonstration purposes, a number of innovative technologies, such as a 400Gbit/s DWDM Super Channel, high-speed flash memory and a distributed parallel file system, were used in combination.

The project was set up with real data and realistic applications instead of yet another "hero experiment" with test generators and analyzers. Initial performance tests checked the operational condition of all components working together in the demonstrator. Synthetic data and load were put on the connectivity and IT components and showed good overall operational condition.

Then two applications were activated on the demonstrator:

  • Climate research, with centralized computing accessing distributed data

  • Turbine development, with multi-stage processing dynamically shifting big data between Munich and Dresden (620km of standard fiber) over a single 400Gbit/s wavelength Super Channel.

Involved from the research community are: the German Aerospace Center (DLR); the Leibniz Supercomputing Centre (LRZ); the Max Planck Institute for Meteorology (MPI-M) of the Max Planck Society and the computing center of the Max Planck Society in Garching (RZG); the computing center of the Technical University of Freiberg (RZ TUBAF); and the Center for Information Services and High Performance Computing (ZIH) at Dresden University of Technology.

Industrial partners are: Alcatel-Lucent (NYSE: ALU), Barracuda Networks Inc., Bull SA, ClusterVision, EMC, IBM Corp. (NYSE: IBM), Mellanox Technologies Ltd. (Nasdaq: MLNX), Deutsche Telekom AG (NYSE: DT), and T-Systems International GmbH.

The fundamental network structure of the Demonstrator is shown in the following picture:

Network Structure

The compute clusters on both sides are clients of the distributed parallel file system, which consists of 12 servers with three flash memory boards in each server. Both clusters are able to read and write at 400 Gbit/s on that file system.
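As a rough cross-check (using the per-board I/O rate of about 2 GBytes/s given later in the article), the aggregate flash bandwidth behind the file system can be compared with what a 400Gbit/s link requires; a minimal Python sketch:

```python
# Rough aggregate-throughput estimate for the distributed file system.
# Figures from the article: 12 servers, 3 flash boards per server,
# ~2 GBytes/s of I/O throughput per board.

SERVERS = 12
BOARDS_PER_SERVER = 3
GBYTES_PER_BOARD = 2.0                # GBytes/s per flash memory board

aggregate_gbytes = SERVERS * BOARDS_PER_SERVER * GBYTES_PER_BOARD
required_gbytes = 400 / 8             # a 400 Gbit/s link is 50 GBytes/s

print(f"aggregate flash throughput: {aggregate_gbytes:.0f} GBytes/s")
print(f"needed for 400 Gbit/s:      {required_gbytes:.0f} GBytes/s")
print(f"headroom factor:            {aggregate_gbytes / required_gbytes:.2f}")
```

The resulting headroom over the 50 GBytes/s line rate leaves margin for file-system and protocol overhead.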

Standard Sandy Bridge servers with at least five PCIe 3.0 x8 slots and 128GB of DRAM are used. The design goal was about 18 GBytes/s of sustained duplex data rate per node (Ethernet: 5 GBytes/s, Memory: 6 GBytes/s, Fabric: 7 GBytes/s). The fabric data rate has not yet been tested; the other rates have been confirmed by tests.
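The per-node goal can be sanity-checked against raw PCIe capacity. The sketch below uses nominal PCIe 3.0 numbers (8 GT/s per lane with 128b/130b encoding), which are standard figures and an assumption here, not project measurements:

```python
# Per-node bandwidth budget check for the Sandy Bridge servers.
# Assumption: nominal PCIe 3.0 figures (8 GT/s per lane, 128b/130b
# encoding); these are standard values, not project measurements.

LANE_GBYTES = 8 * (128 / 130) / 8    # ~0.985 GBytes/s per lane, one direction
slot_gbytes = 8 * LANE_GBYTES        # PCIe 3.0 x8 slot: ~7.9 GBytes/s
node_slots = 5                       # at least five x8 slots per server

# Design goal from the article: 18 GBytes/s sustained duplex per node.
goal = {"Ethernet": 5, "Memory": 6, "Fabric": 7}   # GBytes/s each
total_goal = sum(goal.values())                     # 18 GBytes/s

pcie_capacity = node_slots * slot_gbytes            # ~39.4 GBytes/s per direction
print(f"per-node goal: {total_goal} GBytes/s, "
      f"raw capacity of five x8 slots: {pcie_capacity:.1f} GBytes/s")
```

The raw slot capacity comfortably exceeds the design goal, which is plausible given that Ethernet NICs, fabric adapters and flash boards all share those slots.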

For performance margin reasons the setup is moderately overbooked between memory and network. Below are the total theoretical performance numbers.

Theoretical Throughput

The predecessor project in 2010 (100Gbit/s Testbed Dresden-Freiberg) required arrays of about 800 spinning disks on each side to form a 100Gbit/s data path. Because that setup was, for scaling reasons, not possible for 400 Gbit/s, the flash memory board with 2 GBytes/s of I/O throughput is one of the enablers of this 400Gbit/s project.
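Rough arithmetic makes the scaling argument concrete; the linear extrapolation below is an illustration, not a project measurement:

```python
# Why spinning-disk arrays stop scaling, using figures from the article:
# ~800 disks per side sustained the 100 Gbit/s (12.5 GBytes/s) data path.

DISKS_100G = 800
path_gbytes = 100 / 8                               # 12.5 GBytes/s

# Effective sustained rate each disk had to deliver in the 2010 setup.
per_disk_mbytes = path_gbytes * 1000 / DISKS_100G   # ~15.6 MBytes/s

# Linear extrapolation to 400 Gbit/s versus the flash-based setup.
disks_400g = DISKS_100G * 4                         # ~3200 spinning disks per side
flash_boards_400g = 12 * 3                          # 36 flash boards in total

print(f"effective per-disk rate in 2010: {per_disk_mbytes:.1f} MBytes/s")
print(f"disks for 400 Gbit/s: {disks_400g} vs. flash boards: {flash_boards_400g}")
```

Several thousand spindles per side, with their cabling, enclosures and failure rates, is exactly the setup the article calls impractical to scale.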

For the ISC configuration (in the picture below), the following systems were added: a 10GbE cluster in Dresden; a 2x100GbE link to Freiberg (not connected during ISC); in Garching the SuperMUC at LRZ (one of the fastest supercomputers in Europe) and a 480 GBytes cluster (> 2500 cores) at RZG; and a commercial Cloud Service from T-Systems (not connected during ISC).

The Big Picture

The project links to SDN and NFV -- in particular as it is very active in the NSI (Network Service Interface) definition. (For further information, see the link below to additional project information.)

Because the "distributed high-speed GPFS" approach is of a somewhat universal nature (e.g. for HPC datacenter backup and HPC workload distribution), the setup will be tested for commercial applicability during the post-ISC phase. The ability to use network functions like encryption, firewalling and data compression is definitely a must in a commercial case.

Network appliances for 40 Gbit/s and 100 Gbit/s are either not available or not affordable in many cases. Therefore we are going to test virtualized network functions on standard server hardware (see picture below); that additional "module," which is based on the same server hardware as the other servers, sits between each server and the router on each side.

NFV Module
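As an illustration of the pattern only (the function names and the HMAC stand-in below are invented for this sketch and are not part of the project's software), a chain of network functions running in software over a byte stream might look like this:

```python
# Toy NFV-style chain: each "function" transforms a byte stream in software,
# the way the demonstrator's NFV module sits between server and router.
# zlib and an HMAC tag stand in for real compression/encryption appliances.
import hashlib
import hmac
import zlib

KEY = b"demo-key"  # illustrative only; a real deployment would manage keys properly

def compress(buf: bytes) -> bytes:
    return zlib.compress(buf)

def decompress(buf: bytes) -> bytes:
    return zlib.decompress(buf)

def seal(buf: bytes) -> bytes:
    # Append a SHA-256 HMAC tag (stand-in for an integrity/encryption function).
    return buf + hmac.new(KEY, buf, hashlib.sha256).digest()

def unseal(buf: bytes) -> bytes:
    body, tag = buf[:-32], buf[-32:]
    if not hmac.compare_digest(tag, hmac.new(KEY, body, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")
    return body

def chain(buf: bytes, *fns) -> bytes:
    # Apply the configured network functions in order.
    for fn in fns:
        buf = fn(buf)
    return buf

payload = b"climate model output " * 1000
wire = chain(payload, compress, seal)        # egress path, server -> router
restored = chain(wire, unseal, decompress)   # ingress path, router -> server
assert restored == payload
print(f"payload {len(payload)} bytes -> on the wire {len(wire)} bytes")
```

On the egress side such a module would apply the functions before the router and the ingress side would reverse the chain; a real module would of course use genuine encryption and operate on packets or streams rather than whole buffers.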

Please feel free to ask questions about any technical aspect of the project. The project team, which consists of some of the best specialists in their individual fields, will be happy to answer. For additional project information: http://tu-dresden.de/zih/400gigabit.

The Partners

Eduard Beier, process systems engineer, T-Systems International

RolfSperber, 9/26/2013 | 10:11:00 AM
Re: SDN and NFV
Ray, we are at a very early stage, but abstracting from both the hardware (and in consequence the IOS) layer and utilizing a common framework (see the NSI WG in OGF), while at the same time allowing for the docking of virtualized network functions, will work for multi-vendor, multi-carrier and multi-domain. Still, it's a long way to go!
Ray@LR, 9/26/2013 | 7:51:13 AM
Re: SDN and NFV
    Rolf

    You say it is not restricted to a single domain, but is it applicable in networks that traverse multiple infrastructures run and managed by multiple network operators?
RolfSperber, 9/25/2013 | 8:43:49 AM
Re: Is this another route in for NFV?
Requirements in industry will not be so different from those in R&D. Looking at the plans in the context of Horizon 2020, Public Private Partnership is a target of European efforts. Taking into account the cost of infrastructure and the prevailing attitude of paying as little as possible for utilization of infrastructure, more sophisticated multiplexing methods, and this includes NFV, are inevitable.

    For network operators this scheme means significantly reduced time to market.
RolfSperber, 9/25/2013 | 8:36:55 AM
SDN and NFV
In this project we will be going a step further. Our plan is to create an environment that allows for creating a virtual network based on the requirements of either applications or carrier-provided network functionality. We will not be restricted to a single domain and connectivity; our target is a network created from building blocks out of a repository. These building blocks can be connectivity with certain quality parameters or virtualized network functions such as firewall functionality, compression, encryption, or acceleration.

Eddie_, 8/21/2013 | 3:00:47 PM
Re: Is this another route in for NFV?
NFV simply scales better than HW-based approaches (if the NFV tests in September show good results).

A possible 2013 roadmap for the project:
• fully SDN-controlled network
• 200Gbit/s data path
• NFV in the data path

A possible 2014 roadmap for the project:
• scale up to a 1Tbit/s data path

How else could you do that?

Ray@LR, 8/21/2013 | 1:01:52 PM
Is this another route in for NFV?
Interesting that functions virtualization takes the place of appliances that are either too expensive to deploy or have not yet been created... is NFV going to help the R&D sector more than production network operations in the early years?