What is the best way to support both layer 2 and layer 3 services at the edge of the network that incorporates virtual network functions?

Prayson Pate, CTO, Edge Cloud, ADVA

May 8, 2015

The Case For Pure Play Virtualization

Service providers are embracing NFV to drive profitability by lowering costs and creating innovative new services. In doing so, there are many good reasons to push the resources for NFV-based Layer 3 services to the edge of the network. There is also a need to deliver Layer 2 Carrier Ethernet 2.0 (CE 2.0) services, both for end users and as the infrastructure for Layer 3 services.

The question is: What is the best way to support both Layer 2 and Layer 3 services at the edge of the network? Following are some options.

Virtualization using discrete devices
One possibility is to use two devices: an open server and an Ethernet Access Device (EAD). In this case the open server would host the Layer 3 virtual network functions (VNFs) while the EAD would terminate the network traffic and provide the Layer 2 functions.

Figure 1:

Pros:

  • This approach has the benefit of relying on existing devices and is consistent with the ETSI NFV ISG architecture.

  • The performance of existing EADs is well known and deterministic.

Cons:

  • The obvious drawback is that two separate devices are required.

Virtualization using a hybrid device
Another approach is to combine the hardware functionality of an EAD with that of an open server. This could be implemented in several ways:

1. An existing EAD could be augmented with an add-on pluggable compute module.

Figure 2:

2. An existing server could be augmented with a plug-in EAD contained in an SFP.

Figure 3:

3. A hybrid EAD/server could be constructed with separate compute and EAD elements. This approach is similar to #1 above, but has the advantages of higher-performance connections between the EAD and compute elements, as well as using a larger and higher performance compute node.

Pros:

  • The hybrid approach is an improvement over separate components in terms of unit count, size and cost.

  • The hybrid approach eases the task of ensuring deterministic packet delivery by using hardware for forwarding.

Cons:

  • This approach is necessarily limited in one respect: either in its compute power and flexibility, or in its ability to change how the EAD function behaves.

Pure-play virtualization
The previous two approaches are evolutionary. We are taking the revolutionary stance of advocating that if you are going to virtualize, go all the way. In this case, we need to virtualize the EAD, so it can run with the other service VNFs.

Figure 4:

As shown, the virtual EAD runs on a standard open server. Some advantages of this approach:

  • Consistent with ETSI NFV in its use of standard hardware and generic software VNFs.

  • Performance scales with the power of the server.

  • EAD functionality can be placed in different locations, including within data center clouds.

What about performance?
One potential drawback is performance. Can a software implementation of CE 2.0 functionality provide the needed performance in terms of throughput, loss and latency? The answer is yes -- if you use the latest tools and technologies, and design with performance in mind. We recommend that you:

  • Design for multi-core processors. This means splitting control and management from the datapath, and creating scalability by using multiple cores.

  • Use packet acceleration technologies. Candidates include SR-IOV (single root I/O virtualization) and Intel's DPDK (Data Plane Development Kit).

  • Address the performance issues of Open vSwitch (OVS). Today, replacing OVS is the only way to achieve this.

Comparison of approaches
The table below compares and contrasts the approaches described above.

| Approach                  | Separate Devices | Hybrid                | Pure Play                            |
|---------------------------|------------------|-----------------------|--------------------------------------|
| Consistency with ETSI NFV | Best             | Good                  | Best                                 |
| Cost                      | No               | Good                  | Best                                 |
| Layer 2 Throughput        | Best             | Best                  | Yes (optimized); No (Open vSwitch)   |
| Scalability               | Best             | No (requires redesign)| Best                                 |
| Applicability to Cloud    | No               | No                    | Best                                 |

Go virtual, all the way
The vision for NFV is to enable the use of standard and open compute servers to host a variety of mix-and-match software VNFs. To date, the focus has been on Layer 3 services and VNFs. Fully achieving the NFV vision means delivering these VNFs and services in a scalable, cloud-based manner to all parts of the network. That means addressing the question of Layer 2 functionality.

As described above, there are three different ways to provide Layer 2 functionality. While the discrete and hybrid approaches have some advantages (especially in the near term), fully achieving this vision requires delivering Layer 2 capabilities as a software module rather than as a hardware adjunct. Doing so enables complete flexibility in delivering services at any part of the network, and ensures that operators can ride the technology curve of open servers. The result is a simultaneous lowering of costs and enablement of new services, which combine to drive profitability.

— Prayson Pate, CTO, Overture Networks Inc.

About the Author(s)

Prayson Pate

CTO, Edge Cloud, ADVA

Prayson is passionate about helping the telecom industry go virtual. He believes network functions virtualization (NFV) is the key to profitability and evangelizes about the essential benefits achieved by using big data analytics and pure-play NFV orchestration to optimize a carrier-class virtual networking environment. Prayson speaks about NFV and software-defined networking at industry events, posts regularly to industry blogs and his own site www.praysonpate.com, and contributes articles for publication in industry media. As CTO at Overture he is focused on making NFV real and profitable for communication service providers (CSPs). Prayson is an accomplished technologist who began his career in 1983 at FiberLAN. He also worked for Bell Northern Research and Larscom before co-founding Overture in 2000.
