Validating ADVA's Virtual Switch

Independent test lab EANTC has put ADVA's Ensemble Connector virtual switch through its paces – find out what happened.

May 10, 2016


As Light Reading has already established, 2016 is set to be the year when NFV capabilities emerge from being at the heart of multiple proofs of concept (PoCs) and start to play a key role in next-generation communications service provider and large enterprise networks. (See NFV: Coming, Ready or Not!)

As a result, we're on course to witness many network operators move beyond just RFIs and RFPs and make their initial NFV-related procurement decisions.

That's why it's important for the vendor community to prove that their virtualization products are ready for commercial deployment and that they can deliver real benefits to the network operators that need to develop new business opportunities using such capabilities.

Independent test reports provide a key reference point for those operators, as Light Reading has noted before: They offer a valuable validation of capabilities that helps to shorten the operator's own evaluation cycle.

That's why ADVA opted for an independent evaluation of its virtual switch, the Ensemble Connector, which can be deployed as the customer premises equipment (CPE) component of a New IP-enabled, NFV infrastructure (NFVi) that can support commercial virtual network functions (VNFs).


Light Reading's long-time test lab partner EANTC devised the evaluation program and performed the tests, which measured the performance and scalability of the Ensemble Connector and compared it with the standard implementation of Open vSwitch.

As you will read on the following pages, which describe the test processes and results in detail, ADVA has developed a virtual switch that not only performs much better than a regular Open vSwitch implementation, but is also designed to function in virtualized service provider environments.

You can find out more about the evaluation by accessing the archived webinar, ADVA’s Virtual Switch vs OVS: Test Results.

You can also download this report as a PDF: Just click on this link to access and download the document.

So let's get to the report, which is presented over a number of pages as follows:

Page 2: EANTC's introduction and test cases

Page 3: Throughput performance

Page 4: Scalability

Page 5: Link aggregation

Page 6: VLAN functionality

Page 7: Hardware and software in the test

— The Light Reading team and Carsten Rossenhövel, managing director, European Advanced Networking Test Center AG (EANTC) (http://www.eantc.de/), an independent test lab in Berlin. EANTC offers vendor-neutral network test facilities for manufacturers, service providers, and enterprises.

EANTC's introduction and test cases
The goal of the EANTC team's evaluation was to test the performance and scalability of the Ensemble Connector platform, a customer premises equipment (CPE) virtual switch coupled with an Intel® Atom™-based virtualization (NFV) platform.

Ensemble Connector is ADVA's proprietary virtual switch (vSwitch) implementation: As our tests showed, ADVA has developed its vSwitch to the point where its performance is significantly better than that of the standard Open vSwitch implementation with the Data Plane Development Kit (DPDK) library (OVS-DPDK).

Our tests focused on a comparison of the forwarding performance of the two virtual switches, which were running on the same COTS server.

All tests were performed with a series of standard packet sizes ranging from 64 to 1518 octets, as well as a mix of packet sizes realistically resembling typical Internet traffic (IMIX). The IMIX we used in the test is defined as a proportional mix of several frame sizes, as specified in the following table:

Figure 17: IMIX frame size distribution
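
For readers who want to reproduce the weighting, the short Python sketch below computes the average frame size for an IMIX distribution. The 7:4:1 mix of 64-, 576- and 1518-byte frames used here is the commonly cited "simple IMIX" and is only an illustrative assumption; the exact proportions EANTC used are those defined in the table above.

# Illustrative only: the proportions below are the commonly cited
# "simple IMIX" (7:4:1 of 64/576/1518-byte frames), not necessarily
# the exact mix EANTC used in the test.
imix = {64: 7, 576: 4, 1518: 1}

total_frames = sum(imix.values())
avg_size = sum(size * count for size, count in imix.items()) / total_frames
print(f"Average frame size: {avg_size:.1f} bytes")  # about 355.8 bytes for this mix

# Frames per second that fit into 1 Gbit/s of line rate for this mix
# (each frame adds 20 bytes of preamble and inter-frame gap on the wire).
avg_wire_bits = sum((size + 20) * 8 * count for size, count in imix.items()) / total_frames
print(f"Maximum rate at 1 Gbit/s: {1e9 / avg_wire_bits:,.0f} frames/s")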

In addition to raw throughput performance, we also measured latency, an important factor to include when evaluating the performance of any service.

Further, we analyzed the scalability of the solution by running performance tests with different numbers of simulated MAC addresses.

Finally, we verified the functional aspects of the solution -- the support for link aggregation and VLAN tagging.

EANTC partnered with Intel to implement test methodologies for benchmarking data plane performance on Intel architecture-based platforms that include key open source ingredients, including DPDK and OpenStack.

Next page: Throughput performance

Throughput performance
In this series of tests we evaluated the raw forwarding performance of Ensemble Connector, without an emphasis on any specific network function. We wanted to show the base performance that can be expected from Ensemble Connector for typical NFV setups. To achieve this, we performed the tests with a series of simple services, with one and two VNFs in a chain.

For the VNFs used in the throughput tests, we chose a Layer 2 (L2) forwarder application that simply copies frames between two interfaces. This forwarding does not require any special processing on the CPU, so the results mostly reflect the forwarding performance of the underlying platform, drivers and the virtual switch.

The way the ADVA platform is set up, Carrier Ethernet-like services are defined between physical and virtual ports: The services can be E-Line and E-LAN types and interconnect two or more interfaces.

To forward frames between the two physical Gigabit Ethernet interfaces used in the test, we defined two E-Line services between them and the virtual interfaces. Each subsequent test setup was based on this arrangement and required additional internal virtual services to be set up. As a result, the total forwarding performance of the platform was affected by how many services had to be operated in order to create the desired service chain.

The diagram below shows the two scenarios defined for the test. We used identical traffic and test procedures with each of the scenarios, to see how the performance is affected by the NFV service chain.

In the first scenario, we spawned a single VNF with the L2 forwarder and connected it to the two virtual ports where Ensemble Connector sent frames through the VNF. For the second scenario, we connected a second identical VNF and defined a service where traffic passes through both VNFs.

Figure 1: Test Setup

In ADVA's setup, dedicated CPU and memory resources were assigned to the VNFs, while the virtual switch had its own dedicated CPU core. The CPU resources consumed by the VNFs therefore did not affect the switch performance. However, the increased number of interconnections in the more complex chain scenarios led to increased processing time and a higher interrupt rate for the switch process.

Both Ensemble Connector and OVS implementations are based on DPDK drivers and libraries that provide the virtualized applications with quick and versatile access to the network interfaces. DPDK bypasses the traditional networking subsystem where the data coming from the network would be processed by the host OS and later passed to the virtual instance. DPDK vhost-user ports (user space socket servers) were configured for the data-plane connectivity between the virtual switch and the guest OS.
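
For readers who want to reproduce the OVS-DPDK baseline, the Python sketch below shows how a vhost-user port is typically wired up on a stock Open vSwitch installation using the standard ovs-vsctl commands. It illustrates only the generic OVS side of such a setup, not ADVA's proprietary Ensemble Connector configuration, and the bridge and port names (br0, vhost-user1) are placeholders.

# Sketch: adding a DPDK vhost-user port to a stock OVS-DPDK bridge.
# Generic Open vSwitch commands only; Ensemble Connector is configured
# through its own management interface and is not shown here.
import subprocess

def sh(*args: str) -> None:
    # Run a configuration command and raise if it fails.
    subprocess.run(args, check=True)

# Create a bridge with a user-space datapath so DPDK can drive it.
sh("ovs-vsctl", "add-br", "br0",
   "--", "set", "bridge", "br0", "datapath_type=netdev")

# Attach a vhost-user socket port; the guest's virtio-net device
# connects to the socket that OVS creates for this port.
sh("ovs-vsctl", "add-port", "br0", "vhost-user1",
   "--", "set", "Interface", "vhost-user1", "type=dpdkvhostuser")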

For the test traffic generation, we used an Ixia XM12 analyzer with IxNetwork 7.50 software. We connected two Gigabit Ethernet ports to the physical ports on the Ensemble Connector and simulated bidirectional traffic between them.

The available bandwidth for the bidirectional traffic between the ports was 2 Gbit/s. Using an RFC2544 test method, we found the throughput value for each frame size by using a binary search to locate the maximum traffic rate at which no frames were lost. At the same time, we measured the latency achieved at that maximum bandwidth.
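
The binary search itself is simple to express in code. The Python sketch below outlines the logic; measure_loss() stands in for a hypothetical hook into the traffic generator that runs one trial at a given rate and returns the frame loss ratio, and is not an actual IxNetwork API call.

# Minimal sketch of the RFC 2544 throughput binary search.
# measure_loss(rate_mbps) is a hypothetical traffic-generator hook that
# runs one trial at the given rate and returns the frame loss ratio.
def rfc2544_throughput(measure_loss, max_rate_mbps=2000.0, resolution_mbps=1.0):
    """Return the highest offered rate (Mbit/s) with zero observed frame loss."""
    lo, hi = 0.0, max_rate_mbps
    best = 0.0
    while hi - lo > resolution_mbps:
        rate = (lo + hi) / 2
        if measure_loss(rate) == 0.0:
            best = rate   # no loss: try a higher rate
            lo = rate
        else:
            hi = rate     # loss observed: back off
    return best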

The tables and diagrams below summarize the results achieved for the Ensemble Connector and Open vSwitch platforms as well as for both service chains.


Figure 2: Ensemble Connector Throughput Performance

Figure 3: Ensemble Connector Latency

With one VNF, Ensemble Connector showed line-rate throughput for packets of 512 bytes and above; with two VNFs, it showed line-rate throughput for packets of 256 bytes and above.

The latency measurements also show nearly constant performance. As expected, latency grows with the number of VNFs, as the traffic has to pass through more segments of the service chain. Still, it is nearly constant for all frame sizes and shows only a minimal difference between average and maximum latency.

We repeated the same set of tests with the standard OVS implementation. In comparison to the results achieved in the Ensemble Connector tests, the OVS platform shows significantly lower throughput performance and higher latency. In addition, the OVS latency varied across packet sizes.


Figure 4: OVS Throughput Performance

Figure 5: OVS Latency

Next page: Scalability

Scalability
For the next set of tests, we evaluated and compared the scalability of the Ensemble Connector and OVS implementations. Like any switch, a virtual switch should support basic Layer 2 features, including MAC learning. The challenge here is that as the number of MAC addresses learned from incoming traffic grows, the performance of the switch can degrade. This is especially relevant for software-based solutions, which lack comparable hardware-accelerated lookup caches.

We performed a series of throughput tests using the virtual switch only, without a VNF, following the same methodology as the throughput tests described in the previous section. The service interconnecting the physical test ports on the DUT was configured as an E-LAN, forcing it to learn the MAC addresses.

We performed three test runs for each device and frame size, with two, 600 and 1,200 MAC addresses, evenly distributed among the ports (one, 300 and 600 addresses per port, respectively). The first configuration is identical to the one used in the previous throughput test. The other two runs show how much the performance is affected by increasing the number of MAC addresses.

At the beginning of the test, the analyzer transmitted learning frames for each of the simulated MAC addresses, filling the MAC address table of the device. Using the CLI of the Ensemble Connector, we verified that the device learned the exact number of MAC addresses as defined by our configuration.
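
The Python sketch below shows one way such learning frames can be generated with Scapy, purely to illustrate the procedure; EANTC used the Ixia tester for this step, and the interface name and MAC address range are arbitrary placeholders.

# Sketch: priming the DUT's MAC table by sending one frame per
# simulated source address (illustration only; not the test tooling).
from scapy.all import Ether, Raw, sendp

def send_learning_frames(iface: str, count: int, base: int = 0x020000000000) -> None:
    frames = []
    for i in range(count):
        mac = base + i
        src = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8))
        # Broadcast destination with minimum-size payload padding.
        frames.append(Ether(src=src, dst="ff:ff:ff:ff:ff:ff") / Raw(b"\x00" * 46))
    sendp(frames, iface=iface, verbose=False)

# Example: 300 simulated hosts on one test port (600 MAC addresses in total).
# send_learning_frames("eth1", 300)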

Afterwards, we performed an RFC2544 throughput test between two test interfaces, exchanging up to 2 Gbit/s of traffic full-mesh between all simulated MAC addresses. As in the previous tests, we performed latency measurements.

The results of the tests are presented in the tables and diagrams below. This time, the difference between Ensemble Connector and OVS implementations was much more obvious.


On Ensemble Connector, we observed only a minor degradation of throughput performance and higher latency in the case of 64-byte frames. In all other cases the DUT showed 100% line-rate performance and nearly constant latency.

Figure 6: Ensemble Connector Scalability Performance

Figure 7: Ensemble Connector Latency

With the OVS implementation, however, we saw a major degradation of performance as the number of MAC addresses rose. The DUT did not reach full line-rate performance for packet sizes below 1,024 bytes, which it had achieved with just two active MAC addresses. Latency also grew dramatically, reaching close to 12 milliseconds.


Figure 8: OVS Scalability Performance

Figure 9: OVS Latency

Next page: Link aggregation

Link aggregation
So far, all the performance tests we executed involved only two physical ports on the Ensemble Connector device and traffic with a total maximum bandwidth of 2 Gbit/s. Link aggregation is available on both the Ensemble Connector and Open vSwitch implementations, and makes it possible to combine multiple physical ports into a common interface, extending the bandwidth provided by the device.

For this test, we defined two link aggregation groups on the device, combining each set of two ports into one group. Each LAG was connected to a virtual port, and an E-Line service was established between them. With this setup, we performed a throughput test similar to the previous ones, but with the bandwidth goal extended to 4 Gbit/s.

Figure 10: Link Aggregation Test Setup
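
For comparison, the Python sketch below shows how an equivalent two-port link aggregation group can be created on stock Open vSwitch using the standard add-bond command; the bridge, bond and NIC names are placeholders, and Ensemble Connector itself is configured through its own management interface rather than these commands.

# Sketch: a two-port LACP bond on stock Open vSwitch (generic OVS
# commands only; names br0, bond0, eth1 and eth2 are placeholders).
import subprocess

def sh(*args: str) -> None:
    subprocess.run(args, check=True)

# Bundle two physical NICs into one bond attached to the bridge and
# enable LACP with per-flow (L3/L4) load balancing.
sh("ovs-vsctl", "add-bond", "br0", "bond0", "eth1", "eth2",
   "--", "set", "port", "bond0", "lacp=active", "bond_mode=balance-tcp")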

The summary of the measurement results is presented in the table and the diagrams below. For both the Ensemble Connector and OVS platforms we observed similar throughput performance. However, latency fluctuated to a much greater degree with OVS: Although the average latency was comparable to or even better than Ensemble Connector's, the maximum latency was much higher. In the test where we used a mix of different frame sizes, OVS showed exceptionally high latency, approaching 10 milliseconds.


Figure 11: Ensemble Connector vs. OVS Throughput Performance

Figure 12: Ensemble Connector vs. OVS Latency

Next page: VLAN functionality

VLAN functionality
In our final series of the tests, we evaluated the VLAN tag manipulation capabilities of the Ensemble Connector. The Ensemble Connector platform is designed with Carrier Ethernet features in mind and can serve as a CPE in a Carrier Ethernet scenario, providing UNI (User Network Interface) and/or ENNI (External Network to Network Interface) interfaces.

As such, the device must conform to the MEF standards and provide VLAN tagging features that are not always available on generic switches. Since these features are not fully provided by the Open vSwitch implementation, we limited this test case to Ensemble Connector only.

We validated the two tagging modes defined for MEF interfaces -- 802.1q (single-tagged frames on UNI interfaces) and 802.1ad (double-tagged frames on ENNI). In addition, we tested the Q-in-Q mode for double-tagged frames, which does not adhere to the MEF standards but is often encountered in real-world environments.

We verified that the Ensemble Connector is able to add, remove and exchange VLAN tags as specified. In the case of double-tagged frames, Ensemble Connector was also able to manipulate the inner tag ("C-Tag") as needed.
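
To make the three encapsulations concrete, the Python/Scapy sketch below builds an example frame for each tagging mode; the VLAN IDs and addresses are arbitrary, and the sketch illustrates the frame formats only rather than any part of the test procedure.

# Sketch: the three tagging modes exercised in the VLAN tests.
from scapy.all import Ether, Dot1Q, Dot1AD, IP

# 802.1Q: a single C-tag, as seen on a UNI.
uni_frame = Ether(dst="00:00:00:00:00:02") / Dot1Q(vlan=100) / IP()

# 802.1ad: an S-tag (EtherType 0x88a8) around the C-tag, as seen on an ENNI.
enni_frame = Ether(dst="00:00:00:00:00:02") / Dot1AD(vlan=200) / Dot1Q(vlan=100) / IP()

# Q-in-Q: the outer tag also uses EtherType 0x8100 instead of 0x88a8.
qinq_frame = Ether(dst="00:00:00:00:00:02") / Dot1Q(vlan=200) / Dot1Q(vlan=100) / IP()

for name, frame in [("802.1Q", uni_frame), ("802.1ad", enni_frame), ("Q-in-Q", qinq_frame)]:
    print(name, frame.summary())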

Next page: Hardware and software in the test

Hardware and software in the test


The Intel Atom Processor C2000 product family offers a range of multi-core processing capabilities and features high levels of I/O and acceleration integration, resulting in a scalable system-on-chip (SoC). When paired with the Data Plane Development Kit (DPDK), this platform improves packet processing speeds to handle increased network traffic data rates and associated control and signaling infrastructure requirements.
