
Validating ADVA's Virtual Switch

Throughput performance
In this series of tests we evaluated the raw forwarding performance of Ensemble Connector, without an emphasis on any specific network function. We wanted to show the base performance that can be expected from Ensemble Connector for typical NFV setups. To achieve this, we performed the tests with a series of simple services, with one and two VNFs in a chain.

As the first VNF in the throughput tests, we used a Layer 2 (L2) forwarder application that simply copies frames between two interfaces. This VNF required no special processing on the CPU, so its results mostly reflect the forwarding performance of the underlying platform, drivers and the virtual switch.
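The L2 forwarder VNF itself is DPDK-based; purely as an illustration of the concept, the sketch below (our own minimal example, not ADVA's code) copies raw Ethernet frames between two Linux interfaces using AF_PACKET sockets. The interface names eth0 and eth1 are placeholders, and the script requires root privileges.

# Minimal illustrative L2 forwarder: copy raw Ethernet frames unchanged
# between two interfaces. This is not the DPDK-based VNF used in the test;
# it only sketches the same idea with Linux AF_PACKET sockets (root needed).
import select
import socket

ETH_P_ALL = 0x0003                   # receive frames of every protocol
IFACE_A, IFACE_B = "eth0", "eth1"    # placeholder interface names

def open_raw(ifname):
    """Open a raw socket bound to a single network interface."""
    s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    s.bind((ifname, 0))
    return s

if __name__ == "__main__":
    a, b = open_raw(IFACE_A), open_raw(IFACE_B)
    while True:
        # wait for a frame on either interface and copy it to the other one
        ready, _, _ = select.select([a, b], [], [])
        for s in ready:
            frame = s.recv(65535)
            (b if s is a else a).send(frame)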

In the ADVA platform, Carrier Ethernet-like services are defined between physical and virtual ports. These services can be of E-Line or E-LAN type and interconnect two or more interfaces.

To forward frames between the two physical Gigabit Ethernet interfaces used in the test, we defined two E-Line services between them and the virtual interfaces. Each subsequent test setup was based on this arrangement and required additional internal virtual services. As a result, the total forwarding performance of the platform depended on how many services had to be operated to create the desired service chain.

The diagram below shows the two scenarios defined for the test. We used identical traffic and test procedures for each scenario to see how performance was affected by the length of the NFV service chain.

In the first scenario, we spawned a single VNF with the L2 forwarder and connected it to the two virtual ports where Ensemble Connector sent frames through the VNF. For the second scenario, we connected a second identical VNF and defined a service where traffic passes through both VNFs.

Test Setup

In ADVA's setup, dedicated CPU and memory resources were assigned to the VNFs, while the virtual switch had its own dedicated CPU core. The CPU resources consumed by the VNFs therefore did not affect switch performance. However, the increased number of interconnections in the more complex chain scenarios led to increased processing time and interrupt rates for the switch process.

Both Ensemble Connector and OVS implementations are based on DPDK drivers and libraries that provide the virtualized applications with quick and versatile access to the network interfaces. DPDK bypasses the traditional networking subsystem where the data coming from the network would be processed by the host OS and later passed to the virtual instance. DPDK vhost-user ports (user space socket servers) were configured for the data-plane connectivity between the virtual switch and the guest OS.
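For reference, on a stock OVS-DPDK installation this kind of data-plane setup is typically created with ovs-vsctl, along the lines of the sketch below; the bridge name, port names and PCI address are placeholders, and Ensemble Connector's own configuration interface differs.

# Rough sketch of how DPDK vhost-user ports are typically configured on a
# stock OVS-DPDK installation (a generic example, not ADVA's configuration).
# The bridge name, port names and PCI address are placeholders.
import subprocess

commands = [
    # userspace (netdev) datapath so the bridge uses the DPDK forwarding plane
    ["ovs-vsctl", "add-br", "br0",
     "--", "set", "bridge", "br0", "datapath_type=netdev"],
    # physical NIC bound to DPDK, identified by its PCI address
    ["ovs-vsctl", "add-port", "br0", "dpdk-p0",
     "--", "set", "Interface", "dpdk-p0", "type=dpdk",
     "options:dpdk-devargs=0000:01:00.0"],
    # vhost-user socket that the guest VNF attaches to as a virtio data port
    ["ovs-vsctl", "add-port", "br0", "vhost-user0",
     "--", "set", "Interface", "vhost-user0", "type=dpdkvhostuser"],
]

for cmd in commands:
    subprocess.run(cmd, check=True)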

For the test traffic generation, we used an Ixia XM12 analyzer with IxNetwork 7.50 software. We connected two Gigabit Ethernet ports to the physical ports on the Ensemble Connector and simulated bidirectional traffic between them.

The available bandwidth for bidirectional traffic between the ports was 2 Gbit/s. Using the RFC 2544 test method, we determined the throughput for each frame size via binary search, locating the maximum traffic rate at which no frame loss occurs. At the same time, we measured the latency at that maximum rate.
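The search itself was driven by the IxNetwork software; the sketch below only illustrates the RFC 2544 binary-search logic, with the rate expressed as a fraction of line rate and the measure_loss() hook standing in for a tester-driven trial (both are assumptions of this example, not details of the Ixia setup).

# Sketch of the RFC 2544-style throughput search: find the highest offered
# rate (expressed here as a fraction of line rate) at which a trial completes
# with zero frame loss. measure_loss() is a placeholder for a tester-driven
# trial; in our tests the search was performed by the IxNetwork software.
def rfc2544_throughput(measure_loss, resolution=0.001):
    low, high = 0.0, 1.0        # search window as a fraction of line rate
    best = 0.0
    while high - low > resolution:
        rate = (low + high) / 2
        if measure_loss(rate) == 0:   # zero-loss trial: try a higher rate
            best = rate
            low = rate
        else:                         # loss observed: back off
            high = rate
    return best                       # highest zero-loss rate found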

The tables and diagrams below summarize the results achieved for the Ensemble Connector and Open vSwitch platforms as well as for both service chains.

Ensemble Connector Throughput Performance

Ensemble Connector Latency

With one VNF, Ensemble Connector showed line-rate throughput for packets of 512 bytes and larger; with two VNFs, line rate was reached for packets of 256 bytes and larger.

The latency measurements also show nearly constant performance. As expected, latency grows with the number of VNFs, since the traffic has to pass through more segments of the service chain. Still, it remains nearly constant across all frame sizes and shows only minimal difference between average and maximum latency.

We repeated the same set of tests with the standard OVS implementation. Compared with the results achieved in the Ensemble Connector tests, the OVS platform showed significantly lower throughput and higher latency. In addition, the OVS latency varied across packet sizes.

OVS Throughput Performance

OVS Latency

