Virtual switch FIB scalability
- Evaluation overview: In a pure virtual switching scenario, VPP showed no throughput degradation when forwarding to 2,000 IPv4 or Ethernet MAC addresses at 20 Gbit/s, less than 1% throughput reduction with 20,000 MAC addresses, and a 23% reduction when forwarding to 20,000 IPv4 addresses. OVS throughput dropped by 81% when forwarding to 2,000 IPv4 addresses; Ethernet forwarding to 2,000 MAC addresses degraded to a similar absolute level of around 4 Gbit/s.
Both previous scenarios focused exclusively on Ethernet-layer throughput with a small number of flows because they both involved virtual machines. In the third scenario, we evaluated pure virtual switching without any actual application. This is, of course, not a realistic application setup: rather, it is a reference test of vSwitch properties that need to be determined independently of VNF performance.
One of the most basic and important scalability figures of an Ethernet switch is its ability to handle many Ethernet flows between different endpoints (identified by MAC addresses) in parallel. In a data center, there is usually far more East-West traffic between servers and services directly connected to the Ethernet segment than there is North-South traffic. A vSwitch must therefore enable virtual services to participate in data center communication and be able to connect to many Ethernet destinations in parallel.
Separately, vSwitches obviously need to support many IP addresses in their forwarding information base (FIB) simultaneously when configured for IP forwarding. If traffic is routed towards a virtual firewall or a virtualized packet filter, tens of thousands of flows from thousands of IP addresses are usually involved.
We verified the forwarding performance of the standalone virtual switch with multiple Layer 2/Layer 3 FIB sizes.
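As an illustration of how such a test can be provisioned, the sketch below generates batch files that populate the two FIBs with an arbitrary number of entries. The interface name, bridge name, next hop, and addressing are our hypothetical assumptions, not the setup used in the test; the `ip route add` (VPP CLI) and `ovs-ofctl add-flow` command forms exist, but exact syntax should be checked against the installed versions.

```python
#!/usr/bin/env python3
"""Generate CLI batch files that populate a vSwitch FIB with n entries.

Interface/bridge names and addressing are illustrative assumptions.
"""
import ipaddress

def vpp_ipv4_routes(n, next_hop="192.168.1.2", out_if="GigabitEthernet0/8/0"):
    """Yield VPP CLI lines installing n host routes, one per destination."""
    base = ipaddress.ip_address("10.0.0.0")
    for i in range(n):
        yield f"ip route add {base + i}/32 via {next_hop} {out_if}"

def ovs_mac_flows(n, bridge="br0", out_port=2):
    """Yield ovs-ofctl invocations installing n L2 forwarding flows."""
    for i in range(n):
        mac = "02:00:{:02x}:{:02x}:{:02x}:{:02x}".format(
            (i >> 24) & 0xFF, (i >> 16) & 0xFF, (i >> 8) & 0xFF, i & 0xFF)
        yield f"ovs-ofctl add-flow {bridge} dl_dst={mac},actions=output:{out_port}"

if __name__ == "__main__":
    with open("vpp_fib_20000.txt", "w") as f:
        f.write("\n".join(vpp_ipv4_routes(20_000)) + "\n")
    with open("ovs_fib_20000.sh", "w") as f:
        f.write("\n".join(ovs_mac_flows(20_000)) + "\n")
```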
VPP showed its strengths, based on its optimized, vector-based handling of large tables. It achieved very consistent IPv4 forwarding: throughput did not drop at all when forwarding to 2,000 IPv4 addresses compared with a single-address scenario, and it dropped by only 23% when forwarding to 20,000 IPv4 addresses. Average forwarding latency was largely unaffected by the larger tables, and the maximum IPv4 forwarding latency remained bearable at 400 microseconds for 2,000 address entries and 1,200 microseconds for 20,000 entries.
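VPP's actual IPv4 FIB is a specialized trie traversed over vectors of packets, which we cannot reproduce in a few lines. As a rough intuition for why lookup cost can stay nearly flat as the table grows, though, the following sketch (our illustration only) times exact-match lookups against tables of 1, 2,000, and 20,000 entries; a constant-time lookup structure shows essentially no per-lookup penalty at larger sizes.

```python
import random
import timeit

def build_fib(n):
    """Exact-match table mapping destination IP strings to a next-hop index."""
    return {f"10.0.{(i >> 8) & 0xFF}.{i & 0xFF}": i % 4 for i in range(n)}

for size in (1, 2_000, 20_000):
    fib = build_fib(size)
    keys = random.choices(list(fib), k=10_000)
    t = timeit.timeit(lambda: [fib[k] for k in keys], number=100)
    print(f"{size:>6} entries: {t / (100 * len(keys)) * 1e9:7.1f} ns per lookup")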
In contrast, Open vSwitch appears to use less optimized FIB lookups. Throughput dropped from 20 Gbit/s (IPv4) and 8.8 Gbit/s (Ethernet) in the single-entry case down to around 4 Gbit/s in both cases with 2,000 IPv4 and MAC addresses. With 20,000 FIB entries, the OVS-DPDK implementation was not usable in its current version: throughput dropped to almost zero and maximum latency skyrocketed to 37 milliseconds (not microseconds!). We are not complaining -- after all, it is free software -- and we hope the optimizations Cisco pioneered in VPP will eventually find their way into OVS as well.
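The percentage figures quoted above follow directly from the measured rates. A few lines make the arithmetic explicit; using the rounded "around 4 Gbit/s" value is why the IPv4 result comes out at 80% rather than the reported 81%.

```python
def reduction(before_gbps, after_gbps):
    """Relative throughput loss in percent."""
    return 100.0 * (1.0 - after_gbps / before_gbps)

# Rounded rates as quoted in the text ("around 4 Gbit/s" at 2,000 entries).
print(f"OVS IPv4:     {reduction(20.0, 4.0):.0f}% loss")  # ~80% (81% with the exact rate)
print(f"OVS Ethernet: {reduction(8.8, 4.0):.0f}% loss")   # ~55%
```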
These results indicate much better VPP performance in larger-scale network deployments.