They go hand in hand in many networks, but the relationship is strained. Is there a way to improve the efficiency of an OpenStack and OVS deployment?

Chloe Jian Ma, Senior Director, Cloud Market Development, Mellanox Technologies

October 6, 2015

OpenStack & OVS: From Love-Hate Relationship to Match Made in Heaven

There are many high-profile love-hate relationships around us, one of the most noteworthy being between two of the billionaire Silicon Valley CEOs, Larry Ellison and Marc Benioff.

Benioff started working at Oracle Corp. (Nasdaq: ORCL) when he was 23. A rising star, he became its youngest VP at the age of 26. During his 13 years at Oracle, Benioff became one of Ellison's closest friends and most trusted lieutenants. Fortune magazine described their relationship thus: "They sailed to the Mediterranean on Ellison's yacht, visited Japan during cherry blossom season, spent Thanksgiving together, and even double-dated."

But once Benioff's Salesforce.com Inc. grew up to be a strong rival of Oracle, the two engaged in public spats, openly bashing each other's companies. Ellison repeatedly belittled Salesforce's SaaS software, calling it an "itty bitty application" mostly running on Oracle's databases, while Benioff shrugged off Oracle's software as a "false cloud."

In the cloud space, we see a similar relationship between OpenStack and Open vSwitch (aka OVS). They obviously love each other and are almost inseparable: according to the OpenStack user survey results published in May 2015, 46% of all production OpenStack deployments use OVS as the Neutron network driver (see Figure 1: May 2015 OpenStack User Survey Results on Neutron Driver). That makes OVS the most popular networking plugin for OpenStack clouds.

Figure 1: May 2015 OpenStack User Survey Results on Neutron Driver

If we dig a little deeper, OpenStack and OVS were designed to address different issues. OpenStack grew up in the "cloud" family and is the clear winner among open source cloud management and orchestration platforms that provide computing resources as a service over a network. OVS is an offspring of "virtualization" -- specifically, compute virtualization -- enabling multiple virtual machines (VMs) running on the same physical server to communicate with each other. It is no coincidence that cloud and virtualization are often discussed together, since virtualization delivers the resource efficiency that most clouds desire.

But boy, oh boy, the OpenStack clan has a lot of nasty things to say about OVS, especially when cloud resources are being built for high performance and scale. The top complaints include:

  • Man, it is slow!: Let's face it, cloud builders are no longer using the same old servers with 100Mbit/s or 1Gbit/s NICs. According to Crehan Research, we are reaching the inflection point where high-speed Ethernet (10Gbit/s and above, combined) will exceed 50% of overall server-class NIC (network interface card) shipments (see Figure 2: Server-Class Adapter and LOM Shipments). Cloud service providers are leading the adoption of 25, 40, 50 and even 100Gbit/s server NICs to improve overall infrastructure efficiency. With high-speed server I/O, multiple packets can arrive every microsecond, and vanilla OVS just can't keep up. Without acceleration, OVS achieves about 500,000 packets per second (pps) on a 10Gbit/s link, where the theoretical maximum with minimum-size 64-byte frames is roughly 14.88 million pps (see the short calculation after this list). In real application scenarios where telco virtualized network functions (VNFs) are deployed and traffic is dominated by small voice packets, OVS can achieve only 1/80th -- that's right, one eightieth! -- of bare-metal I/O performance.

Figure 2: Server-Class Adapter and LOM Shipments

  • What do you mean it drops my packets?: Even before OVS grinds to a complete stop and can't forward any further packets, things can slow down significantly: queues build up, latency skyrockets and packets get dropped. For real-time applications such as VoIP, this shows up as degraded sound quality, pauses or dropped calls.

  • It burns CPU like there is no tomorrow!: Once upon a time, all packet processing in Cisco's routers was done in the so-called slow path (aka the CPU). But no router or switch from any reputable networking vendor today forwards packets with the CPU. Instead, packet processing and forwarding are offloaded to a hardware fast path, normally implemented in ASICs or network processors. Bare-metal servers can achieve much higher packet I/O performance because the majority of packet forwarding can be offloaded to the NIC. With compute and network virtualization, however, because of the path packets must traverse within a server, and because encapsulation changes the packet format, not every NIC can perform the required offload. When packet processing (including checksum/CRC calculation and encapsulation/de-capsulation) falls back to the CPU, it wreaks havoc: multiple CPU cores must shift from application processing to packet processing, reducing the overall efficiency of the infrastructure.
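For concreteness, here is the arithmetic behind the "theoretical maximum" figure above, as a minimal Python sketch. The 20 bytes of per-frame overhead (preamble, start-of-frame delimiter and inter-frame gap) come from standard Ethernet framing, not from the article itself:

```python
# Back-of-the-envelope line-rate math for a 10Gbit/s link.
# Each minimum-size Ethernet frame occupies 64 bytes plus 20 bytes
# of wire overhead (preamble + start delimiter + inter-frame gap).

LINK_BPS = 10e9          # 10 Gbit/s link speed
FRAME_BYTES = 64         # minimum Ethernet frame
WIRE_OVERHEAD = 20       # preamble + SFD + inter-frame gap

bits_per_frame = (FRAME_BYTES + WIRE_OVERHEAD) * 8   # 672 bits on the wire
max_pps = LINK_BPS / bits_per_frame                  # ~14.88 million pps

measured_pps = 500_000   # unaccelerated OVS figure cited above
print(f"theoretical max: {max_pps / 1e6:.2f} Mpps")
print(f"vanilla OVS reaches {measured_pps / max_pps:.1%} of line rate")
```

At small packet sizes, in other words, unaccelerated OVS is using only a few percent of what the wire can carry.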

So how can we change the dynamics of this relationship from bitter love to a harmonious happily-ever-after? The answer is to offload certain packet forwarding tasks from OVS to the NIC hardware, without sacrificing network programmability.

Let's take a look at the current OVS architecture (Figure 3: OVS Architecture). It consists of three main components; a short sketch after the list shows one way to inspect each of them:

Figure 3: OVS Architecture

  • ovs-vswitchd -- Open vSwitch daemon (Slow Path): This software module usually runs in user space. It talks to a control cluster, which oftentimes includes network management modules and an SDN controller, takes the remote network configuration and programs it into the kernel fast path.

  • ovsdb-server -- Open vSwitch database server where OVS switch-level configuration and policy information is stored.

  • openvswitch_mod.ko -- kernel module (Fast Path): This software module usually runs in the OS or hypervisor kernel and actually performs the packet processing.
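On a host running OVS, you can poke at all three components with the standard command-line tools. The Python wrapper below is just an illustrative sketch around three real commands (ovs-vsctl, ovs-appctl and ovs-dpctl); the output will vary with your configuration:

```python
import subprocess

def show(cmd):
    """Run an OVS CLI command and print its output."""
    print("$", " ".join(cmd))
    print(subprocess.run(cmd, capture_output=True, text=True).stdout)

# Queries ovsdb-server: dumps bridge and port configuration.
show(["ovs-vsctl", "show"])

# Asks ovs-vswitchd about the datapaths it manages.
show(["ovs-appctl", "dpif/show"])

# Dumps the flow entries cached in the kernel fast path.
show(["ovs-dpctl", "dump-flows"])
```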

With this design, the first packet of each flow is punted from the fast path to the slow path to resolve the forwarding information. Once the route is resolved, it is programmed into the fast path, so subsequent packets hit the cached entry in the kernel and are forwarded in the fast path (see Figure 4: Packet Forwarding in OVS; a toy sketch of this caching logic follows the figure).

Figure 4: Packet Forwarding in OVS
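To make the fast-path/slow-path split concrete, here is a toy model of the flow cache. This is a sketch only: names like fast_path and slow_path_lookup are illustrative stand-ins, not real OVS APIs, and the real datapath is kernel C code, not a Python dict:

```python
# Toy model of OVS packet forwarding: a kernel flow cache with
# an upcall to user space on a miss.

fast_path = {}  # flow key -> action; stands in for the kernel's cached entries

def slow_path_lookup(flow_key):
    """Stand-in for ovs-vswitchd resolving a flow against the
    forwarding state it learned from the controller."""
    return f"output to port for {flow_key}"

def forward(flow_key):
    action = fast_path.get(flow_key)
    if action is None:
        # First packet of the flow: punt to user space,
        # resolve the forwarding decision, then cache it.
        action = slow_path_lookup(flow_key)
        fast_path[flow_key] = action
    return action

forward(("10.0.0.1", "10.0.0.2", 80))  # slow path; installs the cache entry
forward(("10.0.0.1", "10.0.0.2", 80))  # fast path; hits the cache
```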

When part or all of the OVS data path is offloaded to an OVS offload engine embedded in the NIC, one additional layer is added, as shown in Figure 5 (Figure 5: Packet Forwarding with OVS Offload). The first packet of each flow is punted from the OVS offload engine to the OVS kernel module and ultimately to ovs-vswitchd in user space. The forwarding entry is resolved, programmed into the fast path and cached in the NIC hardware. Subsequent packets hit the cached entry in the NIC hardware and are forwarded by the NIC. (A sketch extending the toy model with this hardware tier follows the figure.)

Figure 5: Packet Forwarding with OVS Offload
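Extending the same toy model, offload simply adds one more cache tier in front of the kernel. Again, this is only a sketch: the nic_cache tier and its capacity limit are my own illustrative assumptions (real NIC flow tables are finite, but their sizes vary by hardware):

```python
# Extending the toy model with a NIC hardware tier: the NIC's
# embedded offload engine caches flows in front of the kernel.

nic_cache = {}      # flows the NIC forwards entirely in hardware
kernel_cache = {}   # kernel fast path, as before
NIC_CAPACITY = 4    # hardware tables are finite (illustrative number)

def slow_path_lookup(flow_key):
    return f"output to port for {flow_key}"  # stand-in for ovs-vswitchd

def forward(flow_key):
    if flow_key in nic_cache:            # best case: never touches the CPU
        return nic_cache[flow_key]
    action = kernel_cache.get(flow_key)
    if action is None:                   # miss everywhere: punt to user space
        action = slow_path_lookup(flow_key)
        kernel_cache[flow_key] = action
    if len(nic_cache) < NIC_CAPACITY:    # cache in hardware if room remains
        nic_cache[flow_key] = action
    return action

forward(("vm1", "vm2"))   # punted all the way up, then cached in both tiers
forward(("vm1", "vm2"))   # hits the NIC cache: no CPU cycles spent
```

The capacity check is the crux of the trade-off discussed next: hardware tables are a scarce resource, so the goal is to offload the majority of packets, not every feature.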

Of course, you can't have your cake and eat it too, and expecting the NIC hardware to offload every single new feature that can be supported in software is unrealistic. But as long as the majority of packets can be offloaded to and handled by the NIC hardware, you get network programmability and flexibility, good, deterministic I/O performance, and higher system efficiency stemming from the CPU resources freed up from packet processing. To me, that makes OpenStack and OVS a perfect match made in heaven.

— Chloe Jian Ma, Senior Director, Cloud Market Development, Mellanox Technologies Ltd. (Nasdaq: MLNX)

About the Author

Chloe Jian Ma

Senior Director, Cloud Market Development, Mellanox Technologies

Chloe Jian Ma is Senior Director of Marketing leading Cloud Market Development at Mellanox Technologies, where she is responsible for driving awareness, thought leadership and adoption of the Mellanox CloudX reference architecture and solutions for open and efficient cloud build-out. Her technical background spans cloud, virtualization, telecom, and enterprise hardware and software. She holds an MBA in Marketing and Strategy from the University of Pennsylvania's Wharton School, an MSEE from the University of Southern California, and a BS in Electronics from Peking University, Beijing, China.

