Assuring Performance in Hybrid Virtualized Networks

Can network appliances be virtualized and still provide high performance at high speeds? That was one of the key questions left open at the end of the previous installment in my series of blogs, when we took a closer look at network appliances and how they can be used to provide real-time insight for management and security of SDN/NFV networks. (See Managing SDN & NFV With Real-Time Insight.)

In many ways, network appliances lend themselves very well to virtualization. They are already based on standard server hardware with applications that are designed to run on x86 CPU architectures. The issue is performance.

Even for physical network appliances, performance at high speed is an issue. That is why most high-performance appliances use analysis acceleration hardware. While analysis acceleration hardware does free up CPU cycles for more analysis processing, most network appliances still use all the CPU processing power available to perform their tasks.

From a virtualization point of view, this means that virtualization of appliances can only be taken so far. If the data rate and the amount of data to be processed are low, then a virtual appliance can be used (even on the same server as the clients being monitored).

However, once the data rate and volume increase, the CPU processing requirements of the virtual appliance increase as well. At first, this will mean that the virtual appliance needs exclusive access to all the CPU resources available. But even then, it will run into some of the same performance issues as physical network appliances using standard NIC interfaces with regard to packet loss, precise time-stamping capabilities and efficient load balancing across the available CPU cores.
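The load-balancing point in particular can be made concrete. The sketch below is a minimal, purely illustrative model (plain Python; the function name and CRC32 hash are stand-ins for the Toeplitz hash real NICs use) of the receive-side scaling (RSS) technique that NICs and acceleration hardware implement: each flow is hashed to one core, so all packets of a flow are processed by the same core and per-flow state needs no locking.

```python
import zlib

NUM_CORES = 4  # worker cores available to the appliance

def core_for_packet(src_ip, dst_ip, src_port, dst_port):
    """Map a flow's addressing tuple to a worker core (RSS-style).

    Keeping every packet of a flow on the same core preserves packet
    order within the flow and avoids cross-core synchronization.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return zlib.crc32(key) % NUM_CORES

# The same flow always lands on the same core:
assert core_for_packet("10.0.0.1", "10.0.0.2", 1234, 80) == \
       core_for_packet("10.0.0.1", "10.0.0.2", 1234, 80)
```

In software this hash must be computed per packet on the CPU, which is exactly the overhead that hardware-based RSS and analysis acceleration offload.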

The fact of the matter is that virtualization of appliances cannot escape the constraints that network appliances face in the physical world. These same constraints will be an issue in the virtualized world and must be dealt with accordingly.

One way of addressing this issue is to consider the use of physical appliances to monitor and secure virtual networks. Virtualization-aware network appliances can be "service-chained" with virtual clients as part of the service definition. It requires that the network appliance can identify virtual networks, which is typically done using VLAN encapsulation today, a method already broadly supported by high-performance appliances and analysis acceleration hardware. This enables the appliance to provide its analysis functionality in relation to the specific VLAN and virtual network.
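As a rough illustration of what "identifying virtual networks" via VLAN encapsulation involves, the sketch below (Python, assuming raw Ethernet frames as bytes; the helper name is mine, and QinQ double-tagging is ignored) extracts the 802.1Q VLAN ID that an appliance would use to map traffic to a specific virtual network:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q tag

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if
    the frame is untagged. Assumes at most a single tag (no QinQ)."""
    if len(frame) < 18:          # too short to carry a VLAN tag
        return None
    ethertype, = struct.unpack_from("!H", frame, 12)  # after dst+src MACs
    if ethertype != TPID_8021Q:
        return None
    tci, = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF          # low 12 bits of the TCI are the VLAN ID

# A frame tagged with VLAN 100 (broadcast MACs, IPv4 payload stub):
frame = b"\xff" * 12 + struct.pack("!HH", TPID_8021Q, 100) + b"\x08\x00" + b"\x00" * 20
# vlan_id(frame) → 100
```

An appliance that keys its per-flow statistics and analysis on this ID can then report per virtual network, which is what makes the service-chaining model described above workable.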

This can be a very useful solution in a practical, phased approach to SDN and NFV migration. It is broadly accepted that certain high-performance functions in the network are currently difficult to virtualize without taking a considerable performance hit. Pragmatic solutions therefore advocate an SDN and NFV management and orchestration approach that takes account of both physical and virtual network elements. This means that policy and configuration do not have to concern themselves with whether a resource is virtualized or not, but can use the same mechanisms to "service-chain" the elements as required.

We should therefore expect that the introduction of SDN and NFV will require a mixture of existing and new solutions for management and security under a common framework with common interfaces and topology mechanisms. With this in place, functions can be virtualized when and where it makes sense without affecting the overall framework or processes.

— Dan Joe Barry, VP of Positioning and Chief Evangelist, Napatech

gzecheru 9/8/2015 | 5:53:28 PM
Re: Virtualization layers Here is another example of test results from EANTC (focused on IPsec)

Ixia has a whitepaper that discusses how to measure the performance of virtual network functions (VNFs) and compare them to their physical counterparts.

prayson.pate 9/8/2015 | 3:34:12 PM
Re: Virtualization layers I agree that it is difficult to virtualize layer 2 functions and run them in software on a standard server.  However, difficult is not impossible. 

With good software design and the judicious use of cores, we have shown that gigabit performance with low latency and jitter is possible on small Atom-based servers.  This performance includes standard Carrier Ethernet functions such as SOAM.  See the report here: http://www.overturenetworks.com/wp-content/uploads/2015/08/Overture_NFV_Performance_TestResults_Final.pdf

There are cases that call for hardware-based acceleration. However, there are many more that can benefit from a pure-play approach using standard servers.  These applications will scale up in speed and down in cost as processor technology advances.

t.bogataj 9/3/2015 | 1:27:56 PM
Re: Virtualization layers Sterling,

In terms of monitoring (now that you mention it), I agree that the responsiveness of a SW implementation is a bigger challenge than with HW-accelerated processing. CFM and EFM OAM make a good example: not so much the throughput-related issues of a SW implementation, but rather its responsiveness (i.e. delay), can be a challenge.

Sterling Perrin 9/3/2015 | 9:36:04 AM
Re: Virtualization layers t.bogataj,

You are correct, I was referring to forwarding plane not control plane. SDN control of Ethernet networks is definitely doable and, in fact, is generally the first place telecom providers have applied SDN control in their networks. 

With virtualization (specifically using COTS HW), I see challenges at Layer 2, and that is what I'm referring to in my previous post. One issue is the ability of CPUs to handle the layer 3 functions and the layer 2 functions while maintaining performance. Another one I am seeing is that specialized HW is used in Layer 2 networks for statistics gathering, monitoring, etc., and I don't see operators wanting to give up that visibility. It's becoming more and more important for their services and for their customers.

t.bogataj 9/3/2015 | 7:44:21 AM
Re: Virtualization layers Sterling,

I guess you are referring to the data plane when comparing L2 and L3. If so, I see no reason to claim that L2 is more performance-sensitive than L3. Rather the opposite. On L2, after a lookup (VLAN+MAC) is done, small alterations (may) take place (add/remove tags) and the Ethernet frame is sent to the egress queue/port. On L3, after a lookup (IP) is done, alterations take place and the IP packet is sent to the egress queue/port. But the alterations on L3 are more extensive than on L2: strip the ingress frame of its Ethernet headers, modify the IP checksum, recalculate the UDP checksum (!) if the payload is UDP, add a complete Ethernet header at egress... in the simplest scenario. More work on L3 than on L2.
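To make the "modify the IP checksum" step concrete: every L3 hop must recompute (or incrementally update) the IPv4 header checksum after changing the header, e.g. decrementing the TTL. A minimal sketch of the full RFC 791 one's-complement computation (Python, illustrative only; real routers use the incremental update of RFC 1624):

```python
def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum over the 16-bit words of an IPv4 header
    (with the checksum field itself zeroed), per RFC 791."""
    if len(header) % 2:
        header += b"\x00"                      # pad odd-length input
    total = sum(int.from_bytes(header[i:i + 2], "big")
                for i in range(0, len(header), 2))
    while total > 0xFFFF:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Well-known example header (checksum field zeroed); its checksum is 0xB861:
hdr = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
# ipv4_checksum(hdr) → 0xB861
```

Even this small loop, run per packet, is part of why L3 forwarding in software costs more CPU than an L2 tag operation.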

Anyway, once the whole world gets software-defined, the boundaries between L2 and L3 will disappear: all data-plane lookups will be L1-L4, and all manipulations rule-based.

If I have misunderstood your point completely -- and you were referring to the control plane -- then performance is not an issue at all.

danjoe 9/3/2015 | 4:18:23 AM
Re: Virtualization layers Thanks Sterling

You are absolutely correct, and I think we need to be pragmatic as the industry must make progress on SDN and NFV -- not just from a PoC point of view, but in terms of broad adoption and deployment. To do that, we need to be practical about what we can do today and what we should aim to improve tomorrow. Let's use all the available tools to make this work rather than waiting for the perfect solution. 

Remember, SDN and NFV were inspired by OTT players with web-scale datacenters and DevOps approaches. These guys are very pragmatic and will use whatever technology or solution that provides an improvement in cost or performance. They see technology as a means to an end and not an end in itself.

Perhaps something we should keep in mind as we move forward...

Dan Joe

Sterling Perrin 9/2/2015 | 11:23:44 AM
Virtualization layers Dan Joe,

Great piece. As a hardware guy, I always think of networking in terms of the OSI layers. Virtualizing layer 3 is an easy choice: operators are eliminating layer 3 appliances and running them in software over layer 2 Ethernet networks.

Not virtualizing layer 0/1 is an easy choice, too: optics is an analog world and, as many now observe, you can't virtualize a photon. So optics remains in the physical world. 

But Layer 2 is the big grey area for virtualization: some want to virtualize all of Layer 2/Ethernet, but others are realizing (and I think you make this point in your blog) that there are a lot of sacrifices on performance and management in doing so. I think virtualization of Layer 2 will be the area of debate over the next year. 
