NFV (Network Functions Virtualization)

Does NFV Have a Packet Processing Problem?

Is there a packet performance problem lurking behind NFV that could make the deployment of virtual functions in a typical IT-based cloud environment that much more complicated?

Metaswitch Networks thinks so, and the vendor is hoping to draw attention to what it sees as performance problems that could challenge the implementation of NFV, according to the company's CTO, Martin Taylor.

By raising the issue and also laying out software and hardware fixes, Metaswitch wants to get telecom service providers talking about it and sharing their thoughts on which approach they prefer.

Taylor first spoke to me about his company's concerns back in November. Now he's brought them to light in his Metaswitch blog, which you can read here.

Specifically, Metaswitch found in its testing that network-intensive functions, such as those associated with user plane functionality -- versus control plane -- suffer performance issues when they are implemented in a typical IT-based cloud environment because that type of infrastructure is limited in its ability to handle heavy network workloads.

Many of its telecom service provider customers aren't aware of those limitations, Taylor says, and may be in for a nasty shock when they try to virtualize some functions on an OpenStack environment, for example.

"When we talk to telcos about what they are doing with OpenStack and the cloud environment for NFV, we find that they are obsessed with orchestration," Taylor told me back in November. "But our experience shows that there are these pretty serious performance issues lurking under the covers. Telcos might get a nasty shock when trying to do some virtualized network functions."


For more on NFV, head over to our dedicated NFV content channel here on Light Reading.


Metaswitch encountered these issues in the process of running its Perimeta session border controller, which handles both control plane and user plane functions. While the SBC runs fine on bare metal, its performance in a virtualized environment suffers because of the inefficiency of the data path between the physical network and the virtual machines. That path is provided by Open vSwitch software, which grew up in the IT environment and can't handle the million packets per second workload of a network SBC, according to Taylor.
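For a rough sense of the scale involved, the minimal Python sketch below samples the Linux kernel's per-interface counters to estimate a NIC's received-packet rate. It is Linux-specific, and the interface name "eth0" is an assumption; substitute the NIC under test.

```python
# A minimal packets-per-second gauge for one NIC, sampling the Linux
# kernel's cumulative per-interface counters in /proc/net/dev.
# Assumption: an interface named "eth0" exists; adjust as needed.
import time

def rx_packets(iface: str) -> int:
    """Read the cumulative received-packet counter for iface."""
    with open("/proc/net/dev") as f:
        for line in f:
            if line.strip().startswith(iface + ":"):
                # Fields after the colon: rx bytes, rx packets, errs, ...
                return int(line.split(":")[1].split()[1])
    raise ValueError(f"interface {iface} not found")

if __name__ == "__main__":
    iface = "eth0"  # assumption: the NIC carrying the test traffic
    before = rx_packets(iface)
    time.sleep(1.0)
    after = rx_packets(iface)
    print(f"{iface}: ~{after - before} packets/sec received")
```

Run against a VM's virtual NIC under load, a gauge like this makes the gap Taylor describes visible: a software data path that tops out well short of the million packets per second an SBC can drive on bare metal.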

The hardware fix essentially bypasses the vSwitch by establishing a direct data path between the physical network and the VM using Ethernet NIC (network interface card) technology called Single Root Input/Output Virtualization (SR-IOV). This approach is probably the most efficient, but it doesn't support overlay-based network virtualization, which is commonly used when virtualizing large telecom networks.
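For a concrete picture of the host-side setup, here is a minimal sketch of enabling SR-IOV virtual functions through the standard Linux sysfs interface, assuming a root shell and an SR-IOV-capable NIC; the interface name "enp3s0f0" and the VF count are hypothetical. Each virtual function can then be passed directly into a VM, skipping the vSwitch.

```python
# A minimal sketch of exposing SR-IOV virtual functions (VFs) on a
# supporting NIC via Linux sysfs (requires root). The interface name
# and VF count below are assumptions for illustration only.
from pathlib import Path

def set_vf_count(iface: str, num_vfs: int) -> None:
    """Expose num_vfs SR-IOV virtual functions on a physical NIC."""
    dev = Path(f"/sys/class/net/{iface}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if num_vfs > total:
        raise ValueError(f"{iface} supports at most {total} VFs")
    # The kernel requires resetting to 0 before setting a new count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(num_vfs))

if __name__ == "__main__":
    set_vf_count("enp3s0f0", 4)  # each VF can be handed to one VM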

The software fix, which does support overlays but is less efficient than the hardware option, uses a commercially accelerated vSwitch to create a software-based data path between a conventional Ethernet NIC and the virtual machines, Taylor explains. Memory shared between the vSwitch and the virtual machines can be used to reduce or eliminate packet copying, making processing more efficient.
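The toy Python sketch below illustrates the shared-memory idea in the abstract only; it is not Metaswitch's mechanism or any real vSwitch's data path. The point is simply that when producer and consumer operate on the same buffer, a packet never has to be copied to cross the process boundary.

```python
# A conceptual illustration (not a real vSwitch data path) of why
# shared memory avoids packet copies: both sides touch one buffer,
# so handing a packet over requires no duplication.
from multiprocessing import Process, shared_memory

PKT_SIZE = 1500  # a typical Ethernet MTU

def consumer(shm_name: str) -> None:
    shm = shared_memory.SharedMemory(name=shm_name)
    # Reads straight from the shared buffer: no copy was made to
    # deliver the "packet", unlike a send()/recv() style path.
    print("consumer sees:", bytes(shm.buf[:4]))
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=PKT_SIZE)
    shm.buf[:4] = b"PKT0"  # the "NIC" side writes a frame in place
    p = Process(target=consumer, args=(shm.name,))
    p.start()
    p.join()
    shm.close()
    shm.unlink()
```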

"We need the telcos to start to home in on what are their preferred options," he said. "We want the market to choose one."

— Carol Wilson, Editor-at-Large, Light Reading

cnwedit 1/6/2015 | 11:50:19 AM
Re: Options
Thanks for the feedback. One of the reasons I didn't write this up earlier was that my initial inquiries in following up on the Metaswitch info drew the email/voice mail equivalent of blank stares. Apparently, I was asking the wrong people.
dwx 1/6/2015 | 11:33:59 AM
Options
I think just about everyone is aware of the processing limitations...

SR-IOV is the key to using the Intel DPDK and fast processing, and right now there isn't good support in the hypervisor software for virtualizing the functions, so you do end up with a 1:1 mapping of virtual NIC to physical NIC, basically bypassing the virtualization layer. There are NIC options where the NIC hardware itself is virtualized and presents itself as multiple NICs even to the hypervisor, but the 1:1 mapping of physical to VM is still not ideal for spinning up instances. I think things will get there eventually and you will see OVS, along with Juniper's Contrail VRouter and probably ALU's Nuage, support the ability to present SR-IOV to an underlying VM.

There are also acceleration options like Netronome's FlowNIC, which is a network processor on a PCI card that already supports accelerating OVS. We may see more of that come around due to the limitations of packet processing with general CPUs. General CPUs also use a lot more power than dedicated packet hardware.