10 Myths About NFV (Mostly) Dispelled
The transition toward network functions virtualization (NFV) is in progress and, as with any technology transition, companies are proceeding with caution. The trick is figuring out which anticipated hazards are real, which have already been cleared, and how to avoid being frozen by hazards that never materialize.
Make no mistake, the course ahead is tricky. Evangelists for new technologies can get overly enthusiastic, envisioning everyone at the finish line before anyone has cleared all the technological barriers. And then there are those companies who take off without surveying the course ahead -- they're the ones that bumble into hurdles they could have anticipated if they'd simply prepared better.
At the same time, companies can be too hesitant to explore their options, deterred by suspicions that sound reasonable but aren't borne out by the actual situation.
Based on its experience with customers, Luxoft, which provides development, test and evaluation services to vendors and operators, keeps coming across ten commonly held myths about NFV. This article is adapted from a presentation made by Luxoft executive director Tomy Issa, and comments he made in a subsequent conversation with Light Reading.
Issa has served stints with Allied Telesis, Nortel, and most recently with the operations of Tektronix, Fluke and Arbor Networks that are now owned by Netscout.
MYTH 1: Existing applications become "NFV ready" when the software is ported to run on a virtual machine
"We have heard that from both established vendors and startups," Issa lamented.
Porting code running on some piece of specialized hardware to run on an x86-based virtual machine (VM) is possible, but it emphatically does not automatically make that virtual app NFV ready. Luxoft said it worked with one company that tried to migrate software composed of a few million lines of code to run on a VM, only to find out that the resulting virtual appliance consumed huge amounts of CPU and memory, and was simply not scalable.
The best way to build a carrier-grade virtual network function (VNF) is to take a ground-up approach, starting with a purposefully designed modular architecture that addresses performance, scalability and other important requirements, Luxoft recommends.
Traditional carrier-grade design was rigorous, sometimes including design for manufacturing and design for testability. Successful design of VNFs should adopt those same methodologies and augment them with a new one: design for virtualization.
Design for virtualization introduces its own set of design attributes and considerations beyond those of traditional carrier-grade design.
A VNF developer who adopts this approach and these methodologies can demonstrate to prospective customers that they can scale up, evolve the VNF and grow their virtual environments with confidence that interoperability will not be an issue.
MYTH 2: Virtual appliances cannot perform as well as their physical counterparts
This is one of those things that seems like a reasonable assumption but -- even with the current state of technology -- is not true, Issa asserts.
He cites several examples, including Brocade Communications Systems Inc. (Nasdaq: BRCD), which recently demonstrated a VNF with 40 Gbit/s throughput. He adds, "I met with a number of startups with virtual appliances, such as carrier-grade NATs and BGCs, that have proven performance comparable to their counterparts in the physical appliance world."
One of the key enablers for the ability to build high-performing virtual applications is the data plane development kit (DPDK), introduced by Intel in 2012. It has become a common acceleration platform for most hypervisors, Issa said.
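DPDK's central idea is the user-space poll-mode driver: rather than taking an interrupt per packet, a dedicated core spins, pulling bursts of packets straight off the NIC's receive ring. A rough Python analogy of that receive loop (DPDK itself is a C library; the ring and packet names here are purely illustrative):

```python
import collections

rx_ring = collections.deque()  # stand-in for a NIC RX descriptor ring

def rx_burst(ring, max_burst=32):
    """Poll-mode receive: grab up to max_burst packets without blocking."""
    burst = []
    while ring and len(burst) < max_burst:
        burst.append(ring.popleft())
    return burst

# In real DPDK the NIC DMAs packets into the ring; here we fill it by hand.
rx_ring.extend(f"pkt{i}" for i in range(100))

processed = 0
while rx_ring:
    for pkt in rx_burst(rx_ring):
        processed += 1  # real code would parse/forward the packet here

print(processed)  # 100
```

The burst-oriented loop is what lets a single core sustain high packet rates: the cost of each poll is amortized over up to 32 packets instead of paying a per-packet interrupt.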
There are also several coordinated efforts. One such is Enhanced Platform Awareness (EPA), led by Telefónica and aimed at building frameworks for end-to-end services comprising multiple virtual appliances with carrier-level SLAs. The early results are "quite encouraging," according to Issa.
Issa allows that not all network functions can be virtualized. As a rule of thumb, he says, 80% of network services can be performed in an NFV environment while the remaining 20% cannot.
Myth 2 has a natural corollary in Myth 3.
MYTH 3: Mission-critical network services should still be run only on dedicated physical appliances
Even if you know that the performance of virtual appliances can match that of dedicated machines, you might still suspect that the performance of virtual appliances cannot be guaranteed.
Not necessarily so. Even if you have a service that is highly sensitive to packet delay or packet loss, there are toolkits and methodologies that can bypass much of the virtualization overhead and deliver performance close to that of a physical machine.
One widely adopted technique is non-uniform memory access (NUMA) awareness -- pinning a VNF's virtual CPUs and memory to the same NUMA node so that memory accesses stay local -- which can achieve sub-millisecond delay performance for real-time use cases.
"We can identify pools of resources based on some characteristics and group them together, group processors together based of threads, locate and eliminate delay contributors. That requires a good background in software engineering skills, but it is doable," Issa says.
Another technique is single-root I/O virtualization (SR-IOV), a PCI Express capability that allows developers to bypass the host virtual switch and connect the VM directly to the NIC. In practice, it delivers almost 100% of bare-metal performance.
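On Linux, whether a NIC supports SR-IOV can be checked through the kernel's standard sysfs attribute `sriov_totalvfs`. A minimal sketch (the helper name is ours; the attribute only appears for SR-IOV-capable PCI devices, so the list is empty on other hosts):

```python
import glob

def sriov_capable_nics():
    """Return (interface, max_virtual_functions) pairs for NICs whose
    PCI device advertises SR-IOV support (Linux sysfs only)."""
    nics = []
    for path in glob.glob("/sys/class/net/*/device/sriov_totalvfs"):
        iface = path.split("/")[4]  # .../net/<iface>/device/...
        with open(path) as f:
            total_vfs = int(f.read().strip())
        if total_vfs > 0:
            nics.append((iface, total_vfs))
    return nics

print(sriov_capable_nics())
```

Each virtual function (VF) reported here can be handed to a VM as its own PCI device, which is what lets the guest's data path skip the host's virtual switch entirely.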