10 Myths About NFV (Mostly) Dispelled

A number of assumptions and myths have sprung up around NFV during the past few years, all of which are worth unpicking.

Brian Santo, Senior editor, Test & Measurement / Components, Light Reading

July 12, 2016


The transition toward network functions virtualization (NFV) is in progress and, as with any technology transition, companies are proceeding with caution. The trick is to figure out which anticipated hazards are real, which have already been cleared, and how to avoid being frozen by hazards that never materialize.

Make no mistake, the course ahead is tricky. Evangelists for new technologies can get overly enthusiastic, envisioning everyone at the finish line before anyone has cleared all the technological barriers. And then there are the companies that take off without surveying the course ahead -- they're the ones that bumble into hurdles they could have anticipated if they'd simply prepared better.

At the same time, companies can be too hesitant to explore their options, based on suspicions that appear reasonable but aren't merited by the actual situation.

Based on its experience with customers, Luxoft, which provides development, test and evaluation services to vendors and operators, keeps coming across ten commonly held myths about NFV. This article is adapted from a presentation made by Luxoft executive director Tomy Issa, and comments he made in a subsequent conversation with Light Reading.

Issa has served stints with Allied Telesis, Nortel, and most recently with the operations of Tektronix, Fluke and Arbor Networks that are now owned by Netscout.

MYTH 1: Existing applications become "NFV ready" when the software is ported to run on a virtual machine

"We have heard that from both established vendors and startups," Issa lamented.

Porting code running on some piece of specialized hardware to run on an x86-based virtual machine (VM) is possible, but it emphatically does not automatically make that virtual app NFV ready. Luxoft said it worked with one company that tried to migrate software composed of a few million lines of code to run on a VM, only to find out that the resulting virtual appliance consumed huge amounts of CPU and memory, and was simply not scalable.

The best way to build a carrier-grade virtual network function (VNF) is to take a ground-up approach, starting with a purposefully designed modular architecture that addresses performance, scalability and other important requirements, Luxoft recommends.

Traditional carrier-grade design was rigorous, sometimes including design for manufacturing and design for testability. Successful design of VNFs should adopt those same methodologies and augment them with a new one: design for virtualization.

Design for virtualization includes a new set of design attributes such as:

  • Modularity (re-usable; optimizing memory usage)

  • Multi-threading (optimized for multi-core parallel processing)

  • Multi-Tenancy (optional: for SaaS)

Other considerations include:

  • Ease of installation and turn-up

  • Real-time performance

  • Time required to allocate resources when scaling a service up, or to release assets when scaling it down

A VNF developer who adopts this approach and these methodologies will be able to demonstrate to prospective customers that they can scale up, evolve the VNF, and grow their virtual environments with confidence that interoperability will not be an issue.
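These attributes can be made concrete with a small sketch. The snippet below is hypothetical -- the stage names and pipeline are invented for illustration, not Luxoft's design -- but it shows modularity (small reusable stages that can be recombined per deployment) and multi-threading (a worker pool fanning packets across threads):

```python
from concurrent.futures import ThreadPoolExecutor

class PacketStage:
    """A reusable processing module; deployments recombine stages as needed."""
    def process(self, packet: dict) -> dict:
        raise NotImplementedError

class NatStage(PacketStage):
    """Stand-in for real NAT logic: rewrite the source address."""
    def __init__(self, public_ip: str):
        self.public_ip = public_ip

    def process(self, packet: dict) -> dict:
        return {**packet, "src": self.public_ip}

def run_pipeline(stages, packets, workers=4):
    """Run each packet through the stage chain on a small thread pool."""
    def handle(pkt):
        for stage in stages:
            pkt = stage.process(pkt)
        return pkt
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(handle, packets))

packets = [{"src": f"10.0.0.{i}", "dst": "8.8.8.8"} for i in range(8)]
out = run_pipeline([NatStage("203.0.113.1")], packets)
print(out[0]["src"])  # 203.0.113.1
```

Because each stage is a self-contained module, a deployment can drop, reorder or scale stages independently -- the kind of ground-up modular design the article recommends, as opposed to porting a monolith onto a VM.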

MYTH 2: Virtual appliances cannot perform as well as their physical counterparts

This is one of those things that seems a reasonable assumption but -- even with the current state of technology -- is not true, Issa asserts.

He cites several examples, including Brocade Communications Systems Inc. (Nasdaq: BRCD), which recently demonstrated a VNF with 40 Gbit/s throughput. He adds, "I met with a number of startups with virtual appliances, such as carrier-grade NATs and BGCs, that have proven performances comparable to their counterparts in the physical appliance world."

One of the key enablers for building high-performing virtual applications is the Data Plane Development Kit (DPDK), introduced by Intel in 2012. It has become a common acceleration platform for most hypervisors, Issa said.
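DPDK itself is a C framework, but the core idea behind its poll-mode drivers -- pulling packets from the NIC in bursts so that fixed per-call overhead is paid once per burst rather than once per packet -- can be shown with a toy, non-DPDK illustration:

```python
# Toy illustration (not DPDK): compare how many fixed-cost transitions
# per-packet delivery needs versus burst-oriented polling.

def deliver_per_packet(n_packets):
    """Interrupt-style model: one fixed-cost transition per packet."""
    calls = 0
    for _ in range(n_packets):
        calls += 1
    return calls

def deliver_in_bursts(n_packets, burst=32):
    """Poll-mode model: one poll retrieves up to `burst` packets."""
    calls = 0
    remaining = n_packets
    while remaining > 0:
        remaining -= min(burst, remaining)
        calls += 1
    return calls

print(deliver_per_packet(1024))  # 1024 transitions for 1,024 packets
print(deliver_in_bursts(1024))   # 32 polls for the same traffic
```

Amortizing that overhead -- along with techniques such as huge pages and lockless rings in the real DPDK -- is what narrows the gap between virtual and physical appliance throughput.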

There are also several coordinated efforts. One, Enhanced Platform Awareness (EPA), led by Telefónica, is aimed at building frameworks for end-to-end services comprising multiple virtual appliances with carrier-level SLAs. The early results are "quite encouraging," according to Issa.

Issa allows that not all network functions can be virtualized. As a rule of thumb, he says, 80% of services can be performed in an NFV environment while 20% cannot.

Myth 2 has a natural corollary in Myth 3.

MYTH 3: Mission-critical network services should still be run only on dedicated physical appliances

Even if you know that the performance of virtual appliances can match that of dedicated machines, you might still suspect that the performance of virtual appliances cannot be guaranteed.

Not necessarily so. Even for a service that is highly sensitive to packet delay or packet loss, there are toolkits and methodologies that bypass much of the virtualization overhead and approach the behavior of a physical machine.

One widely adopted technique is non-uniform memory access (NUMA) awareness -- placing a workload's threads and memory on the same NUMA node -- which can achieve sub-millisecond delay performance for real-time use cases.

"We can identify pools of resources based on some characteristics and group them together, group processors together based on threads, locate and eliminate delay contributors. That requires a good background in software engineering skills, but it is doable," Issa says.

Another technique is single-root I/O virtualization (SR-IOV) for PCI Express, which allows developers to bypass the host virtual switch and connect directly to the NIC. In practice, it demonstrates close to 100% of bare-metal performance.


MYTH 4: NFV use cases have not been monetized

This argument has been undercut by plentiful recent examples of commercially successful services such as virtual evolved packet core (EPC) and virtual customer premises equipment (CPE).

Another benefit that will improve the monetization equation is on the horizon: as virtualization accelerates, operators will be able to factor out large capital expenditures.

Verizon, for example, proposes a new business model in which its vendors share the risk in the introduction of new services. If a service succeeds, everyone makes money; if it fails, everyone shares the loss. If other operators adopt the approach, it implies a transition for equipment vendors from a capital-intensive revenue model to an annuity model.

Furthermore, virtualization lends itself to usage billing models, not only for consumer services but for business-to-business services, and Luxoft recommends preparing for that transition.

"This is a real indicator that monetization of the NFV ecosystem is happening as we speak: it is changing the way the service providers will bill their users, and they want all NFV vendors to support metering end-to-end in an attempt to do reward and risk sharing," according to Issa.

Where's the evidence for that? Luxoft says its customers are asking it to support metering "and we know where these requests are coming from. So, the longer you wait to adopt this new requirement the bigger the revenue dip," he added.

He cited several examples of usage billing becoming more common:

  • Three years ago, Splunk Inc. based its pricing on the volume of data a given user has indexed.

  • Recently, a Tier 1 European operator equipped its data centers with the ability to do analytics on demand for its key enterprise customers, with billing based on usage.

  • A global test and measurement vendor lets its strategic customers set up and tear down test resources on demand from its private cloud, eliminating upfront capex for those customers who prefer to expense such services.
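The examples above share one pattern: meter consumption per customer, then bill on what was used rather than on an upfront purchase. A minimal sketch -- the class, rate and units are invented for illustration, not any vendor's billing system:

```python
from collections import defaultdict

class UsageMeter:
    """Record per-tenant consumption and bill on usage, not upfront capex."""
    def __init__(self, rate_per_gb):
        self.rate = rate_per_gb
        self.usage_gb = defaultdict(float)

    def record(self, tenant, gigabytes):
        self.usage_gb[tenant] += gigabytes

    def bill(self, tenant):
        return round(self.usage_gb[tenant] * self.rate, 2)

meter = UsageMeter(rate_per_gb=0.05)
meter.record("acme", 120.0)  # e.g. data indexed, à la the Splunk model
meter.record("acme", 80.0)
print(meter.bill("acme"))    # 10.0
```

Supporting this kind of end-to-end metering is the requirement Issa says operators are now pushing onto NFV vendors.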

      "So, what does this mean to your revenue as an equipment vendor as you transition from a capital-intensive revenue model to an annuity model? Answer: the longer you wait to adopt the new model the worse your revenue will dip to adjust ... We recommend that you jump on the bandwagon today so that your transition in revenue is graceful, thus minimizing surprises to Wall Street," Issa continued.

MYTH 5: End-to-end network visibility is not possible because some parts of the network are beyond the direct control of the service provider

While it is true that certain parts of the network are closed systems controlled by small handfuls of vendors (e.g., the 4G/LTE edge, the optical core), it is possible to get that visibility -- if not control -- through open APIs.

Developing open APIs has been a gradual process, but Luxoft believes it is inevitable, and that as it happens, more and more performance data will become available to network operators.

Examples of open APIs based on open standards include RESTful/XML, NETCONF/YANG and OpenFlow. There are also open platforms developed by the operators themselves, such as Telefónica's openVIM.

There are two types of APIs, Issa notes -- read and write.

Read APIs allow a third party to extract information from the vendor's system, but only the information the vendor makes available.

Write APIs (aka northbound interfaces) allow third parties to control and manage the vendor's system, though the vendor can still limit the amount of control.
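The read/write split can be sketched as follows. The class and field names are invented, but the pattern is the one Issa describes: read calls expose only the data the vendor whitelists, and write calls exist but are scoped to the settings the vendor chooses to open up.

```python
class VendorAPI:
    """Illustrative vendor system with vendor-limited read/write exposure."""
    READABLE = {"throughput_mbps", "packet_loss"}  # vendor-chosen visibility
    WRITABLE = {"sampling_interval"}               # northbound, but scoped

    def __init__(self):
        self._state = {"throughput_mbps": 940, "packet_loss": 0.01,
                       "sampling_interval": 60, "license_key": "XXXX"}

    def read(self, field):
        if field not in self.READABLE:
            raise PermissionError(f"{field} is not exposed")
        return self._state[field]

    def write(self, field, value):
        if field not in self.WRITABLE:
            raise PermissionError(f"{field} is not third-party controllable")
        self._state[field] = value

api = VendorAPI()
print(api.read("throughput_mbps"))  # 940
api.write("sampling_interval", 30)  # allowed: within the vendor's write scope
# api.read("license_key") would raise PermissionError -- not whitelisted
```

This is why open APIs yield visibility without necessarily yielding control: the operator sees what the read surface exposes, while the write surface stays as narrow as the vendor wants.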

MYTH 6: Virtualized networks are impossible to manage

This may seem the case now, but it is not an existential problem. It is possible to manage virtual networks, but the solution depends on the development of technologies for software-defined networking (SDN).

"NFV will need SDN; otherwise, the NFV ecosystem becomes clogged," Issa says.

The busy schedule of interops and plug-fests is evidence that the industry is moving in the right direction.

Meanwhile, OpenFlow has been in the works for quite some time and is now providing some hierarchical control and communication between layers, though Issa says it remains a question whether that will be enough.

The OSS for what Luxoft calls a purely elastic network is yet to be defined, but Issa notes there is no lack of innovation in this ecosystem. Solutions might come out of the TM Forum, or they might come in the form of disruptive OSS subsystems offered by new startups. Either way, total control of the network is going to be a necessity.


MYTH 7: Standards are necessary but not sufficient

With multiple standards groups, alliances and task forces popping up practically every quarter, and with each new proposal aimed at addressing gaps in the previous standards, it is reasonable to ask whether we will ever end up with something we can rely on.

Luxoft is quite optimistic about the future of NFV-related standards. NFV doesn't work by itself, and long-standing partnerships have formed around the current ETSI MANO framework, which identifies the NFV ecosystem and its building blocks.

Those partners work on all the reference points, and third-party SDOs (standards development organizations) -- the TM Forum, the DMTF, LSO and various SDN bodies among them -- are all building ways of orchestrating services across vendors on demand.

Consider the TV show Survivor. Even if you don't watch it, you probably know how it works: the survivors make alliances to stay in the game for a while, then some alliances break and others form, and this goes on until three survivors are left, at which point each is on their own. In the end just one remains, takes the title of sole survivor and wins a million dollars (the first runner-up gets $100K). The NFV ecosystem is similar. As Myth 1 suggests, NFV will have a major impact on the network and the way we do business, and the standards work is not contained in one SDO -- it reaches into the OSS, the BSS, the NMS, the NOC, the network edge, the network core, and data centers both public and private. We do not know which of the SDOs will survive, but every one of them is a welcome addition to the NFV ecosystem, and only time will tell which ones not only survive but emerge as winners in the next two to four years.

MYTH 8: Open source solutions are not ready for prime time

Open source is a factor in delivering on the promise of virtual appliances: the ability to implement technology for a fraction of the cost of versions based on proprietary closed platforms.

Red Hat for Linux, and Hortonworks and Cloudera Inc. for Hadoop, are among the most prominent examples that put the myth to rest. There are others, including several open source controllers being considered for commercial deployment.

The key to developing more will be adopting the types of rigorous design methodologies mentioned earlier.

Vendors worry about competing without a proprietary innovation to sell. But the "secret sauce" open source vendors have to sell is performance -- "their ability to allocate, scale, manage, replace and monitor VNF resources, on-demand," Issa contends.

"For those CIOs who do not want to be locked into one supplier, there is a growing number of new vendors of next-generation service assurance and data analytics tools that provide the necessary data to shed light on the dark areas of the network and make use of open source platforms and modules," Issa adds.

MYTH 9: You cannot do quality assurance on a virtualized network

Test functions are still often performed serially -- product QA, system test, deployment on the network, go live -- and every step takes several months. Luxoft refers to this as the waterfall model. If a network operator is virtualizing its network and still following the waterfall model, then no, you cannot do QA on a virtualized network.

Test must be built right into the process, because QA is going to have to be done continuously. If you do that, then yes, you can in fact do QA on a virtualized network.

One rationale for virtualizing is reducing cost, but another key advantage is network flexibility -- what some refer to as the "elastic network." When you release and instantiate new services all the time, every instantiation is tantamount to a new service.

Say an operator develops a VNF: that software should be tested on every server platform (Dell, Lenovo, HPE and so on) and in each network configuration. Each time the software changes, it should be retested across all those configurations -- the amount of testing to be done can quickly become unwieldy.

"Each different server is a variable. You want to do it using Linux? That's another variable. You want to put some code on top? More variables. You can quickly get to ten, 40, 160 configurations," Issa told Light Reading.

Which leads into the next myth…

MYTH 10: Multivendor interoperability testing on elastic networks is a nightmare

False from the get-go, Luxoft asserts. The solution is conformance testing.

Hence the explosion of plug-fests, interops, and organizations hosting those events. (Light Reading is affiliated with the leading industry initiative that performs independent tests, The New IP Agency.)

— Brian Santo, Senior Editor, Components, T&M, Light Reading



About the Author(s)

Brian Santo

Senior editor, Test & Measurement / Components, Light Reading

Santo joined Light Reading on September 14, 2015, with a mission to turn the test & measurement and components sectors upside down and then see what falls out, photograph the debris and then write about it in a manner befitting his vast experience. That experience includes more than nine years at video and broadband industry publication CED, where he was editor-in-chief until May 2015. He previously worked as an analyst at SNL Kagan, as Technology Editor of Cable World and held various editorial roles at Electronic Engineering Times, IEEE Spectrum and Electronic News. Santo has also made and sold bedroom furniture, which is not directly relevant to his role at Light Reading but which has already earned him the nickname 'Cribmaster.'
