Network functions virtualization (NFV) may not be as straightforward as it seems. As with many trending technologies that have come before, the devil is in the details of deployment. In fact, many service providers today are experiencing the fallout of unforeseen consequences.
First, there are issues of production readiness. Interoperability, network element reliability, system integration, downtime levels, technology uncertainty and deployment risk are all closely associated with NFV at the moment. There are proposals in front of the standards bodies that are, at best, early incarnations of the hardware-based network elements they will replace. We are seeing functions with heavy hardware processing requirements being transformed into software loads. This shift is new to the telecom industry, and we do not yet know its final outcome.
There are also issues of traffic visibility. When a workload is virtualized, the data associated with that workload may never leave the virtual machine. This is a major headache for IT operations management (ITOM), which can no longer see how the network is performing. Even when the network is working, there can be throughput issues and bottlenecks, with no ability to fault-find at the traffic level. The data has become obfuscated, and that introduces risk into the network. After all, if you cannot see the data, you cannot actually secure it.
Not all subscribers are created equal; neither are handsets, and neither are the app stores. For example, one subscriber may be prone to downloading apps that carry malware. To contain such issues, operators need the ability to process traffic on a subscriber-by-subscriber, device-by-device or traffic-type-by-traffic-type basis.
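As a rough illustration of what such differentiated handling could look like (the rule names, subscriber IDs and policy actions below are hypothetical, not drawn from any operator's system), a first-match policy lookup keyed on subscriber, device and traffic type might be sketched as:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    subscriber_id: str
    device_type: str    # e.g. "android", "ios", "iot"
    traffic_type: str   # e.g. "video", "p2p", "web"

# Ordered policy rules: first match wins; absent fields act as wildcards.
RULES = [
    # A subscriber flagged for risky app downloads gets extra inspection.
    ({"subscriber_id": "sub-042"}, "deep-inspect"),
    # Peer-to-peer traffic from any subscriber or device is rate-limited.
    ({"traffic_type": "p2p"}, "rate-limit"),
    # IoT devices fall back to a constrained default profile.
    ({"device_type": "iot"}, "iot-profile"),
]

def select_policy(flow: Flow, default: str = "forward") -> str:
    """Return the action of the first rule whose conditions all match."""
    for conditions, policy in RULES:
        if all(getattr(flow, key) == value for key, value in conditions.items()):
            return policy
    return default
```

The point of the sketch is only that the match key spans all three dimensions at once, so a single subscriber, a single device class, or a single traffic type can each be singled out for different treatment.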
NFV can also hinder processing. Processor workload is the well-known metric used to gauge how much capacity a software platform is consuming. But when multiple layers of obfuscation, virtual machines and bursty virtual workloads all interact with one another, more capacity may be needed to reliably guarantee the operation of every workload, and of the servers processing them. Additional capacity may also be needed to absorb variability in traffic volume and in the processing each traffic type requires. With today's virtual workloads, there is little knowledge of how software-only workloads will fare when the network is put through its full paces, and unpredictable capacity demands can wreak havoc on performance.
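A toy simulation makes the headroom problem concrete (the utilization figures and burst probabilities below are illustrative assumptions, not measurements from any real deployment): when several bursty virtual workloads share a server, the capacity needed to ride out coinciding bursts is well above what an average-utilization metric suggests.

```python
import random
import statistics

random.seed(7)

def sample_utilization(mean: float, burst: float, n: int = 10_000) -> list[float]:
    """Simulate a bursty VM: baseline load plus a burst in ~10% of intervals."""
    return [mean + (burst if random.random() < 0.1 else 0.0) for _ in range(n)]

# Three co-located virtual workloads (hypothetical load profiles).
vms = [sample_utilization(0.20, 0.40),
       sample_utilization(0.25, 0.35),
       sample_utilization(0.15, 0.50)]

totals = [sum(t) for t in zip(*vms)]           # combined load per interval
avg = statistics.mean(totals)                  # what an averaged CPU metric shows
p99 = statistics.quantiles(totals, n=100)[98]  # capacity needed to absorb bursts

print(f"average combined load: {avg:.2f}")
print(f"99th-percentile load:  {p99:.2f}")
```

Under these assumptions the 99th-percentile load lands well above the average, which is the gap a processor-workload average hides: provisioning to the mean leaves the server short whenever bursts coincide.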
Another area of unforeseen consequence is the fact that software-defined networking (SDN) and NFV are not enough. Though these approaches are helping to reduce the capital and operational costs of running a modern service provider network, there is still the issue of subscriber cost. Not all subscribers are created equal -- why would an operator spend the same network resources on a subscriber with very low average revenue per user (ARPU) as on one with above-average ARPU? Obviously, more network resources should be spent on the higher-ARPU subscribers. Now that SDN and NFV have created an ability to reduce network costs to an absolute minimum, the next battleground is optimizing the variable cost of the subscriber.
Pervasive network and traffic visibility is critical to understanding how the network is behaving and, more importantly, how subscribers are interacting with it. With deep insight into subscribers and their content, operators can make the right decisions to appropriately dimension their networks, plan for outages and manage upgrade cycles.
By embracing these principles, operators can not only de-risk their rollout of new technologies, but also gain agility, competitive advantage and new avenues of cost reduction. Although NFV holds great promise, it is the start of the journey, not the end.
— Andy Huckridge, Director of Service Provider Solutions & SME, Gigamon Systems LLC