In the golden age of seafaring, every sailor feared getting "caught in the doldrums." The doldrums was a colloquial term for the low-pressure areas of the Atlantic and Pacific around the equator that could go without wind for days or even weeks. If your sailing ship was caught in the doldrums, you were going nowhere.
NFV, right now, feels like it's in the doldrums. There is an air of disillusionment that NFV hasn't taken the world by storm as quickly as many had hoped. But is that assessment justified? Haven't we achieved a lot already? Aren't we making progress?
The major concern for carriers is that the business case for NFV will not hold. The first round of NFV solutions to be tested has not delivered the performance, flexibility and cost-efficiency that carriers expected, and this has raised doubts about whether to pursue NFV at all.
But do carriers really have a choice?
According to Tom Nolle at CIMI Group, they don't. Based on input from major carrier clients, Nolle found that the cost-per-bit delivered in current carrier networks is set to exceed the revenue-per-bit generated within the next year. There is an urgent need for an alternative solution and NFV was seen as the answer. So, what's gone wrong?
Since the original 2012 NFV whitepaper, there has been a rush of activity and enthusiasm of Klondike dimensions. Everyone was staking a claim in the new NFV space, often retro-fitting existing technologies into the new NFV paradigm. Using an open approach, tremendous progress was made on proof-of-concepts, with a commendable focus on experimentation and pragmatic solutions that worked rather than traditional specification and standardization. But in the rush to show progress, we lost the holistic view of what we were trying to achieve: delivering on NFV's promise of high-performance, flexible and cost-efficient carrier networks. All three are important, but achieving all three at the same time has proven to be a challenge.
Take the NFV infrastructure (NFVi) as a case in point. Solutions such as the Intel Open Network Platform were designed to support the NFV vision of separating hardware from software through virtualization, enabling any virtual function to be deployed anywhere in the network. Using commodity servers, a common hardware platform could be provided to support any workload. Conceptually, this is the perfect solution. Yet the performance is not good enough: it cannot sustain full throughput, and it consumes too many CPU cores just handling data, which means we spend more CPU resources moving data than actually processing it. That translates into high operational cost at the data center level, undermining the need for cost-efficient networks.
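The core-count problem can be made concrete with some back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions, not measurements of any particular platform:

```python
import math

# Rough, illustrative arithmetic: if the soft switch needs a fixed
# number of cores per 10 Gbps of traffic, how much of a server is
# actually left for VNF workloads?

def cores_left_for_vnfs(total_cores: int,
                        traffic_gbps: float,
                        switching_cores_per_10g: float) -> int:
    """Cores remaining after the virtual switch takes its share."""
    switching_cores = math.ceil(traffic_gbps / 10 * switching_cores_per_10g)
    return max(total_cores - switching_cores, 0)

# Hypothetical 24-core server terminating 40 Gbps, with the switch
# costing 2 cores per 10 Gbps: 8 of 24 cores (a third of the server)
# are spent moving packets rather than processing them.
remaining = cores_left_for_vnfs(24, 40, 2)
print(remaining)  # 16
```

The exact per-10G cost varies by platform and packet size; the point is only that switching overhead scales with line rate and eats directly into the capacity that was supposed to run revenue-generating functions.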
The source of the problem was deemed to be Open vSwitch (OVS). The proposed fix was to bypass the hypervisor and OVS and bind virtual functions directly to the Network Interface Card (NIC) using technologies like PCIe Direct Attach and Single Root I/O Virtualization (SR-IOV). These solutions ensured higher performance, but at what cost?
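To see what "binding a function directly to the NIC" involves in practice, here is a sketch of how SR-IOV virtual functions are typically carved out through the Linux sysfs interface. The PCI address is a placeholder, and device support varies by NIC and driver:

```shell
# Illustrative only: enable 4 SR-IOV virtual functions (VFs) on a NIC.
# The PCI address 0000:03:00.0 is a placeholder for your device.

# Check how many VFs the device supports
cat /sys/bus/pci/devices/0000:03:00.0/sriov_totalvfs

# Carve out 4 VFs; each appears as its own PCI device that can be
# passed straight through to a VM, bypassing the hypervisor's switch
echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs

# The VFs now show up as separate PCI network devices
lspci | grep -i "virtual function"
```

Note what this implies: each VF is a slice of one specific physical NIC on one specific host, which is exactly why a VM attached to it cannot simply be migrated elsewhere.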
By bypassing the hypervisor and tying virtual functions directly to physical NIC hardware, the virtual functions cannot be freely deployed and migrated as needed. We are basically replacing proprietary appliances with NFV appliances! This compromises one of the basic requirements of NFV -- the flexibility to deploy and migrate virtual functions when and where needed.
What is worse, such solutions also undermine the cost-efficiency that NFV was supposed to enable. One of the main reasons for using virtualization in any data center is to improve the utilization of server resources by running as many applications on as few servers as possible. This saves on space, power and cooling costs. Power and cooling alone typically account for up to 40% of total data center operational costs.
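The consolidation argument can also be put in rough numbers. The figures below (server wattage, electricity price, a cooling overhead chosen so that power and cooling dominate operating cost, and the server counts themselves) are illustrative assumptions, not vendor data:

```python
# Back-of-the-envelope consolidation savings.

def annual_power_cost(servers: int,
                      watts_per_server: float = 400,
                      usd_per_kwh: float = 0.10,
                      cooling_overhead: float = 1.0) -> float:
    """Yearly power-plus-cooling cost in USD. cooling_overhead=1.0
    doubles the electrical draw to account for cooling."""
    kw = servers * watts_per_server / 1000 * (1 + cooling_overhead)
    return kw * 24 * 365 * usd_per_kwh

# Pinning VNFs to specific NICs ties them to specific hosts. Suppose
# that forces 100 lightly loaded servers where free placement and
# migration would pack the same workload onto 40.
pinned = annual_power_cost(100)
packed = annual_power_cost(40)
print(f"saved per year: ${pinned - packed:,.0f}")  # saved per year: $42,048
```

Even with these modest assumptions, losing the ability to consolidate costs tens of thousands of dollars per year per hundred servers, and the gap grows with scale.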
What we are left with is a choice between flexibility with the Intel Open Network Platform approach or performance with SR-IOV, with neither solution providing the cost-efficiencies that carriers need to be profitable. Is it any wonder that NFV is in the doldrums?
So, what is the answer? The answer is to design solutions with NFV in mind from the beginning!
While retro-fitting existing technologies can provide a good basis for proof-of-concepts, the results are not finished products. However, we have learnt a lot from these efforts, and enough to design solutions that meet NFV requirements from the beginning. For example, there are now solutions that accelerate OVS, delivering high throughput and low latency with low CPU usage, without resorting to SR-IOV. This is just one example of a wave of new technology solutions designed specifically for NFV that are emerging and that are required for NFV to move to the next stage in its development.
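The article names no specific acceleration technology, but one widely documented example of speeding up OVS without SR-IOV is running it with a DPDK userspace datapath, so packets never traverse the kernel networking stack. The commands below follow the standard OVS-DPDK documentation; port names, core masks and the PCI address are placeholders:

```shell
# Illustrative sketch: Open vSwitch with a DPDK userspace datapath,
# keeping packet switching in software (and VMs freely migratable)
# while avoiding the kernel path. Requires an OVS build with DPDK.

# Initialize DPDK and dedicate poll-mode-driver cores to switching
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Create a bridge using the userspace (netdev) datapath
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev

# Attach the physical port via DPDK instead of the kernel driver
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 type=dpdk \
    options:dpdk-devargs=0000:03:00.0
```

Unlike SR-IOV passthrough, the VM still attaches to a virtual switch port rather than to a slice of physical hardware, so the hypervisor retains control over placement and migration.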
There is no technical reason why the promise of NFV cannot be achieved. What could be lacking is the belief that this is possible. Even in the doldrums, the wind eventually filled the sails, so we must not lose heart. Carriers, in particular, must continue to press for the right solutions, as they have an urgent need to establish a new operational cost trajectory as well as the basis for new services. What is the alternative?
— Dan Joe Barry, VP of Positioning and Chief Evangelist, Napatech