
Is NFV in the Doldrums?

Dan Joe Barry
9/12/2016

In the golden age of seafaring, every sailor feared getting "caught in the doldrums." The doldrums was a colloquial term for the low-pressure areas of the Atlantic and Pacific around the equator that often experienced no wind for days or weeks at a time. If your sailing ship was caught in the doldrums, you were going nowhere.

NFV, right now, feels like it's in the doldrums. There is an air of disillusionment that NFV hasn't taken the world by storm as quickly as many had hoped. But is that assessment justified? Haven't we achieved a lot already? Aren't we making progress?

The major concern for carriers is that the business case for NFV will not hold. The first round of NFV solutions to be tested has not delivered the performance, flexibility and cost-efficiency that some carriers expected, raising doubts in some minds about whether to pursue NFV at all.

But do carriers really have a choice?

According to Tom Nolle at CIMI Group, they don't. Based on input from major carrier clients, Nolle found that the cost-per-bit delivered in current carrier networks is set to exceed the revenue-per-bit generated within the next year. There is an urgent need for an alternative solution and NFV was seen as the answer. So, what's gone wrong?
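To make that squeeze concrete, here is a minimal Python sketch of the dynamic Nolle describes. All the rates (traffic growth, revenue growth, cost decline) are invented for illustration, not carrier data:

```python
# Hypothetical illustration of the cost-per-bit vs. revenue-per-bit squeeze.
# Every figure below is an assumption for the sketch, not measured data.

def years_until_crossover(cost_per_bit, revenue_per_bit,
                          traffic_growth=0.35, revenue_growth=0.05):
    """Return how many years until cost-per-bit overtakes revenue-per-bit.

    Cost per bit falls slowly (equipment efficiency gains), while revenue
    per bit falls fast: near-flat revenue spread over rapidly growing bits.
    """
    years = 0
    while cost_per_bit < revenue_per_bit:
        cost_per_bit *= 0.90  # assume costs per bit fall ~10%/yr
        revenue_per_bit *= (1 + revenue_growth) / (1 + traffic_growth)
        years += 1
    return years

# With these made-up rates, even a 2x revenue-to-cost cushion erodes
# in about five years.
print(years_until_crossover(1.0, 2.0))  # -> 5
```

The exact crossover year is irrelevant; the direction of the curves is the point.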

Since the original 2012 NFV whitepaper, there has been a rush of activity and enthusiasm of Klondike dimensions. Everyone was staking a claim in the new NFV space, often retro-fitting existing technologies into the new NFV paradigm. Using an open approach, tremendous progress was made on proofs-of-concept, with a commendable focus on experimentation and pragmatic solutions that worked rather than on traditional specification and standardization. But in the rush to show progress, we lost the holistic view of what we were trying to achieve: namely, to deliver on NFV's promise of high-performance, flexible and cost-efficient carrier networks. All three are important, but achieving all three at the same time has proven to be a challenge.

Take the NFV infrastructure (NFVi) as a case in point. Solutions such as the Intel Open Network Platform were designed to support the NFV vision of separating hardware from software through virtualization, enabling any virtual function to be deployed anywhere in the network. Using commodity servers, a common hardware platform could be provided that could support any workload. Conceptually, this is the perfect solution. Yet the performance is not good enough: it cannot provide full throughput, and it consumes too many CPU cores just handling data, which means more CPU resources are spent moving data than actually processing it. That, in turn, means a high operational cost at the data center level, which undermines the goal of cost-efficient networks.
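A back-of-the-envelope calculation shows why the core count matters. The figures here (server size, line rate, per-core switching capacity) are hypothetical, chosen only to illustrate the shape of the problem:

```python
# Rough sketch of the "cores spent moving data" problem described above.
# Numbers are illustrative assumptions, not measurements of any platform.

def io_core_overhead(total_cores, line_rate_gbps, gbps_per_io_core):
    """Fraction of a server's cores burned on packet I/O (e.g. a software
    vSwitch) rather than on the virtual network functions themselves."""
    io_cores = line_rate_gbps / gbps_per_io_core
    return io_cores / total_cores

# E.g. a 24-core server driving 40 Gbps when one core can switch ~5 Gbps:
# 8 of 24 cores -- a third of the server -- do nothing but move packets.
print(io_core_overhead(24, 40, 5))  # -> 0.333...
```

Every core lost to packet movement is a core the carrier paid for but cannot sell as VNF capacity.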

The source of the problem was deemed to be Open vSwitch (OVS). The proposed fix was to bypass the hypervisor and OVS and bind virtual functions directly to the Network Interface Card (NIC) using technologies like PCIe Direct Attach and Single Root Input/Output Virtualization (SR-IOV). These solutions ensured higher performance, but at what cost?

By bypassing the hypervisor and tying virtual functions directly to physical NIC hardware, the virtual functions cannot be freely deployed and migrated as needed. We are basically replacing proprietary appliances with NFV appliances! This compromises one of the basic requirements of NFV -- the flexibility to deploy and migrate virtual functions when and where needed.

What is worse, such solutions also undermine the cost-efficiency that NFV was supposed to enable. One of the main reasons for using virtualization in any data center is to improve the utilization of server resources by running as many applications on as few servers as possible. This saves on space, power and cooling costs. Power and cooling alone typically account for up to 40% of total data center operational costs.
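Here is a simple sketch of that opex argument, using the article's 40% power-and-cooling share but invented server counts and costs:

```python
# Why utilization matters: a rough sketch of the consolidation argument.
# The 40% power/cooling share comes from the article; the server counts
# and per-server opex are invented for illustration.

def annual_opex(servers, opex_per_server, power_cooling_share=0.40):
    """Return (total annual opex, the power-and-cooling portion of it)."""
    total = servers * opex_per_server
    return total, total * power_cooling_share

# Consolidating 100 lightly loaded servers onto 25 well-utilized ones:
before_total, before_pc = annual_opex(100, 10_000)  # 1,000,000 / 400,000
after_total, after_pc = annual_opex(25, 10_000)     #   250,000 / 100,000
savings = before_total - after_total                # 750,000 saved per year
```

Pin a virtual function to a specific NIC on a specific server and that consolidation headroom disappears.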

What we are left with is a choice between flexibility with the Intel Open Network Platform approach or performance with SR-IOV, with neither solution providing the cost-efficiencies that carriers need to be profitable. Is it any wonder that NFV is in the doldrums?

So, what is the answer? The answer is to design solutions with NFV in mind from the beginning!

While retro-fitting existing technologies can provide a good basis for proofs-of-concept, the results are not finished products. However, we have learnt a lot from these efforts; enough to design solutions that meet NFV requirements. By designing for NFV from the beginning, it is possible to provide solutions that satisfy all of them. For example, there are now solutions that accelerate OVS with high throughput, low latency and low CPU usage, without resorting to SR-IOV. This is just one example of an emerging wave of technologies designed specifically for NFV, and exactly what is required for NFV to move to the next stage of its development.

There is no technical reason why the promise of NFV cannot be achieved. What could be lacking is the belief that this is possible. Even in the doldrums, the wind eventually filled the sails, so we must not lose heart. Carriers, in particular, must continue to press for the right solutions as they have an urgent need to establish a new operational cost-trajectory as well as the basis for new solutions. What is the alternative?

— Dan Joe Barry, VP of Positioning and Chief Evangelist, Napatech

fhaysom,
User Rank: Light Beer
9/13/2016 | 5:18:26 AM
Its a journey
Great article. 

Probably the biggest issue with NFV is that there is an expectation of an arrival and end point rather than recognising that NFV is a continuing journey. Carriers have always balanced the needs of performance, flexibility and cost. Virtualization changes the balance point between these three, but it still does not give 100% for all three. We need to recognise that carriers will continue to make choices between the performance of HW and the flexibility of virtualization; between the cost of centralised cloud and the performance of local distributed processes. These will change over time with the evolution of new cloud capabilities and new HW capabilities. The challenge for the carriers is how to provide an operational environment that combines flexible cloud network solutions with performant hardware network solutions. That solution needs to leverage not only the best of cloud IT orchestration but also the best of today's OSS.
danjoe,
User Rank: Blogger
9/12/2016 | 8:54:52 AM
Re: NFV? WIP....
Hi Virtual_Robert

I totally agree. I am a fan of Tom Nolle, who I find to be well informed and very pragmatic about these things. I often thought that, if all else fails, then at least the NFV work and the introduction of orchestration would solve the issues related to managing and delivering services in telecom networks. But, in order for this to work, you need an NFV infrastructure that doesn't impose too many restrictions, exceptions or other requirements that would make the orchestration complex and error-prone. Hence the need for a common, generic hardware platform that is software-agnostic. It is possible; you just need to start from this angle rather than from the bottom up.
Virtual_Robert,
User Rank: Moderator
9/12/2016 | 8:31:17 AM
NFV? WIP....
Nice article, Dan!

One way or another, telcos need to find a way to deliver on the business case for NFV. And the holistic view is the one that matters. If the big picture isn't changed by NFV, then we're doing it wrong. Replacing a (hardware) appliance with an (equivalent software) appliance will have limited impact if surrounding processes (from planning to assurance) aren't also altered (and almost certainly retooled).

Since you quoted Tom Nolle, I like how he summarizes it: "...the logical approach is to use an NFV-compatible approach to orchestrate infrastructure and network processes, and enhance NFV as needed to make that work."
sarcher60555,
User Rank: Lightning
9/12/2016 | 7:50:47 AM
Re: Performance trade-off
My monthly bill with Mobile Operator is down by 84% in past 3 years.  I easily use WhatsApp/Vine/Signal 5 times more than traditional GSM call.  Sent a text, last one was in 2013?  With my huge amount of data for US and EU roaming, my bill with heavy usage is down dramatically since 2012 peak levels.  More often I just seek out a WiFi Hotspot and do my calling from that point instead of the 3G 4G Networks.
danjoe,
User Rank: Blogger
9/12/2016 | 6:37:49 AM
Re: Performance trade-off
Hi James

In general, I believe that solution intelligence should be implemented in software as much as possible, as this is the most flexible and agile approach. However, there is a limit to what is acceptable in terms of CPU core usage, and that is where we need to think about alternatives such as accelerators. These can be introduced without compromising the overall flexibility and agility of the solution, if you design this in from the beginning rather than focusing solely on performance as the end-goal.
James_B_Crawshaw,
User Rank: Blogger
9/12/2016 | 6:09:28 AM
Performance trade-off
I think you've captured one of the key challenges of NFV really well, namely the additional processing overhead. Is the solution accelerators for x86 processors or is it smarter hypervisors and associated virtualisation software?