What a difference a year makes! At last year's SDN and OpenFlow World Congress in The Hague, there were a number of presentations highlighting the fact that NFV was in the doldrums or trough of disillusionment, including my own presentation.
Since then, there have been some major changes that may not be immediately obvious to everyone, but that indicate we are starting to move in the right direction. Presentations at this year's event, as well as at Light Reading's "NFV & Carrier SDN" event, clearly show that some carriers are making progress.
So, what's changed?
If we take a look at how the leaders in the NFV market are making progress, it is not because they have strictly adhered to the original 2012 blueprint. They have been willing to take another look and go back to the original well of inspiration, namely cloud service providers, and learn from their experiences.
The original 2012 blueprint for NFV was inspired by what the cloud service providers were doing at that time with COTS hardware and virtualization. But since then, cloud service providers have continued to evolve and innovate to address some of the same challenges faced by those trying to implement NFV over the last five years. Leading carriers and vendors have been taking note.
One of the major areas of concern last year was orchestration. Telefónica's OpenMANO and AT&T's ECOMP (now part of ONAP), together with their influence on the MEF Lifecycle Services Orchestration (LSO) framework, have now provided leading examples of orchestration implementations. This is a major achievement, as orchestration and the associated concept of automation are new to telecom environments. Other carriers will rely on the MEF LSO framework, ONAP and the experiences of leading carriers like Telefónica and AT&T (not to mention their vendors) to plan their own implementations.
One of the more direct influences from the cloud services world on NFV thinking has been DevOps, decomposition, microservices and the associated use of containers. "Cloud-native solutions" is the term being used to describe software functionality implemented along these lines, with vendors like Intel and Nokia promoting solutions. In addition, there are discussions on intent-based networking and even Amazon Lambda (aka serverless computing) or functional programming. In other words, we are being inspired, once again, by the experiences of cloud service providers to rethink how NFV should be implemented. And that's a good thing.
But what about the NFV infrastructure? Where is the inspiration from the cloud service providers in NFVi thinking? Unfortunately, many are still doggedly adhering to the 2012 NFV blueprint despite the fact that there are clear issues with regard to performance, flexibility and manageability in NFV infrastructure implementations today. The blueprint I am referring to is the use of standard computing platforms based on generic CPUs and standard NICs, together with open virtualization software.
During the conferences, it was easy to tell apart those who have experience deploying NFV infrastructure and those who do not. Those who do not believe that software decomposition and microservices will be enough to assure performance and can compensate for any shortcomings of standard computing platforms. Those with experience know that this will not work.
While I admit that the major challenges in NFV are at the orchestration and VNF layers and enabling automation is a top priority, I would still contend that without the right NFV infrastructure none of the above is practically feasible.
So, can we get any inspiration from cloud service providers? Unsurprisingly, we can.