
Herding cats: The quest for 'standard' NFVi

I shouldn't have been surprised to hear a discussion about the challenges of delivering a standard NFVi during the recent Cloud Native World event, but I sort of was. Not that I expected the issue to have been resolved in the year since I left, but that it still looms so large indicates how significant a beast it is.

As most will be aware, one of the main drivers for network functions virtualization way back in 2012 was to simplify and standardize telecom infrastructure. Telecom operators were frustrated by vendor lock-in and the seemingly 1:1 relationship between network hardware and the applications and services that ran on it. More appealing from a capital and operational cost standpoint was to have a common platform to run all network functions. With virtualization and cloudification also came disaggregation. To a CSP, on paper, it sounds great: break up the stack, unleash innovation, light a fire under sleepy incumbents, put downward pressure on pricing – everyone wins.

Not exactly. At least not yet.

It turns out there was a very good reason to have vertical integration in the telecom equipment market. If you control each layer of the stack, you can optimize the final service or solution. When you have one company designing the chips, someone else doing the operating system, another developing applications, and yet one more trying to manage all of these products, the chances of everything working together seamlessly are slim to none.

The Intel/Red Hat presentation discussed work being done to connect hardware drivers all the way up to the application. This, of course, pulls us back towards the specific-hardware-for-specific-applications conundrum that got us here in the first place – which is why having an intelligent abstraction mechanism is so important. More reassuring was the diagram depicting the interrelationship between the applications ("exploit the cloud"), infrastructure ("facilitate delivery") and lifecycle management ("enable rapid innovation"). It's this last one that seemed to be an afterthought in the early days of NFV – and indeed was most likely the main reason deployments happened more slowly than anticipated.

Courtesy of Intel and Red Hat

In what I think is a sign of progress, rather than multiple companies working at each layer of the stack, we now have multiple open source projects working at (mostly) one layer of the stack. I should add that the notion that one combination of memory, accelerators, encryption, etc. could support the huge variety of telco workloads was never realistic – and believe me when I say that was a common assumption in the early days. I asked about it in a host of CSP surveys, and by the end, most respondents realized that a few platforms would be needed. More on this a bit later.

Having a standard platform is not only about lower acquisition costs. Operators need common infrastructure to simplify operations – including backup and recovery. They want to limit possible permutations to minimize time to determine if the infrastructure can support a given application. Also, they want to be able to flex capacity up and down dynamically based on traffic. If there is going to be a chance of all of this happening automatically in (near) real-time, minimizing variability in the infrastructure will be crucial.

The Linux Foundation's Open Platform for NFV (OPNFV) was the industry's first attempt at developing a "standard" platform for NFV. The team made a valiant effort, but over the years it has emerged that what OPNFV really excels at is the automated testing and CI/CD tasks required to coordinate releases from a variety of open source projects. It's not OPNFV's fault that it was trying to hit a moving target – what with VNFs having to work alongside CNFs in a world defined by non-telco hyperscalers. And as mentioned above, getting lifecycle management right is critical for cloudification to deliver on all its purported benefits, so its work remains just as, if not more, relevant than before.

In part, that is. In 2019, Linux Foundation Networking and the GSMA joined forces to form a new group to tackle infrastructure: the Cloud iNfrastructure Telco Taskforce (CNTT). (In May 2020, the group changed its official name from "Common NFVI Telco Task Force," which you may see referenced in prior materials – good call. Even though I used it in this article's title, I concede that NFVI sounds as dated as my Yahoo email address.) According to its white paper, the "mission and objective of the Cloud iNfrastructure Telco Taskforce (CNTT) is to develop a framework for standardizing Cloud Infrastructure to increase interoperability between the virtualized workloads and the underlying infrastructure, and to allow for validation and conformance testing."

What this group delivers is a Reference Model (RM) and a limited number of Reference Architectures (RAs), each with its own Reference Implementation (RI) and Reference Conformance (RC), produced in collaboration with members across the value chain, including software and telecom equipment providers, systems integrators and telecom operators. It published its initial common Reference Model and initial Reference Architecture in September 2019, and as of April 2020, 12 NFVi products had been badged as "NFVI" (and one as "VNF") by OPNFV's Verification Program (OVP).

The speakers from Verizon and Vodafone also discussed CNTT at length during their presentations at Cloud Native World. Having two of the world's largest operators openly and visibly supporting the project is a strongly positive indicator that it will continue to gain traction – and yes, there are other operators on board as well.

Unherd-able
'Gracie,' Phil Harvey's auxiliary backup cat
"Gracie," Phil Harvey's auxiliary backup cat

While all of this sounds very encouraging, I remain only cautiously optimistic. I chose the phrase "herding cats" deliberately (and not just so Phil could post a cute cat photo). Because so many perspectives need to be considered and accounted for, the complexity of "standardizing" anything related to a telecom environment is daunting and not for the faint of heart. My sense is that the current efforts have taken to heart lessons learned in the early days of the virtualization journey, and are focusing on the right issues. It was always a stretch to think there would be a single right answer. If the industry can get to even a handful of platforms that can cost-effectively deliver new services and be managed at a reasonable cost, it will have succeeded just the same.

Roz Roseboro, Consulting Analyst, Light Reading. Roz is a former Heavy Reading analyst who covered the telecom market for nearly 20 years. She's currently a Graduate Teaching Assistant at Northern Michigan University.
