Imagine two people on a long road trip. One of them is very excited because he believes he has figured out a shortcut that will cut days from the journey.
But the other traveler disagrees. The problem, she says, isn't the route. The problem is the destination. They're driving off a cliff.
That's roughly the situation facing the NFV community today. On one hand, a group of operators calling itself the Common NFVi Telco Taskforce (CNTT) -- including some of the biggest names in the industry, such as AT&T, Verizon and Telefónica -- believes it can rapidly speed up NFV adoption by standardizing on a few simple infrastructure configurations.
However, critics say the CNTT is solving the wrong problem: NFV is fundamentally broken, they argue, and fiddling with specifications and standards won't fix that.
The case for simplifying NFVi
The network functions virtualization infrastructure does what the name says: It provides the infrastructure for NFV.
Virtual Network Functions (VNFs) -- like firewalls, VPN gateways and virtual Customer Premises Equipment (vCPE) -- run on top of the NFVi.
The NFVi and VNFs can come from different vendors: so long as they comply with common standards, VNFs can run on any compliant NFVi, much as Android apps from multiple developers can run on different vendors' phones.
The Linux Foundation's Open Platform for NFV (OPNFV) project has been developing standard configurations for NFVi and got a little overenthusiastic: it developed 60. That's too many for carriers to keep track of. And so the CNTT formed in February to whittle that number down. The GSMA later came on as host of the CNTT, in collaboration with the Linux Foundation and OPNFV.
The initial proposal discussed three configurations -- one for network-intensive applications, one for compute-intensive applications, and a third for everything else, including IT workloads. The CNTT now says the number is undecided, but it will be fewer than ten.
In addition to simplifying the number of configurations, the CNTT will develop OPNFV certification for VNFs on reference architectures.
Importantly, telcos are in the driver's seat, says Heavy Reading analyst James Crawshaw, who has been following the process closely. "You've got this chaotic situation today where, with the OPNFV project and ETSI NFV, their roadmaps are being dictated by the vendor community," Crawshaw says. "The group has been taken over by vendors because the operators do not have enough knowledgeable people to take part."
The CNTT and GSMA are operator-driven, Crawshaw notes. Ten operators were on board the CNTT as of April, a roster of the industry's heavy hitters: AT&T, Bell Canada, China Mobile, Deutsche Telekom, Reliance Jio, Orange, SK Telecom, Telstra, Verizon and Vodafone.
"CNTT is finishing the job OPNFV should have done," Crawshaw says.
But Crawshaw's favorable analysis isn't universal.
Going the wrong way
Critics say the GSMA is addressing the wrong problem. The NFVi isn't the issue; the issue is that NFV itself is too monolithic. NFV was designed to replace hardware appliances for network functions, but it reproduces the hardware's problems in software.
Modern networks need cloud software to meet today's business and consumer demands for rapid deployment, scalability, flexibility and robustness. NFV needs to be rewritten as a more agile, cloud-native infrastructure suited to the demands of emerging applications like 5G, critics say.
"Monolithic and cloud don't go together," says telecoms consultant Tom Nolle, president of CIMI Corp. and a longtime critic of the current direction of NFV. "If you want something to be elastic and cloud-ready, you have to break it into components."
In an era when software is moving to lightweight, containerized architectures that break down big applications into tiny microservices, NFV depends on heavyweight applications that run in virtual machines. Because of that architecture, NFV replicates the problems of vendor-dependent, hardware-based networks, even though NFV is software, Nolle says.
The monolithic architecture handicaps carrier efforts to move to cloud networks.
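To make the contrast concrete, here is a hypothetical sketch of what the cloud-native model critics describe looks like in practice: a single network function packaged as a container and scaled declaratively, rather than bundled into a monolithic VM image. All names, images and values below are invented for illustration, not drawn from any real VNF product.

```yaml
# Hypothetical Kubernetes-style deployment of one small network
# function (a virtual firewall). Each function is an independent,
# individually scalable unit -- the opposite of a monolithic VNF.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vfirewall            # one function, not a whole appliance
spec:
  replicas: 3                # elasticity: scale this component alone
  selector:
    matchLabels:
      app: vfirewall
  template:
    metadata:
      labels:
        app: vfirewall
    spec:
      containers:
      - name: vfirewall
        image: example.com/vnf/vfirewall:1.0   # placeholder image
        resources:
          requests:
            cpu: "500m"      # small footprint per instance
            memory: 256Mi
```

In this model, scaling or upgrading the firewall touches nothing else in the network; in a monolithic VM-based VNF, the whole appliance image moves as one unit.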
Nolle says the problem is that NFV is fundamentally broken. "It's never been on the right path and it's not going to be on the right path at this point," he says. NFV is designed from the bottom up, starting with functional diagrams. Instead, telco network software -- like modern software -- should be designed from the top down, starting with requirements and building downward to individual components, Nolle says.
A better approach would be to focus on the desired outcome, which could then be implemented as virtual machines, containers or serverless cloud computing, depending on network needs. "Every process can make that decision individually," Nolle says. "But if I write MANO [Management and Network Orchestration] as a box rather than representing the logic, I have just written a monolithic piece of software that can be implemented in one specific way. I can't separate the process."
Because of the monolithic architecture, carriers have difficulty mixing and matching components from multiple vendors -- which was a major selling point of NFV when an alliance of Tier 1 carriers launched the architecture in 2012. In practice, VNFs from one vendor usually require NFVi from that same vendor.
To be sure, NFV has its success stories. For example, AT&T, Colt, Equinix and Turkcell have all recently discussed successful NFV deployments. Just this week, SES said it is implementing NFV on a satellite network for data communications.
But those successes are extremely limited, Nolle says. "If I get one guy ashore at Normandy, have I invaded?"
Next Page: The Path to the Public Cloud