Anschutz: Next-Gen NFV Actually Saves Opex

First-gen virtual network functions were still vertically integrated, but combining next-gen disaggregation with open source will pay off.

October 2, 2017

DENVER -- NFV & CARRIER SDN -- The second generation of virtualized network functions goes much further in unbundling common elements, and that is enabling operations savings and efficiencies, AT&T's Tom Anschutz said at last week's conference. AT&T is pushing the envelope on that front, developing the open hardware modules needed to bring this new level of functionality into the central office and access network.

Most recently, according to Anschutz, who is a distinguished member of technical staff at AT&T Inc. (NYSE: T) Labs, the network operator earned Open Compute Project approval for its XGS-PON optical line terminal, and it is now working on similar options for G.fast technology.

The shout-out to open hardware was significant, but Anschutz's keynote was referenced repeatedly throughout Thursday's program by other service providers for having accurately called out carriers' problems with early VNFs -- namely, that they were essentially just virtualized versions of the integrated hardware-software combinations used before.

That was fine for getting to market quickly with some savings -- hardware appliances were replaced by commercial off-the-shelf servers, for instance -- but it didn't achieve the operations efficiencies and savings AT&T is seeking, because this approach failed to break functions apart into reusable pieces, Anschutz said. Instead, makers of physical network functions simply ported their software onto a COTS box, maintaining the vertical integration and the ties to vendor-specific element management and network management systems, and to traditional OSSs.

Figure 1: AT&T's Tom Anschutz presents a keynote at last week's NFV & Carrier SDN event in Denver.

As a result, as other service providers agreed, NFV and SDN aren't yet delivering on their real promise.

"I agree 100% with what [Anschutz] said -- that first generation of VNFs that have come out haven't driven the real value-add for SPs," said Jeff Brown, director, product management and marketing for Windstream, in an afternoon presentation. He was one of several service provider speakers to reference Anschutz's detailed talk.

Disaggregation needed
The AT&T DMTS cited network gateways as an example of a first-generation VNF. While most of what these devices do is common, regardless of whether the service is broadband wireline or wireless, the early virtualized gateways "are still really specialized and not interchangeable, even though most of the functions in there are really repeated from one to the next," Anschutz said. Like other first-generation VNFs, the gateways are still managed by existing element management and network management systems, which remain vendor-specific.

"So just doing the virtualization without re-architecting the software is not helping all that much," he said. "We are doing the initial investment and we are looking for a payout that is going to allow us to get simpler, more easy-to-compose software but in this first step we've added the complexity of managing virtual infrastructure [and] we haven’t pulled out the complexity of those vertically integrated components and systems. "

The VNFs are still very stateful and managed like pets (versus cattle), he added. In addition, there is a lack of a common framework for telecom workloads, Anschutz said, even though many of the forwarding and control plane functions are quite common.

"There is this need for frameworks within the telecom industry, that provide a meeting point between NFV infrastructure and VNF software," he said. That would allow innovation on the hardware underneath, separate of the software evolution at higher layers.

What AT&T is doing, using open source hardware specs and software as much as possible, is shifting away from integrated blocks of software toward viewing network appliances as peripherals, much like printers are to PCs. The intelligence to control those peripherals moves into the cloud, where it can be consumed on demand without the operations expense of pre-provisioning, Anschutz explained.

For example, the network gateway devices would consume virtualized common elements such as routing and switching, and combine those with more service-specific subscriber management software, as needed, he explained.

"So subscriber management would change between wireless and wireline but routing and forwarding are routing and forwarding no matter what you are applying it to," Anschutz said. "Because these are then simpler patterns, I can have them as common subsystems. And those patterns I am going to design to be cloud-native, which means they are going to be elastic, they are not going to have just one chunk on a server with one set of performance characteristics but rather the function is going to understand the load it is trying to provide and it will either consume more resources or curtail, and consume less resources, depending on the current load."

Stronger CORDs
Another example Anschutz used involved AT&T's work with CORD and ON.lab in distributing its IT infrastructure closer to the edge of the network. As part of that work, the operator broke down an optical line terminal -- once a vertically integrated vendor-specific element -- into a peripheral device, mapped onto existing servers and server functions.

"The standard high-volume servers are able to support the management code and control plane, the sort of native function of the box," he said. For the common backplane, into which line cards will be plugged, AT&T used an Ethernet interface into a common fabric because that’s something its operations already understands. The one key thing missing was the physical layer -- the IT world hadn't devised any MAC devices for passive optical networks, so that's the thing AT&T created on its own, using open spec hardware.

"When I look at those boxes, I can start making new assumptions, I don't think of them as vertically integrated boxes," Anschutz said. "These are brand new elements-- they are intended to work with the cloud, they depend on the cloud, all by themselves, they are only a partial solution. This where I enter the term peripherals. These are no longer OLTs -- this is a peripheral to the cloud -- it is something you attach to NFV infrastructure that extends it to be able to do a new thing, just like you'd attach a printer to a computer to allow a computer to print."

As a peripheral, this new OLT device doesn't require firmware; it attaches to the cloud to find its operating system and configuration, he added. This approach also allows things to scale out as needed -- starting off with a high degree of over-subscription, for example, and adding resources as more subscribers come on board to deliver a higher level of support. The intelligence to do all this is spread across multiple servers, so that if any one fails, the service isn't impacted.
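A rough sketch of that firmware-free attach sequence appears below, with the cloud controller simulated in-process -- every name and field here is an invented placeholder, not AT&T's actual interface:

```python
# Hypothetical sketch of the "peripheral" boot flow described above: the open
# OLT ships with no firmware of its own; on power-up it announces itself to
# the cloud, which answers with the software image to netboot and the initial
# configuration. The controller is simulated in-process; all names are invented.

from dataclasses import dataclass


@dataclass
class BootPayload:
    image_url: str     # software the peripheral should netboot
    config: dict       # initial configuration pushed down from the cloud


def cloud_attach(device_id: str, device_type: str) -> BootPayload:
    """Stand-in for the cloud controller answering a peripheral's announcement."""
    return BootPayload(
        image_url=f"https://controller.example.net/images/{device_type}-latest",
        config={"fabric_uplink": "eth0", "managed_by": "volt-controller"},
    )


def power_up(device_id: str, device_type: str = "xgs-pon-olt") -> BootPayload:
    """The device holds no persistent firmware or config; it asks the cloud."""
    return cloud_attach(device_id, device_type)   # announce itself, fetch payload


print(power_up("olt-serial-0042"))
```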

"When the subscriber comes, there is no opex already pre-provisioning, a customer shows up and their authentication shows what they are entitled to and the system assembles that in real time, basically," Anschutz said. There is no technician required to enter serial numbers for premises devices because they can be discovered and the device attached in a plug and play manner, all of which cuts operations.

With this second generation of NFV architecture, AT&T is moving closer to realizing the operational savings virtualization was supposed to deliver, he said. That was likely the part of the detailed presentation that had other operators talking for hours afterward.

— Carol Wilson, Editor-at-Large, Light Reading

