AT&T Describes Next Steps for Network Virtualization

With wireless traffic growing a staggering 100,000% from 2007 to 2014, AT&T has to adapt.

Mitch Wagner, Executive Editor, Light Reading

July 6, 2015

John Donovan, head of AT&T's global network, recently described the company's next steps in its ambitious plans to achieve 75% virtualization and software control on its worldwide network, driven by a staggering 100,000% growth in wireless traffic between 2007 and 2014.

"While some competitors are still figuring out their SDN strategy, I want to talk about the next two phases of our SDN deployment," the AT&T Inc. (NYSE: T) senior executive vice president, technology and operations, said at a keynote at last month's Open Networking Summit.

The first phase is virtualizing AT&T's network functions, including the mobile packet core, session border controllers, load balancers, routers and firewalls. AT&T is taking these steps first to prove "that our vision and path are the right one," Donovan said.

Figure 1: Mic Drop. AT&T's John Donovan wraps up.

The second phase is disaggregation, Donovan said. "With this phase, we disentangle all the components in the system," strip them down to core components and rearchitect them for the cloud. "Abstracting is like pixie dust. It lifts everything. You don't even know what you can do until you can get it up."

The first target is AT&T's gigabit passive optical network (GPON) optical line terminal (OLT) equipment in central offices for residential and business customers. These components are part of AT&T's GigaPower service. (See AT&T Testing Virtualized GPON and AT&T: Building Gigabit Connections Is Just the First Step.)

Open source commitment
The equipment used for GigaPower is complex and expensive, which constrains deployment, Donovan said. "This is exactly the area where SDN components can really shine." Virtualizing the system increases flexibility, reduces hardware consumption and enables faster scaling by putting more functions in a single box.

AT&T expects prototypes shortly, with trials and deployments scheduled for next year, Donovan said.

The carrier is creating open specifications for the equipment so that any original design manufacturer (ODM) can build it, Donovan said.

Indeed, all of AT&T's network development is based on APIs, Donovan said. The company has 4,600 APIs, although many aren't for public consumption.

AT&T has a strong commitment to open source. "One tenet of open source is that you don't just take code. You contribute it as well," Donovan said. (See AT&T Makes Case for Open Source Sharing.)

Use secret sauce sparingly
At the ONS conference, AT&T participated in a proof-of-concept for Central Office Re-architected as Data Center (CORD), along with chip vendors PMC-Sierra Inc. and Sckipio Technologies and the ONOS project, which is led by ON.Lab. The PoC encompasses central office equipment (GPON and G.fast) and customer premises equipment (CPE). (See AT&T to Show Off Next-Gen Central Office.)

AT&T has developed a software tool that configures equipment using YANG, a data modeling language for network configuration, and has released the tool into open source through the OpenDaylight Project, Donovan said.
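
Donovan didn't describe the tool's internals, but for readers unfamiliar with YANG-driven configuration, here is a minimal sketch of the pattern such a tool automates: a configuration fragment modeled on the standard ietf-interfaces YANG module (RFC 7223) is pushed to a device over NETCONF using the open source ncclient Python library. The device address, credentials and interface name are placeholders, and this is a generic illustration, not AT&T's tool.

```python
from ncclient import manager

# Configuration fragment following the IETF ietf-interfaces YANG model (RFC 7223).
CONFIG = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0/1</name>
      <description>Customer uplink (example)</description>
      <enabled>true</enabled>
    </interface>
  </interfaces>
</config>
"""

# Placeholder device address and credentials -- purely illustrative.
with manager.connect(host="192.0.2.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as session:
    session.edit_config(target="running", config=CONFIG)
```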

The company also contributed to the OPNFV Arno build, an open source NFV platform. (See OPNFV Formally Issues First Release, Arno.)

Making the transition to open source requires strategic thinking, Donovan said. Some code should remain proprietary -- but not a lot. "You have to define internally what is going to be open source and what will be your secret sauce," he said. "That sauce should be Tabasco size, not in gallon jars." (Editor's Note: I buy Tapatio Hot Sauce by the quart. But I get that Donovan is talking about normal people here.)

Technology that's heavily resource-intensive, takes a long time to develop and doesn't produce a lot of code is a candidate for development as a proprietary solution in a standards-based process. Other areas are better for open source. "This is an art, not a science. Everyone has a secret sauce," Donovan said.

Currently, 5% of AT&T's code is open source, with a target "north of 50%," Donovan said.

Open source is a different business model from how AT&T is used to working, and "fundamentally changes the relationship we have with suppliers," Donovan said. AT&T collaborates with suppliers and other outside organizations now far more than it has previously.

AT&T has already introduced SDN-based products, including Network On Demand, which went from idea to trials in six months, Donovan said. Network On Demand allows customers to increase or decrease network bandwidth as needed in real time. "That means they can use just what they need when they need it," Donovan said. Trialed initially in Austin, Network On Demand is available today in more than 100 markets "and it's getting rave reviews." (See SDN Powers AT&T's Rapid On-Demand Expansion.)
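
AT&T has not published the programming interface behind Network On Demand, but the capability Donovan describes maps naturally onto a simple self-service API. The Python sketch below is hypothetical -- the endpoint, port identifier and field names are invented for illustration -- and only shows what dialing bandwidth up and down might look like from the customer's side.

```python
import requests

# Hypothetical endpoint and schema -- not a published AT&T API.
BASE_URL = "https://api.example.com/network-on-demand/v1"
HEADERS = {"Authorization": "Bearer <access-token>"}

def set_port_bandwidth(port_id: str, committed_mbps: int) -> dict:
    """Request a new committed rate for an Ethernet port, applied in near real time."""
    resp = requests.put(f"{BASE_URL}/ports/{port_id}/bandwidth",
                        json={"committed_mbps": committed_mbps},
                        headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example: scale a port up for a nightly backup window, then back down afterward.
set_port_bandwidth("port-austin-0042", 500)
set_port_bandwidth("port-austin-0042", 100)
```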

'Audacious' changes

Donovan reviewed how the transition to open source and network virtualization has changed AT&T. "The term 'audacious' is how one blogger described it," Donovan said, adding that he was "gratified" by that description. (See AT&T Reveals Audacious SDN Plans.)

"AT&T was a very different company in 2008," Donovan said. Development cycle times were measured "literally in years, not in the months or even weeks that you find today."

AT&T was driven to change by mounting demand. "We had to approach innovation like this to meet the significant challenge we are facing," Donovan said. The network saw 100,000% growth in wireless traffic between 2007 and 2014, and mobile data traffic surpassed mobile voice traffic in 2010.

Smartphones drove that demand, but video was the biggest factor, Donovan said. Video now accounts for the majority of network traffic, and total video traffic doubled in 2014. Wireline video is also booming. Meanwhile, Ethernet is taking off, and AT&T is migrating away from Time-Division Multiplexing (TDM).

Why not 100%?
"Throwing equipment at the problem isn't the answer," Donovan said. "It's simply not sustainable."

Previously, the industry built networks using a "specify, standardize and implement" approach, but that has proven too slow and cumbersome, Donovan said. Standards are important, particularly for regulated industries like aviation and medicine, but communications providers need a more agile approach.

"Ultimately, our vision is based on two key concepts. First is SDN, next is NFV," Donovan said. "Instead of relying on specialized hardware for network functions, these concepts transfer heavy lifting to software."

AT&T has widely discussed plans to virtualize and control more than 75% of its network using cloud and SDN. In May, AT&T noted that 5% of that work will be done by the end of the year, relying extensively on open software. (See AT&T Touts Its First Virtualized Functions.)

"When we get to 75%, an astute question will be why not 100%?" Donovan said. But some apps are simply unsuitable for migrating to virtualization.

'Mass march'
Last year, AT&T merged its data center and network, and moved 60% of IT apps to the cloud, with a target of 100%. Moving IT to the cloud was educational, Donovan said. "If you take a 40-year-old mainframe app and move it to the cloud, you can get a lot of experience for virtualizing network functions."

This year, almost two thirds of AT&T's multi-tenant apps will be in the cloud, Donovan said.

Re-educating networking staff is key to the transition to virtualization and open source -- and it's a big job, Donovan said. Some 96,000 employees have registered for courses, and 56,000 people have earned badges in addition to their degrees. AT&T has also changed its IT systems, compensation and employee ratings to match. "Our mass march is well on its way, as evidenced by those numbers."

Unsustainable economics
One questioner picked up on AT&T's plans to transition away from TDM: how can AT&T guarantee the quality of service needed for applications such as telemedicine or remote surgery? TDM provides the quality of service that such applications need.

"It's a thoughtful question -- I'm going to start by giving you a flippant answer," Donovan quipped. He said TDM requires overprovisioning the network to an extent that's economically unsustainable. SDN and NFV allow network providers to make much more efficient usage of their networks. "If I give you hardware reliability and the maximum load I put on the network is 40%, and someone on an OTT service can offer 90% usage, it doesn't matter because I won't be around to sell the service," Donovan said.

People requiring TDM-level reliability won't pay to achieve it in hardware. "You have to solve the problem in software, because it takes the network utilization up," Donovan said.
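
Donovan's 40% versus 90% comparison is, at bottom, a claim about cost per delivered bit. The back-of-the-envelope calculation below uses an arbitrary assumed cost per unit of installed capacity (not an AT&T figure) to show how large that gap becomes.

```python
# Back-of-the-envelope version of Donovan's utilization argument.
# Assumption (not an AT&T figure): both operators pay the same per Gbps of installed capacity.
COST_PER_GBPS_INSTALLED = 100.0  # arbitrary cost units

scenarios = {
    "Hardware-reliable, overprovisioned (40% peak load)": 0.40,
    "Software-managed, statistically shared (90% load)": 0.90,
}

for label, utilization in scenarios.items():
    cost_per_delivered_gbps = COST_PER_GBPS_INSTALLED / utilization
    print(f"{label}: {cost_per_delivered_gbps:.0f} units per Gbps actually sold")

# Result: 250 units vs. roughly 111 units -- about a 2.25x cost disadvantage for
# the overprovisioned network before any other differences are counted.
```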


— Mitch Wagner, West Coast Bureau Chief, Light Reading. Got a tip about SDN or NFV? Send it to [email protected].

About the Author(s)

Mitch Wagner

Executive Editor, Light Reading

San Diego-based Mitch Wagner is many things. As well as being "our guy" on the West Coast (of the US, not Scotland, or anywhere else with indifferent meteorological conditions), he's a husband (to his wife), dissatisfied Democrat, American (so he could be President some day), nonobservant Jew, and science fiction fan. Not necessarily in that order.

He's also one half of a special duo, along with Minnie, who is the co-habitor of the West Coast Bureau and Light Reading's primary chewer of sticks, though she is not the only one on the team who regularly munches on bark.

Wagner, whose previous positions include Editor-in-Chief at Internet Evolution and Executive Editor at InformationWeek, will be responsible for tracking and reporting on developments in Silicon Valley and other US West Coast hotspots of communications technology innovation.

Beats: Software-defined networking (SDN), network functions virtualization (NFV), IP networking, and colored foods (such as 'green rice').

