AT&T: Virtualized Mobile Core Key to 5G

Only by virtualizing and distributing the mobile core can an operator cost-effectively support the billions of connections needed for 5G and the Internet of Things.

February 17, 2016

While a lot of the 5G debate revolves around radio technology, a key component of AT&T's 5G roadmap involves virtualizing its mobile core and then distributing that core closer to the network's edge to support both 5G and the Internet of Things in a cost-effective way.

It's a strategy Heavy Reading Senior Analyst Gabriel Brown calls "important and progressive."

AT&T Inc. (NYSE: T) plans to add "hundreds and even thousands of locations at which we can spin up a [virtualized] core," Paul Greendyk, VP of mobile core and network services for AT&T, tells Light Reading. Some of those locations may include central offices, but there will be multiple flavors of locations, called nodes or zones, in AT&T's integrated cloud. (See AT&T Unveils Its 5G Roadmap and AT&T Lights Fire Under 5G, Plans 2016 Trials.)

AT&T is already operating a virtualized mobile core that supports 14 million wireless users in the US, Greendyk says. The increased bandwidth and speed of 5G and the billions of connections expected for IoT are driving the operator to push forward with broader deployment.

This distributed approach deploys the virtualized mobile core as software running on common compute and storage platforms with a hypervisor, as close to the network edge as is economical, to support "millions and billions of things connected to the network" for IoT and other applications, he says. As AT&T works to virtualize 30% of its network functions by the end of 2016, "a huge chunk of that work this year is mobile centric," Greendyk says.
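The article later notes that distributing the core reduces backhaul and improves latency. A minimal sketch can illustrate the idea of anchoring a device's traffic at the nearest edge zone rather than a distant central data center; the zone names, coordinates, and selection logic here are hypothetical, not AT&T's actual placement algorithm:

```python
import math

# Hypothetical edge zones where a virtualized core instance could be spun up,
# each with an illustrative (lat, lon) location.
EDGE_ZONES = {
    "dallas-co-1": (32.78, -96.80),
    "atlanta-zone-2": (33.75, -84.39),
    "chicago-node-3": (41.88, -87.63),
}

def nearest_zone(device_lat: float, device_lon: float) -> str:
    """Pick the edge zone closest to a device, so user-plane traffic
    terminates near the edge instead of hauling back to a central core."""
    def dist(zone: str) -> float:
        lat, lon = EDGE_ZONES[zone]
        return math.hypot(lat - device_lat, lon - device_lon)
    return min(EDGE_ZONES, key=dist)

# A device near Atlanta gets anchored to the Atlanta zone.
print(nearest_zone(33.0, -85.0))
```

The point of the sketch is only that zone selection becomes a software decision once the core is virtualized, rather than a consequence of where specialized hardware was installed.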

"Distributing particularly user plane nodes to edge locations is important, so that operators can better serve novel 4G services and prepare for 5G," Brown says. "Operators need to evolve to a new service-oriented core network -- the Mobile Cloud Service Core -- to create an infrastructure with the flexibility to support new service models and a cost-of-production driven by 'cloud economics,' rather than by specialized hardware," he notes.


Cost is one big driver for AT&T, as it seeks an affordable way to "densify" and scale its mobile core, Greendyk admits. "There is no way that today's cost models would support the kind of growth in the number of devices that need to connect for IoT without radical improvement in cost curves," he says. "What we can do with software licenses is turn up additional instances [of mobile core] without significantly increasing cost curves."

Mobile networks have traditionally had very centralized cores, Greendyk notes. Today, AT&T has two large data centers supporting its mobile core. By distributing that core, AT&T can also reduce the need for backhaul and improve latency.

What Brown terms a "cloud-native mobile core" will support new service models, something operators are actively seeking. "Using software-based networks with automated resource and service orchestration, operators should be able to dynamically create network services optimized for the needs of an application or user group," he says. "This is sometimes referred to as 'network slicing' and is expected to be introduced in advanced 4G networks and to be inherent to the 5G system architecture."
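Network slicing, as Brown describes it, means dynamically creating network services tuned to an application or user group. A small sketch can make the concept concrete; these slice profiles and the matching rule are invented for illustration and are not AT&T's or any standard body's actual definitions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceProfile:
    name: str
    max_latency_ms: int      # end-to-end latency target for the slice
    min_bandwidth_mbps: int  # throughput the slice guarantees

# Example slices, each tuned to one class of application.
SLICES = [
    SliceProfile("massive-iot", max_latency_ms=1000, min_bandwidth_mbps=1),
    SliceProfile("mobile-broadband", max_latency_ms=50, min_bandwidth_mbps=100),
    SliceProfile("connected-car", max_latency_ms=10, min_bandwidth_mbps=10),
]

def pick_slice(latency_ms: int, bandwidth_mbps: int) -> SliceProfile:
    """Return the least over-provisioned slice that meets an app's needs."""
    candidates = [s for s in SLICES
                  if s.max_latency_ms <= latency_ms
                  and s.min_bandwidth_mbps >= bandwidth_mbps]
    if not candidates:
        raise ValueError("no slice satisfies these requirements")
    # Prefer the loosest latency target that still qualifies, to avoid
    # burning low-latency capacity on undemanding traffic.
    return max(candidates, key=lambda s: s.max_latency_ms)

print(pick_slice(latency_ms=100, bandwidth_mbps=5).name)
```

A smart-meter reading tolerating a full second of latency and a connected car needing sub-10ms response would land on very different slices of the same physical infrastructure, which is the flexibility Brown points to.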

Having that kind of flexibility becomes particularly important as IoT introduces new markets, such as smart cities and connected cars, that will require compute and storage resources in new places where mobile operators aren't currently engaged, Greendyk says.

"When you think about the demands of 5G, being radio-agnostic, context-aware networking, network slicing, for all of these -- NFV and SDN are very much the right answer and are necessary to be able to provide the flexibility we are going to need," he comments. "Rather than build out additional Domain 1 technology, we are moving quickly to do this in a virtualized way" for both cost and flexibility reasons.

Without the flexibility that virtualization provides, defining slices of a network for specific applications would require building a specific core on specific hardware, which would never be cost-effective compared with quickly spinning up a different core in software, he says. "We can also manage peaks in demand by turning on additional capacity that is software-based."
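Greendyk's point about turning on software-based capacity for demand peaks amounts to simple elastic scaling. A hedged sketch, with the per-instance capacity and headroom figures invented purely for illustration:

```python
import math

# Assumed capacity of one virtualized core instance (illustrative only).
SESSIONS_PER_INSTANCE = 100_000

def instances_needed(active_sessions: int, headroom: float = 0.2) -> int:
    """How many virtualized core instances to run for the current load,
    with spare headroom, instead of pre-building peak capacity in hardware."""
    target = active_sessions * (1 + headroom)
    return max(1, math.ceil(target / SESSIONS_PER_INSTANCE))

# 250,000 active sessions with 20% headroom -> 3 instances.
print(instances_needed(250_000))
```

Because each additional instance is a software license rather than a hardware build-out, the operator can track the demand curve instead of provisioning for its peak, which is the "radical improvement in cost curves" Greendyk describes.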

— Carol Wilson, Editor-at-Large, Light Reading
