Moving From All-IP to All-Cloud Networks
Sandra O'Boyle, Senior Analyst – CEM & Customer Analytics, Heavy Reading
Today, every operator is going through a "digital transformation" of some sort, shaped by its business goals -- ranging from full organizational digital transformation to more focused initiatives to deliver a digital customer experience, increase service innovation and accelerate speed to market. The scope of that transformation determines the starting point, but each journey is a long-term process that requires top-level design and support, a strategic understanding of the future need for openness, and the ability to work with partners and developers as part of a digital ecosystem or marketplace.
The network is the foundation for operators to build a better bond with their customers, deliver better services and differentiate themselves from over-the-top, cloud and content providers. Network functions virtualization (NFV) and software-defined networking (SDN) turn the network into a set of software-based components. Network functions -- the intelligence of the network -- exist as software components at the application level, and their execution environment is a composition of software components at the cloud platform level.
For operators that started early on the journey to virtualize network functions, moving to cloud-native networks is a priority. Many are vocal about the need for the agility of cloud-native telco applications, with the potential to simplify telcos' complicated software and make network operations and management more efficient.
At Light Reading's recent BCE event in Austin, Vodafone's Matt Beal said another key driver for cloud-native networks is that cloud providers and over-the-top services enjoy the agility of cloud-native platforms and applications. Referring to Vodafone's own efforts to go all-cloud, he quipped, "There's nothing like being two thirds of the way done with a race when the winner finishes."
All-cloud or cloud-native networks have three common characteristics:
- Hardware resource pooling: This includes the pooling of data center computing and storage resources, WAN bandwidth resources and access network resources such as wireless baseband and air interface. Pooled resources replace the traditional silos of network architecture, in which one piece of hardware is dedicated to one application, thereby maximizing hardware resource sharing.
- Distributed software architecture: This is key to building large-scale network systems. Fully distributed systems based on cloud-native software architecture offer key benefits including:
- Scalability -- enabling on-demand scale-in/out
- Service quality -- enabling carrier-grade reliability for telecom services
- Service agility -- enabling unified gray (phased) release and upgrade of software on PaaS, significantly shortening service development and provisioning times.
- Automated operations: Backed by hardware resource pooling and distributed software architecture, with big data analytics and machine learning, cloud networks support proactive maintenance as well as automated service deployment, resource scheduling and fault handling, thereby minimizing human intervention and boosting operational efficiency.
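The automated-operations idea above can be sketched as a simple control loop: observe a utilization metric on a pooled resource and scale in or out without human intervention. This is an illustrative sketch only -- the `Pool` class, thresholds and single-metric decision are hypothetical stand-ins for the analytics- and ML-driven logic the text describes, not any vendor's API.

```python
# Hypothetical sketch of an automated-operations control loop that
# scales a pooled resource based on observed load. Names and
# thresholds are illustrative assumptions, not a real product API.

from dataclasses import dataclass


@dataclass
class Pool:
    """A pooled hardware resource shared by many VNF instances."""
    name: str
    instances: int
    min_instances: int = 1
    max_instances: int = 10


def reconcile(pool: Pool, cpu_utilization: float) -> str:
    """Decide a scaling action from a single utilization metric.

    In a real deployment this decision would be driven by big data
    analytics and machine learning over many metrics; a simple
    threshold stands in for that logic here.
    """
    if cpu_utilization > 0.80 and pool.instances < pool.max_instances:
        pool.instances += 1          # scale out: add an instance
        return "scale-out"
    if cpu_utilization < 0.20 and pool.instances > pool.min_instances:
        pool.instances -= 1          # scale in: release an instance
        return "scale-in"
    return "steady"                  # no action needed
```

Running `reconcile` periodically against live metrics is the essence of minimizing human intervention: the loop, not an operator, handles resource scheduling.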
According to Heavy Reading's research, operators are looking to break down virtualized network functions (VNFs) and monolithic apps (e.g., OSSs) into software components that are reusable, self-contained, pretested, management-ready and able to be automatically instantiated and managed on their cloud infrastructure in different combinations, in response to end-user needs.
To deliver an instance of a customer-facing service, for example, an operator needs to be able to assemble different combinations of software-defined NFV infrastructure components -- virtual machines, virtual switches, etc. -- to support the execution of each required VNF, and then stitch all the components in the service path together and manage them as an end-to-end service.
Webscale companies use extreme automation to instantiate and manage these components on their software-driven (cloud) infrastructure in different combinations, in response to end-user needs. And they can change these combinations rapidly and continually through continuous integration/deployment.
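The composition step described above can be made concrete with a small sketch: pick reusable, pretested components from a catalog and stitch them into an ordered service path. The catalog entries, component names and dictionary structure here are hypothetical illustrations of the idea, not a real orchestrator's data model.

```python
# Hypothetical sketch of composing a customer-facing service from
# reusable, pretested software components, then treating the result
# as one end-to-end service path. All names are illustrative.

CATALOG = {
    "firewall":  {"image": "vfw:1.4",  "pretested": True},
    "nat":       {"image": "vnat:2.0", "pretested": True},
    "optimizer": {"image": "vopt:0.9", "pretested": True},
}


def compose_service(name: str, chain: list[str]) -> dict:
    """Stitch catalog components into an ordered service path.

    Rejects the composition up front if any component is missing,
    so only known, pretested blocks enter the service chain.
    """
    missing = [c for c in chain if c not in CATALOG]
    if missing:
        raise ValueError(f"unknown components: {missing}")
    return {
        "service": name,
        "path": [{"component": c, **CATALOG[c]} for c in chain],
    }


svc = compose_service("business-vpn", ["firewall", "nat", "optimizer"])
```

Because the components are self-contained, the same catalog can be recombined rapidly into different service paths -- the "different combinations, in response to end-user needs" the text describes.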
To do this effectively, telcos need to:
- Build/acquire software components that use a service-oriented approach or micro-services.
- Have a way of integrating software components quickly and easily -- the "plug and play" or "Lego blocks" vision of composing services or business processes.
- Have a way to manage compositions dynamically as they evolve, bearing in mind that components are likely to come from different layers of the network, IT and third parties.
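The third requirement -- managing compositions dynamically as they evolve -- can be sketched as a gray (canary) release: traffic shifts from an old component version to a new one in steps, and one block in the path is swapped without disturbing the rest. Both functions below are hypothetical illustrations of the pattern, not any MANO product's interface.

```python
# Hypothetical sketch of dynamic composition management: a gray
# release that shifts traffic weights in steps, plus an in-place
# component upgrade that leaves the rest of the path untouched.

def gray_release_steps(step: float = 0.25):
    """Yield successive (old, new) traffic weights until full cutover."""
    weight = 0.0
    while weight < 1.0:
        weight = min(1.0, weight + step)
        yield (round(1.0 - weight, 2), round(weight, 2))


def upgrade_component(path: list, name: str, new_image: str) -> list:
    """Return a new service path with one component's image replaced.

    Only the named component changes; every other block in the
    composition is carried over as-is.
    """
    return [
        {**c, "image": new_image} if c["component"] == name else c
        for c in path
    ]
```

In practice each yielded weight pair would gate real traffic while health checks run, rolling back if the new version misbehaves -- which is what makes "Lego block" compositions safe to evolve continuously.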
Building VNFs from micro-services running in containers on a cloud-native architecture, with standard reusable components that are highly available and hyper-scalable, is a worthy goal, but one that will take time to achieve. And even then, containers may not be appropriate for every telco application or VNF.
What is crucial is that the telco's cloud infrastructure and its management and orchestration software be flexible and open enough to support new network technology (e.g., 5G), new types of services (e.g., IoT, low-latency edge services) and differing workload performance requirements, as well as the ability to run and manage both containers and virtual machines for certain telco apps. And it's not just the network that needs to be flexible; the entire telco business has to become more flexible to launch new services, adapt business models and attract new partners.
The journey to cloud networks makes sense in terms of aligning networks and IT applications, but it comes with a steep learning curve that requires telcos to take on a more direct technology and service integrator role, choose the right partners to help them out, or both.
This blog is sponsored by Huawei.
— Sandra O'Boyle, Senior Analyst, Heavy Reading