Network operators will each take their own unique journey to becoming more cloud-native. This heterogeneous, 'lumpy' universe will be with us for quite a while.

Yaron Aboodaga, Product Management Head - CloudBand Infrastructure Software, Nokia

November 12, 2018

VMs, Containers & the Lumpy Cloud Evolution

The "cloudification" of telco and enterprise networks is happening, like most evolutions, unevenly. As cosmologists would say about the universe, it started smooth and got lumpy. According to the hype, we should all be on a smooth evolutionary path to cloud-native applications, virtualized infrastructure and serverless deployments. But there are gravitational forces at work that are making lumps in our cloud-native universe.

The ideal of microservices running in containers is great. Breaking up applications into their smallest components, independent and loosely coupled, allows development teams to easily change or upgrade one microservice without having to redeploy the entire application. It also speeds up testing by making it possible to release the microservice and watch how it performs in a sandboxed environment, without affecting the running application. In the best use case, it is possible to fully upgrade, test and redeploy a microservice without ever taking the application offline.
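As a rough illustration of that independence, here is a minimal sketch, assuming a Kubernetes cluster and the official Python client, in which a single hypothetical "billing" microservice is upgraded by patching only its own Deployment; the platform's rolling update keeps the rest of the application serving throughout:

# Minimal sketch: upgrade one microservice without redeploying the whole application.
# Assumes a Kubernetes cluster, a Deployment named "billing" in namespace "shop",
# and the official Python client (pip install kubernetes). Names are illustrative.
from kubernetes import client, config

config.load_kube_config()          # use local kubeconfig credentials
apps = client.AppsV1Api()

patch = {
    "spec": {
        "template": {
            "spec": {
                "containers": [
                    {"name": "billing", "image": "registry.example.com/billing:2.1.0"}
                ]
            }
        }
    }
}

# The default RollingUpdate strategy replaces pods gradually, so the other
# microservices, and the application as a whole, stay online during the upgrade.
apps.patch_namespaced_deployment(name="billing", namespace="shop", body=patch)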

This is the software equivalent of hot swapping, which has long been a requirement of the most mission-critical hardware systems. It also enables agile development, allowing for fast failure and iterative development of new features. When run on bare metal servers, containers are also much more efficient at using the cloud infrastructure. Essentially, they dispense with the processing overhead of the hypervisor.

Figure 1: Containers (on the right) do not require the hypervisor overhead.

But not all virtual network functions (VNFs) will go this route. Some VNFs, such as the virtual Evolved Packet Core (vEPC), are relatively stable and don't need as high a degree of flexibility, whereas others, such as cloud RAN, really benefit from being rewritten to take full advantage of microservices and containers.

As you scan across the highly complex world of networking, you will see variations of this trade-off played out in multiple dimensions. Beyond the VNFs and what they actually need, there is the investment already made in infrastructure that supports non-containerized workloads running in VMs. And there is, of course, the extra effort vendors would have to put into rewriting existing code to support a microservices architecture (MSA), as well as the effort to install and manage bare-metal environments to host containers.

There are also some open issues with containers themselves. Creating containers on bare metal is not trivial, with challenges around provisioning the network, communications between containers and security (all containers share the same control plane network), and you should typically expect delays before the bare metal is ready. Open source projects such as Calico, Weave Net (from Weaveworks) and Flannel address some of these issues, but containerization is not yet a slam-dunk.
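To make the shared-network concern concrete, here is a minimal sketch, again assuming Kubernetes with the Python client and a CNI plugin such as Calico that enforces network policies; the namespace, labels and policy name are illustrative, not from this article:

# Minimal sketch: restrict container-to-container traffic with a Kubernetes
# NetworkPolicy (enforced by a CNI plugin such as Calico). Namespace, labels
# and policy name are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="billing-allow-frontend", namespace="shop"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "billing"}),
        policy_types=["Ingress"],
        ingress=[
            client.V1NetworkPolicyIngressRule(
                _from=[
                    client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(
                            match_labels={"app": "frontend"}
                        )
                    )
                ]
            )
        ],
    ),
)

# Only pods labelled app=frontend may now reach the billing pods; other
# containers sharing the same cluster network are blocked for this workload.
net.create_namespaced_network_policy(namespace="shop", body=policy)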

In the meantime, many managers have already placed their bets on VMs running non-containerized applications on top of hypervisors. These are already running very well and, for the time being, managers have bigger fish to fry. With no two networks alike, each will take its own journey to fully cloud-native, and heterogeneity and lumpiness will be the norm for a very long time.

To managers of cloud infrastructure, heterogeneity may sound like a headache requiring several management systems, extra training and an expanding network ops team -- but it doesn't have to be that way. There are approaches to managing cloud infrastructure that can support a heterogeneous environment including monolithic applications running in VMs, containers running in VMs and containers running on bare metal, all managed from a single management application.

To be suitable for a heterogeneous environment, it's critical that your virtual infrastructure management solution is open, and the VNFs are API-driven using well-defined, open interfaces with host-independent, flexible configuration and logging. Ideally, it allows you to manage your entire infrastructure, from VMs to containers, from a single pane of glass, with good analytics and monitoring tools. Your infrastructure should provide flexible hardware support and be applicable to any type of workload -- from telco network core to far edge or from enterprise back office to customer-facing applications -- with robustness, performance and security. Of course, it has to meet key security standards and regulations as well.

Network operators, telco or enterprise, will each take their own journey to cloud-native. Some may never fully implement MSA and containers on bare metal servers; others may move there quickly. It is likely that this heterogeneous, "lumpy" universe will be with us for the foreseeable future. It's a good idea to be properly prepared with a virtual infrastructure management solution that knows how to smooth out the lumps.

— Yaron Aboodaga is the head of the product management team in Nokia's CloudBand Infrastructure Software group.

About the Author(s)

Yaron Aboodaga

Product Management Head - CloudBand Infrastructure Software, Nokia

Yaron has more than 15 years of experience in the Telco and Networking industry, working with Tier 1 customers across the globe, in Solution Architecture and Product Management. 

Today, he serves as Head of the Product Management Team for CloudBand Infrastructure Software, part of the CloudBand NFV solution suite, and a core element in the Nokia Cloud offering.
