Why Telcos Are Going (Cloud) Native

While there is a fair degree of hype around the term 'cloud native' and plenty of misuse by software marketeers, it is clearly an important topic among CSP CTOs and CIOs.

James Crawshaw, Principal Analyst, Service Provider Operations and IT, Omdia

July 2, 2019

Interest in the term "cloud native" has increased ten-fold over the last three years, according to Google Trends. While there is a fair degree of hype around the term and plenty of misuse by software marketeers, it is clearly an important topic among communications service provider (CSP) CTOs and CIOs and worthy of some consideration.

According to the Cloud Native Computing Foundation (which knows a thing or two about cloud native), the term refers to software that is container packaged, dynamically managed and microservices-oriented. Containers allow a high level of resource isolation, foster code and component reuse and simplify operations. A microservices architecture refers to software components that are loosely coupled with dependencies explicitly described. The dynamic management bit refers to the use of a central orchestrator (typically Kubernetes) that improves resource utilization (compute and storage).
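To make those three attributes concrete, here is a minimal sketch (not drawn from the article) of a Go microservice that could be packaged into a container and managed dynamically by an orchestrator. The /healthz path, the /v1/status endpoint and the PORT variable are illustrative choices, not a prescribed convention.

```go
// main.go: a minimal containerizable microservice with a health endpoint
// that an orchestrator such as Kubernetes can probe to manage it dynamically.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"os"
)

func main() {
	// Liveness/readiness endpoint: the orchestrator calls this to decide
	// whether to route traffic to, restart, or replace this instance.
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// A single, discrete business function exposed over a well-defined interface.
	http.HandleFunc("/v1/status", func(w http.ResponseWriter, r *http.Request) {
		json.NewEncoder(w).Encode(map[string]string{"service": "status", "state": "ok"})
	})

	// The listen port comes from the environment, so the same container image
	// can be deployed unchanged across environments.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}
	log.Printf("listening on :%s", port)
	log.Fatal(http.ListenAndServe(":"+port, nil))
}
```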

In essence, cloud-native software is designed, developed and optimized to exploit cloud technology (i.e., distributed processing and data stores). Some aspects of cloud native use "traditional" software development patterns such as automation (infrastructure and systems), API integrations and service-oriented architectures. The new bits that are specific to cloud-native patterns are the microservices architecture, containerized services and distributed management and orchestration.

Microservices represent the decomposition of monolithic business systems into independently deployable software functions. Each microservice can be developed, updated and scaled independently. As Marc Price, CTO of Matrixx Software, writes, "Each microservice provides a discrete function serving a specific task or business goal and is isolated with a well-defined interface to communicate with other sets of services. A cloud-native application may comprise dozens or even hundreds of services, where each service may have thousands of instances. New services may be spun up, torn down, or changed very quickly and the expectation is that each service must be agile, composable, elastic, highly available, and resilient."
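The "well-defined interface" point can be seen from the calling side. In the sketch below, the "rating" service, its /v1/rating endpoint and the RatingResponse fields are all hypothetical; the point is that the caller depends only on the agreed JSON contract, not on how, where or by how many instances the downstream function is implemented.

```go
// A hypothetical client for a downstream "rating" microservice, illustrating
// how one service talks to another only through a well-defined interface.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

// RatingResponse is the agreed, versioned contract between the two services.
type RatingResponse struct {
	Subscriber string  `json:"subscriber"`
	Charge     float64 `json:"charge"`
}

func fetchRating(baseURL, subscriber string) (*RatingResponse, error) {
	// Short timeout: a cloud-native caller assumes instances come and go
	// and never blocks indefinitely on any one of them.
	client := &http.Client{Timeout: 2 * time.Second}

	resp, err := client.Get(fmt.Sprintf("%s/v1/rating/%s", baseURL, subscriber))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("rating service returned %d", resp.StatusCode)
	}

	var rating RatingResponse
	if err := json.NewDecoder(resp.Body).Decode(&rating); err != nil {
		return nil, err
	}
	return &rating, nil
}

func main() {
	// The service is addressed by name; the orchestrator's service discovery
	// resolves it to whichever instances happen to be running.
	rating, err := fetchRating("http://rating", "subscriber-42")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("charge for %s: %.2f\n", rating.Subscriber, rating.Charge)
}
```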

One of the big assumptions behind microservices is that they are stateless, relying on backing services for any aspect of long-term memory. With stateless communications, the receiver (a server) does not retain any session information. Every packet of information from the client can be understood in isolation. This reduces the server load but may require additional information in every request, increasing the volume of communication. As such, there is a trade-off between server load and network load.
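As a rough illustration of the stateless pattern (the endpoint, the field names and the commented-out backing-store call are all hypothetical), each request below carries its full context and any state that must outlive the request is pushed to a backing service rather than held in the process:

```go
// A sketch of a stateless handler: the server keeps no per-session memory,
// so every request must carry all the context needed to process it.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// TopUpRequest carries the full context in every call; nothing is looked up
// from a server-side session. Larger requests are the price of statelessness.
type TopUpRequest struct {
	AccountID string  `json:"account_id"`
	Amount    float64 `json:"amount"`
	Currency  string  `json:"currency"`
}

func topUpHandler(w http.ResponseWriter, r *http.Request) {
	var req TopUpRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// Any state that must outlive this request (the resulting balance) is
	// written to a backing service, not held in this process, so any
	// instance can serve the next request for the same account.
	// saveToBackingStore(req) // hypothetical call to a database or cache

	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/v1/topup", topUpHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```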

According to consultant Tom Nolle, "State or 'context' is intrinsic to virtually all business applications... Thus, getting state/context into cloud-native is critical." Similarly, Matrixx's Price observes that "Stateful microservices are particularly challenging, as state must be available to every processing node for accuracy and responsiveness. Hence, high-performance writes are committed to each node and fully recoverable. Traffic routing should account for circuit breaking, where retries should be avoided, and latency-aware load balancing employed for optimal path finding." So while statelessness might be an aspiration for microservice design, in practice some compromises will need to be made, particularly for virtualized network functions (VNFs) and the OSS/BSS that support them. Ultra-low-latency applications, in particular, require stateful microservices.
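Circuit breaking, which Price mentions, is a general resilience pattern rather than anything specific to Matrixx. The sketch below (with invented thresholds and error messages) shows the basic idea: after a run of failures the caller fails fast for a cool-off period instead of retrying a struggling instance.

```go
// A minimal circuit breaker: after repeated failures the caller stops
// calling a downstream service for a cool-off period instead of retrying.
package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

type CircuitBreaker struct {
	mu        sync.Mutex
	failures  int
	threshold int           // consecutive failures before the circuit opens
	cooldown  time.Duration // how long to refuse calls once open
	openUntil time.Time
}

var ErrCircuitOpen = errors.New("circuit open: skipping call, no retry")

func (cb *CircuitBreaker) Call(fn func() error) error {
	cb.mu.Lock()
	if time.Now().Before(cb.openUntil) {
		cb.mu.Unlock()
		return ErrCircuitOpen // fail fast rather than pile onto a struggling node
	}
	cb.mu.Unlock()

	err := fn()

	cb.mu.Lock()
	defer cb.mu.Unlock()
	if err != nil {
		cb.failures++
		if cb.failures >= cb.threshold {
			cb.openUntil = time.Now().Add(cb.cooldown)
			cb.failures = 0
		}
		return err
	}
	cb.failures = 0
	return nil
}

func main() {
	cb := &CircuitBreaker{threshold: 3, cooldown: 5 * time.Second}

	// Simulate a downstream call that keeps failing: after three failures
	// the remaining calls are rejected immediately with ErrCircuitOpen.
	for i := 0; i < 5; i++ {
		err := cb.Call(func() error { return errors.New("downstream timeout") })
		fmt.Println(i, err)
	}
}
```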

Telcos have large software estates and there is no magic switch that flips everything to cloud native. So the vanguard of cloud native tends to be greenfield operators or challengers. Verizon's prepaid, bring-your-own-device brand Visible is a case in point. (See Verizon Quietly Builds a Completely Cloud-Based Wireless Service.)

By adopting the latest cloud-native software, Visible is able to change its service offering and pricing at will through software configuration, not expensive and lengthy code rewrites. Going cloud first also allows it to grow its IT resources as the business grows. As Adil Belihomji, Visible's head of technology, states, "Scaling with our customers makes much more sense than just rolling in millions and millions of dollars spent on hardware."

Mobile challenger Three UK is another cloud-native pioneer. They are halfway through a transition from over 90% on-premises software to over 90% in the cloud. This will enable them to reduce the number of IT servers from 2,397 in 2017 to just 40 by 2021. By 2023 they expect to have IT costs 31% below the level of 2016, pre-transition. (See Three UK to Cut Two Thirds of Tech Jobs in Digital Makeover.)

Of course, legacy applications can be deployed in a private or public cloud "as-is," and they can even be wrapped in containers. But the true benefits of cloud native are only realized when applications are designed for dynamic scalability, reliability and platform independence.

To learn more about cloud native, check out this webinar: 5G: Why Cloud Natives Will Thrive & Cloud Tourists Will Struggle.

This article is sponsored by Matrixx Software.

— James Crawshaw, Senior Analyst, Heavy Reading


About the Author

James Crawshaw

Principal Analyst, Service Provider Operations and IT, Omdia

James Crawshaw is a contributing analyst to Heavy Reading's Insider reports series. He has more than 15 years of experience as an analyst covering technology and telecom companies for investment banks and industry research firms. He previously worked as a fund manager and a management consultant in industry.
