In a previous article that addressed NFV phase implementation requirements, we discussed several technology drivers that are having an immediate impact on NFV implementation. (See Navigating the NFV Phase.)
However, even though there are no clear-cut, formally defined stages of NFV and cloud migration, there is informal agreement that future iterations of cloud deployment will drive new requirements, as well as changes to the factors shaping today's evolution. This next iteration is already emerging and is often referred to as the Cloud Native, Cloudified or Cloud Optimized phase. Whatever you choose to call it, there are unique requirements associated with this phase.
We touched on some of these in the previous article, but there is one area that I see as having a profound impact on the ability to deliver on the promise of cloud optimization. Not surprisingly, it relates to software design. In this case, specifically, there are two software-driven requirements: the delivery of cloud-optimized VNFs and the impact of decomposition into microservices.
From the outset, the idea behind NFV was to start fresh with a new software palette, rather than taking existing software and bolting on new hooks and interfaces to enable it to run in the cloud. There is some vendor evidence of this happening in the NFV phase, but clearly more work needs to be done to meet the dynamic workload and service requirements of an optimized cloud. (See Vodafone: Desperately Seeking Cloud-Centric Tech.)
One approach proposed to support cloud-optimized VNF redesign is decomposing services into small cloud-native applications. The idea of implementing microservices is an established convention outside of telecom, but given the complexity curve NFV has had to manage in the current implementation cycle, it was rightly assessed as a future enhancement. Still, given the pressure to scale and monetize cloud networks, the microservices approach continues to gain market awareness and support.
Conceptually, microservices apply software abstraction to a network function, effectively breaking it into smaller reusable pieces (e.g., transcoding, encryption, voice support, policy control). This approach has a number of advantages out of the gate from a service agility and cost perspective, and it also aligns with open source and third-party code reuse development initiatives.
However, there are several challenges to manage as well. Because microservices are broken into small, independent pieces, they are typically stateless, which introduces additional requirements for managing and enforcing policy control and security measures. Moreover, since microservices can run in a fully distributed architecture, on any unassociated server pool, new design demands are imposed to support and manage load balancing across a fully distributed cloud.
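To make the stateless point above concrete, here is a minimal sketch (not from any vendor implementation) of the pattern involved: each service replica keeps no session state of its own, so any instance behind a load balancer can serve any request, with session and policy state pushed out to a shared store. The `ExternalStateStore` class here is a hypothetical stand-in for a real shared store such as Redis.

```python
class ExternalStateStore:
    """Stand-in for a shared, network-accessible state store (e.g., Redis)."""
    def __init__(self):
        self._data = {}

    def get(self, key, default=None):
        return self._data.get(key, default)

    def put(self, key, value):
        self._data[key] = value


class PolicyMicroservice:
    """A stateless worker: all session state lives in the external store."""
    def __init__(self, name, store):
        self.name = name
        self.store = store

    def handle_call(self, session_id):
        # Read state that any other replica may have written, update it,
        # and write it back -- the replica itself retains nothing.
        count = self.store.get(session_id, 0) + 1
        self.store.put(session_id, count)
        return f"{self.name} handled {session_id} (call #{count})"


store = ExternalStateStore()
replicas = [PolicyMicroservice(f"replica-{i}", store) for i in range(3)]

# A naive round-robin "load balancer": because the replicas are stateless,
# consecutive requests for the same session can land on different servers
# without losing continuity.
for i in range(3):
    print(replicas[i % len(replicas)].handle_call("session-42"))
```

The upside is exactly the scaling flexibility described above; the cost is that state management, policy enforcement and security move out of the function and into shared infrastructure that must itself be designed and operated.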
In summary, while the Cloud Optimized phase will deliver considerable benefit, it will also impose new requirements and create new challenges for even existing virtualized products such as Session Border Controllers (SBCs) and Policy Controllers.
We will be conducting a deep dive into the impact of these factors in an upcoming webinar on November 16 that I will be hosting with speakers from Sonus and AT&T. This second part of our webinar series will give you a deeper understanding of RTC requirements in the cloud and how a "cloud-optimized" SBC will deliver the scale, performance and flexibility required to deploy RTC in the cloud. Click here to register for Real-Time Communications in the Cloud – Dismantling Cloud Myths with Innovation on Wednesday, November 16, 2016, 11:00 a.m. New York/4:00 p.m. London. Hope you can join us.
— Jim Hodges, Senior Analyst, Heavy Reading
This blog is sponsored by Sonus.