In our last article, we looked at how the disaggregated, cloud-native design of the packet core improves scalability and efficiency. We discussed how integrated IP services maximize throughput and minimize latency, and why deployment flexibility matters. We also looked at why the packet core must support multi-generational technologies all the way back to 2G.
The critical shift in moving to virtualization and cloud-native is that your competitive advantage will come from moving faster to meet the needs of your customers — especially in the case of enterprises that are using wireless networks as a key enabler of Industry 4.0 use cases.
Deployment automation within the packet core
In a K8s environment, Helm charts are used to automate day one installation, provisioning and basic configuration management of packet core CNFs. Day two management also needs to be automated and simplified, for workloads ranging from scaling based on application-specific metrics to in-service software upgrades. In addition, to reuse CNFs in different environments (e.g., development, staging and production; private or public clouds), you can use Helm's parameterized manifests and configuration maps to generate deployment specifications and configurations. This is advantageous when a large number of packet core CNF instances are needed, for example, across multiple edge or multi-access edge computing (MEC) locations.
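As a sketch of how that parameterization might look, a single chart can be reused across environments by overriding values per site. The chart structure, value keys and names below are illustrative assumptions, not a specific vendor's chart:

```yaml
# values-prod.yaml -- hypothetical per-environment overrides for a
# packet core CNF Helm chart (all names and keys are illustrative)
replicaCount: 6
image:
  repository: registry.example.com/packet-core/smf
  tag: "3.2.1"
config:
  environment: production
  mec:
    enabled: true
    site: edge-site-01   # stamped into ConfigMaps by the chart templates
```

The same chart could then be deployed per MEC location with, for example, `helm install smf-edge-01 ./smf-chart -f values-prod.yaml`, swapping only the values file for staging or development clusters.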
Helm charts are also useful for upgrade tasks such as continuous integration and continuous delivery (CI/CD). They can be integrated into a Jenkins pipeline to update a CNF when, for instance, a new or updated image version is published, taking advantage of the K8s rolling-update capability. K8s settings control a number of pod-related aspects, including how many new pods to spin up at once, how many pods may be unavailable during the update, the health checks new pods must pass, and the delay before old pods are deleted.
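Those rolling-update settings live in the Deployment spec. A minimal sketch, with workload and image names assumed for illustration:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smf-worker            # illustrative CNF workload name
  labels:
    app: smf
spec:
  replicas: 4
  selector:
    matchLabels:
      app: smf
  minReadySeconds: 10         # new pod must stay healthy this long before counting
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1             # number of new pods to spin up at once
      maxUnavailable: 0       # no pods may be down during the update
  template:
    metadata:
      labels:
        app: smf
    spec:
      terminationGracePeriodSeconds: 60   # delay before old pods are deleted
      containers:
      - name: smf
        image: registry.example.com/packet-core/smf:3.2.1
        readinessProbe:       # health check gating the rollout
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
```

With `maxUnavailable: 0` and a readiness probe, K8s replaces pods one at a time and only after each replacement reports healthy, which is the behavior an in-service software upgrade needs.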
K8s can also be configured to automate everything from monitoring KPI and KCI alarms to scaling in or out based on CPU and memory utilization, or on custom application-specific metrics, using its horizontal pod autoscaler (HPA).
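An HPA combining a resource metric with an application-specific one might be sketched as follows; the custom metric name is an illustrative assumption and would require a metrics adapter exposing it to the K8s custom metrics API:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: smf-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: smf-worker          # illustrative CNF workload name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource            # built-in CPU utilization metric
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Pods                # hypothetical application-specific metric
    pods:
      metric:
        name: active_pdu_sessions
      target:
        type: AverageValue
        averageValue: "5000"  # scale out above 5,000 sessions per pod
```

When multiple metrics are listed, the HPA computes a desired replica count for each and applies the largest, so whichever dimension is under pressure drives the scale-out.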
Overcoming packet core networking challenges
Using K8s does create some architectural challenges for a packet core, which we examined in the last article. It also creates networking challenges such as the need to support multiple IP interfaces in different routing contexts, which is critical within a packet core for isolation.
Maintaining IP addresses
The first issue is that the ephemeral IP address assigned by K8s lives and dies with the pod instance. One of the key features of a cloud-native approach is its ability to scale resources near-instantly to match demand, meaning that pods are constantly being spun up and down. The packet core identifies a peer by its IP address, but when a pod is reinstated, it is assigned a new address from K8s' pool of ephemeral IP addresses. This breaks the peer's connection to the CNF within the packet core.
There are two approaches that can be used to overcome this issue. The first is to bypass K8s networking altogether, by plumbing the packet core application pods directly to the node’s interfaces.
The other approach is to tunnel through K8s, thus preserving the original IP address. The Kube-router CNI, for example, uses a tunneling mode for private clusters and, when deployed at the edge of the cluster, exposes the service endpoint as a K8s service to external peers.
Multiple IP addresses
In the packet core, each pod requires a set of IP addresses depending on the routing context. These include the management interface, signaling interfaces to other network functions such as the policy control and rules function (PCRF) and online charging system (OCS), and network interfaces to the access and service networks such as the Internet and public/private voice.
To solve this, a container network interface (CNI) meta plug-in that can call multiple other CNI plug-ins is used. This enables additional network interfaces on a pod beyond the default interface used for pod-to-pod communication.
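One widely used meta plug-in of this kind is Multus. A sketch of attaching an extra signaling interface to a pod, with network names, the host interface and the image all assumed for illustration:

```yaml
# Secondary network definition, backed here by a macvlan interface on eth1
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: signaling-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "static" }
  }'
---
# Pod requesting the extra interface via the Multus annotation
apiVersion: v1
kind: Pod
metadata:
  name: smf-0
  annotations:
    k8s.v1.cni.cncf.io/networks: signaling-net   # attached beyond default eth0
spec:
  containers:
  - name: smf
    image: registry.example.com/packet-core/smf:3.2.1
```

Each routing context (management, signaling, access/service networks) can be given its own NetworkAttachmentDefinition, keeping traffic separated on distinct interfaces rather than sharing the default pod network.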
With a cloud-native, state-efficient design and the right level of software disaggregation, a microservices architecture can implement packet core CNFs that achieve maximum transaction rates and throughput. It can also satisfy key operational networking requirements, such as multiple network interfaces with routing isolation and the preservation of IP addresses. And K8s, with tools such as Helm charts, supports automated deployments for enhanced day one and day two serviceability.
We strongly recommend embracing a cloud-native approach to architecting your packet core. In the long run, this will enable you to meet the unique needs of diverse customers and speed your time to market with new services — all the while ensuring service continuity and customer satisfaction.
Want to know more? Visit our Cloud Packet Core solutions page to see how our cloud-native features and capabilities help you deploy a webscale-class packet core.
— Robert McManus, Senior Product Marketing Manager, Nokia
This content is sponsored by Nokia.