The recent NFV and Carrier SDN event in Denver provided an excellent opportunity to take the pulse of network functions virtualization (NFV) and software-defined networking (SDN) deployments on many levels. The topics and challenges discussed ranged from security and open-source adoption to platform evolution, cloud-native services, automation and orchestration.
And while it's clear that some challenges still exist, progress is being made, with further steps enabled by the flexible reference architecture that the cloud embodies. Along these lines, there was also considerable discussion at the event about running virtual machines (VMs) at the network's edge. In fact, given the impact of multi-access edge compute (MEC), 5G and Central Office Re-architected as a Data Center (CORD), it's now readily apparent that the edge will be a strategic location for executing services -- and therefore VMs.
Despite the benefits this delivers -- including ultra-low latency services, zero-touch provisioning and lower opex and capex -- it also demands innovative approaches to VM mobility management and on-boarding. This is because even though adopting a COTS-based NFV infrastructure (NFVi) model promotes hardware optimization, the traditional approach of switching VM traffic through a software Open vSwitch (OVS) consumes host CPU cycles on packet processing and introduces VM administrative and on-boarding issues, which effectively limits the number of VMs available to run applications. While approaches such as the Data Plane Development Kit (DPDK) can improve OVS throughput, DPDK dedicates host cores to packet polling, so it is still not ideal -- especially for edge sites, which are limited in the number of physical servers they can support.
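To make the trade-off concrete, the sketch below shows roughly what enabling the DPDK datapath in OVS looks like; the core mask, bridge, port and PCI-address values are placeholders that would vary per deployment, and it assumes an OVS build with DPDK support.

```shell
# Enable DPDK support in the OVS daemon (assumes OVS was built with DPDK)
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true

# Dedicate specific host cores to the OVS-DPDK poll-mode driver threads
# (0x6 = cores 1 and 2 -- cores that are then no longer available to VMs)
ovs-vsctl set Open_vSwitch . other_config:pmd-cpu-mask=0x6

# Create a userspace-datapath bridge and attach a physical DPDK port
# (the PCI address 0000:03:00.0 is a placeholder)
ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
ovs-vsctl add-port br0 dpdk-p0 -- set Interface dpdk-p0 \
    type=dpdk options:dpdk-devargs=0000:03:00.0
```

Note that the cores reserved in `pmd-cpu-mask` spin in a busy-poll loop whether traffic is flowing or not, which is exactly why DPDK improves throughput but still eats into the compute budget available for VMs.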
Accordingly, other approaches such as Single Root I/O Virtualization (SR-IOV) have been promoted to deliver added capacity. While SR-IOV does deliver enhanced performance -- VMs bypass the vSwitch and attach directly to virtual functions on the NIC -- that direct hardware binding also limits the mobility of VMs within a server cluster, which in turn caps overall scale potential while injecting complexity into VM management.
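On Linux hosts, carving a NIC into SR-IOV virtual functions is typically done through sysfs, along the lines of the sketch below; the interface name `enp3s0f0` and the VF count are placeholders.

```shell
# Check how many virtual functions (VFs) this NIC supports
# (enp3s0f0 is a placeholder interface name)
cat /sys/class/net/enp3s0f0/device/sriov_totalvfs

# Carve out 8 VFs; each VF can then be passed through directly
# to a VM, bypassing the software vSwitch entirely
echo 8 > /sys/class/net/enp3s0f0/device/sriov_numvfs
```

Because each VM is then bound to a VF on one specific physical NIC, moving that VM to another host is no longer a simple live migration -- which is the mobility limitation at issue here.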
In response -- and as discussed at last week's event -- a novel approach continues to gain market momentum: SmartNICs, which use programmable network interface cards (NICs) to offload OVS processing. To be clear, SmartNIC momentum has been building for several years; it's just that the burgeoning demand to execute VMs at the edge makes SmartNICs more attractive than ever before. The benefits of SmartNICs are numerous and stem from the fact that they can deliver enhanced throughput without altering the base OVS execution model, while also supporting full VM mobility.
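This is what "without altering the base OVS execution model" means in practice: the familiar OVS control path stays in place, and flow processing is pushed down to the NIC, commonly via OVS's hardware-offload flag. A minimal sketch (the service name varies by distribution):

```shell
# Keep the standard OVS model, but push flow processing down to
# SmartNIC hardware via the kernel's TC flower offload path
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# Restart OVS so the setting takes effect
# (service name is distribution-dependent)
systemctl restart openvswitch-switch

# Verify the setting
ovs-vsctl get Open_vSwitch . other_config:hw-offload
```

Since VMs still attach to OVS ports rather than binding directly to NIC hardware, the orchestrator sees the same logical topology on every host, preserving live migration.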
This level of VM mobility -- the ability to run VMs anywhere on a server cluster without restriction -- is especially crucial at the edge for optimizing the performance of a cluster of compute and storage NFVi resources. In turn, this also delivers stronger total cost of ownership (TCO), since resources are managed under a seamless VM mobility model that minimizes the opex hit of power and HVAC costs. Consequently, even though a few challenges for implementing VMs in the NFV-based cloud still exist, thanks to innovative approaches such as SmartNICs the foundation of NFVi is solid, robust and capable of supporting the VM mobility and scale that will be vital for meeting the performance requirements of NFV, 5G, MEC and other high-demand edge compute technologies.
Added resources on VM mobility and SmartNICs, including a webinar and Heavy Reading white paper, are available here:
Jim Hodges, Principal Analyst, Heavy Reading
This blog is sponsored by Netronome.