NFV's Unexpected Bonus
I recently experienced major travel disruption while returning from business in Europe. My prospects of getting home on time looked grim. However, once I got to the airline service desk I was quickly rebooked on a flight with an upgrade(!) to business class. You can bet I was smiling when I got to my seat. Getting home on time (more or less) was good; the unexpected upgrade was an amazing gift.
While it's good to get exactly what you paid for, an unexpected bonus calls for celebration. Service providers moving from networking appliances to pure-play network functions virtualization (NFV) are investing to gain cost savings, service agility and openness. It turns out that moving to pure-play virtualization brings along unexpected benefits that they will definitely celebrate.
Pure-play virtualization is a means of implementing NFV with pure software running on open hardware. In contrast, hybrid NFV is configured using closed or proprietary platforms with hardware acceleration, or using external boxes to implement networking functions.
Pure-play virtualization gives the service provider choices regarding:
- What size, speed and brand of server to use.
- Which software virtual network functions (VNFs) to use.
- Where to deploy virtualization, be it in the cloud, next-gen central office or service edge.
- What else to put on the platform.
It is that last item that delivers unexpected bonuses. Because the hardware, operating system and virtualization layer are all open, the operator can choose to deploy additional functionality, which might not be related to user services.
Leveraging existing applications
One of the easiest ways to leverage the deployment of pure-play virtualization is by deploying applications that have already been virtualized. I am talking about non-networking functions, not VNFs.
For example, it might be beneficial to move a virtualized application from its current cloud hosting into a central office or even all the way out to the service edge, creating "micro-clouds." Doing so would reduce the latency and bandwidth required to access the application. It also enables standalone operation when the network link goes down.
Consider the case of an application like a Wi-Fi controller or badge access system running in a cloud. By moving these down to a pure-play server at the edge, you increase the availability of the services, because they no longer require network access to function.
In addition, service providers often have software applications that are run in a native or bare metal mode. Pure-play virtualization drives the deployment of open virtualization servers throughout the network, creating incentive to virtualize these existing applications to enable redeployment further out in the network.
For example, consider an expert system for a large number of retail stores. It might make sense to virtualize this system and deploy it to the edge to improve latency and reduce the data traffic required for image files. You probably wouldn't take this step if it also required deploying another server. Doing this is easy if you are taking advantage of an open server that is already deployed for virtualized services.
Deploy new applications for new services
In addition to the existing applications mentioned above, there are emerging applications that can benefit from being deployed out in the network. One example is endpoint protection, as described in Network World:
- Rather than looking for signatures of known malware as traditional anti-virus software does, next-generation endpoint protection platforms analyze processes, changes and connections in order to spot activity that indicates foul play. While that approach is better at catching zero-day exploits, issues remain ... The value of endpoint protection platforms is that they can identify specific attacks and speed the response to them once they are detected. They do this by gathering information about communications that go on among endpoints and other devices on the network, as well as changes made to the endpoint itself that may indicate compromise.
Endpoint protection is different from today's VNFs in that it is not predominantly targeted at forwarding or processing user traffic. Rather, it is a monitor that enables security services without the need for endpoint agents. The presence of an open platform at the customer site enables easy deployment of innovative services such as endpoint protection, without the need for additional equipment.
Remember to utilize that open compute platform
We can easily forget all that the open compute platform can do. While the virtualization system and control (e.g. vSwitch and OpenStack) are targeted at supporting VNFs running on KVM, there is still a Linux kernel and shell underneath it all. Being able to deploy native Linux utilities like ping, netstat or traceroute out in the network is a very powerful concept, and very exciting to network engineers. Furthermore, engineers can create custom scripts or programs that run directly on the platform and perform valuable tasks. Some examples include:
- Custom initialization or turn-up functionality.
- Collection and binning of statistics on packet traffic and/or resource utilization.
- Health monitoring and/or fault detection.
- Auditing of inventory and configuration of physical and virtual devices.
- Troubleshooting functionality optimized for the provider's network.
- Accessing virtualization management functions locally to simplify customer portal applications.
- Increasing system security by patching security holes immediately and without supplier intervention. (The Linux kernel is open source, so anyone can inspect and test the code, and fixes are available immediately.)
You wouldn't expose this level of access to end users, but it gives the service provider tremendous flexibility.
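To make the "collection and binning of statistics" idea above concrete, here is a minimal sketch in Python of the kind of custom script a provider might run directly on the open platform. It parses the standard Linux `/proc/net/dev` counters and bins interfaces by traffic volume. The function names, bin thresholds and sample data are illustrative assumptions, not part of any product or standard tooling.

```python
#!/usr/bin/env python3
"""Sketch: bin per-interface packet counters from Linux's /proc/net/dev.
Thresholds and names are illustrative assumptions only."""

def parse_net_dev(text):
    """Return {interface: (rx_packets, tx_packets)} from /proc/net/dev text."""
    stats = {}
    for line in text.splitlines()[2:]:       # skip the two header lines
        name, _, fields = line.partition(":")
        cols = fields.split()
        if len(cols) < 16:                   # malformed line; skip it
            continue
        # Field layout: rx bytes, packets, errs, drop, ... then tx bytes, packets, ...
        rx_packets = int(cols[1])
        tx_packets = int(cols[9])
        stats[name.strip()] = (rx_packets, tx_packets)
    return stats

def bin_by_volume(stats, low=1_000, high=1_000_000):
    """Bin interfaces by total packet count: 'idle', 'normal' or 'busy'."""
    bins = {"idle": [], "normal": [], "busy": []}
    for iface, (rx, tx) in stats.items():
        total = rx + tx
        if total < low:
            bins["idle"].append(iface)
        elif total < high:
            bins["normal"].append(iface)
        else:
            bins["busy"].append(iface)
    return bins

if __name__ == "__main__":
    # On a live system you would read the real file:
    #   text = open("/proc/net/dev").read()
    sample = (
        "Inter-|   Receive                     |  Transmit\n"
        " face |bytes packets errs drop fifo frame compressed multicast|"
        "bytes packets errs drop fifo colls carrier compressed\n"
        "    lo:  123456     500 0 0 0 0 0 0   123456     500 0 0 0 0 0 0\n"
        "  eth0: 9876543 2000000 0 0 0 0 0 0  8765432 1500000 0 0 0 0 0 0\n"
    )
    print(bin_by_volume(parse_net_dev(sample)))
```

A script like this could be scheduled with cron and its output fed into the provider's monitoring system, all without adding hardware, because the open platform already exposes a full Linux environment.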
Expect the unexpected benefits of pure-play NFV
So, go ahead: Deploy services built on pure-play virtualization for the expected benefits of cost savings and service agility. And now you can deploy with the expectation of bonus benefits!
— Prayson Pate, CTO, Overture Networks Inc.