Heavy Reading Research

IT Servers Need Tweaking for NFV

One of the most significant changes that network functions virtualization (NFV) will introduce into the communications service provider (CSP) environment is the use of generic IT servers, rather than purpose-built, dedicated hardware platforms, to run network applications and services. In addition to lowering the cost of the underlying hardware, running virtualized network functions (VNFs) on standardized servers is expected to give CSPs greater flexibility and improved resource utilization, and potentially to allow networks and IT to converge onto common infrastructure.

In my latest report, NFV & Telco Data Center Servers: Key Considerations & Technologies, I discuss the new requirements emerging from CSPs looking to migrate network functions from proprietary, dedicated hardware platforms to standardized IT servers. The report highlights the strategies leading vendors are pursuing to support CSPs' needs for performance, availability and scalability, and it analyzes how cloud and telecom environments differ, including the potential impact of using common infrastructure for all services within a telco data center.

IT servers are generally designed for enterprise applications, not telecom functions, and VNFs consume server hardware resources differently than everyday IT applications. Data plane-heavy functions present a particular challenge. Understanding how resource requirements differ across various categories of applications is essential to ensuring the required performance can be achieved.
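To see why data plane-heavy functions are so demanding, a rough back-of-envelope calculation helps (the link speed, frame size and clock rate below are illustrative assumptions, not figures from the report): at 10GbE line rate with minimum-size Ethernet frames, a single 3 GHz core has roughly 200 clock cycles to spend on each packet.

    # Back-of-envelope cycle budget for line-rate packet processing on a
    # general-purpose core. All figures are illustrative assumptions.
    LINK_GBPS = 10        # assumed link speed: 10GbE
    CPU_GHZ = 3.0         # assumed core clock: 3 GHz
    FRAME_BYTES = 64      # minimum Ethernet frame (worst case)
    WIRE_OVERHEAD = 20    # preamble (8B) + inter-frame gap (12B)

    bits_per_frame = (FRAME_BYTES + WIRE_OVERHEAD) * 8
    pps = LINK_GBPS * 1e9 / bits_per_frame      # ~14.88 million packets/sec
    cycles_per_packet = CPU_GHZ * 1e9 / pps     # ~202 cycles per packet
    print(f"{pps / 1e6:.2f} Mpps -> {cycles_per_packet:.0f} cycles/packet")

Since a single main-memory access can cost on the order of 100-200 cycles, there is little headroom left for the function itself, which is why the acceleration techniques discussed below matter so much for these workloads.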

The hypervisor-managed approach introduces a performance hit, and several technologies, including SR-IOV, PCI passthrough, and hardware and software acceleration, are being proposed to address it. ETSI believes hypervisors play a key role in creating the execution environment, so it recommends their use, rather than bare metal, for most applications. Ultimately, it wants services to be cloudified so resources can be pooled, shared and dynamically allocated.
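As a minimal sketch of one of these techniques, the snippet below shows how SR-IOV virtual functions (VFs) are typically carved out on a Linux host through sysfs before the hypervisor passes them through to guests. The interface name and VF count are assumptions for illustration, and the passthrough step itself (binding the VFs to vfio-pci and attaching them to a VM) is not shown.

    # Sketch: enabling SR-IOV virtual functions (VFs) on a Linux host via
    # sysfs so the hypervisor can pass them through to guests. Assumes an
    # SR-IOV-capable NIC exposed as "eth0" (illustrative) and root access.
    from pathlib import Path

    IFACE = "eth0"     # assumption: name of the SR-IOV-capable interface
    NUM_VFS = 4        # assumption: number of VFs to create

    dev = Path(f"/sys/class/net/{IFACE}/device")
    total = int((dev / "sriov_totalvfs").read_text())
    if NUM_VFS > total:
        raise SystemExit(f"{IFACE} supports at most {total} VFs")

    # The kernel requires resetting to 0 before setting a new VF count.
    (dev / "sriov_numvfs").write_text("0")
    (dev / "sriov_numvfs").write_text(str(NUM_VFS))
    print(f"Enabled {NUM_VFS} VFs on {IFACE}")

Because a VF bypasses the hypervisor's virtual switch, the guest gets near-native I/O, at the cost of some of the workload mobility that makes virtualization attractive in the first place.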

In addition, ETSI presents high-level architectural goals in the areas of portability, reliability, security and manageability, but has stopped short of publishing specifications for the hardware platform. The Open Compute Project has developed a reference architecture, but only for servers destined for cloud applications. Despite some commonalities between IT and telecom workloads, there remain enough differences to suggest there may be good reason to keep separate platforms for different types of workloads. The distinction is less IT vs. telco than one based on the attributes of the workloads themselves. Having multiple platforms, however, would not preclude managing the infrastructure through a common management layer.

— Roz Roseboro, Senior Analyst, Heavy Reading

zhulei 8/12/2015 | 12:31:44 AM
Promoting NFV The work to promote NFV still requires an understanding of the hardware. Looking at the software world, there are few hints on how to decompose software for the cloud, except from telecom vendors.

So NFV still needs stable hardware to meet telecom environments, with standards that are open to all manufacturers.
j_beck 8/7/2015 | 10:33:43 AM
What is the whole purpose of NFV? Hi,

thank you for bringing this up, as many people may believe that virtual clouds can save IT.

If the main purpose of NFV were virtualization alone, then compute- and I/O-intensive applications would be in trouble today when thought of in hardware and cloud terms.

My point is that we should not forget that NFV brings a lot of other benefits to telecom service providers, helping them reduce CAPEX and OPEX.

NFV adds a unified way of providing high availability, scalability and elasticity for network functions. This is achieved by the orchestration layer, which is based on standardized interfaces.

Another point is that NFV fuels the modularization of network elements to an unprecedented level. As a consequence, more and more virtual network functions will form the network of tomorrow.

For example: It is much easier to fire up another NFV instance to add capacity than to set up a traditional bare-metal machine. NFV orchestrators, load balancers or media resource brokers can then dispatch sessions to the new pool of instances.
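A toy sketch of that scale-out pattern (the Orchestrator and LoadBalancer classes here are hypothetical stand-ins for a real NFV orchestrator and balancer, not an actual MANO API):

    # Toy sketch of threshold-based VNF scale-out. Orchestrator and
    # LoadBalancer are hypothetical stand-ins, not a real orchestrator API.
    class Orchestrator:
        def __init__(self):
            self.instances = []

        def spawn_vnf(self, image):
            # Stand-in for booting a VM/container from a VNF image.
            instance_id = f"{image}-{len(self.instances)}"
            self.instances.append(instance_id)
            return instance_id

    class LoadBalancer:
        def __init__(self):
            self.backends = []

        def add_backend(self, instance_id):
            self.backends.append(instance_id)

    def scale_if_needed(orch, lb, sessions, max_per_instance=1000):
        # Fire up another instance when the existing pool runs hot;
        # far faster than racking a new bare-metal appliance.
        if sessions / max(len(lb.backends), 1) > max_per_instance:
            lb.add_backend(orch.spawn_vnf("vfw-image"))

    orch, lb = Orchestrator(), LoadBalancer()
    lb.add_backend(orch.spawn_vnf("vfw-image"))
    scale_if_needed(orch, lb, sessions=2500)  # pool grows to two instances
    print(lb.backends)                        # ['vfw-image-0', 'vfw-image-1']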

And finally, cloud providers that take NFV's telco-grade demands seriously offer so-called Ironic flavors, which are based on physical servers. There you get the MIPS and I/O you need.

NFV is still evolving. Let's make it happen.

Johannes
ethertype 8/5/2015 | 11:08:56 AM
Re: So... Maybe a better conclusion is that true NFV for heavy dataplane loads is still a research project. For other stuff it's fine.

We will see more and more high-throughput packet processing moving to x86 servers. But we won't see those functions moved around arbitrarily, like other VNFs, as if the underlying infrastructure didn't matter.
brooks7 8/4/2015 | 9:36:27 PM
Re: So... Thanks for the answers that you guys have posted. My point would be that specific server hardware would have to be a "model 2" that is used, or it is not worth having COTS. Once you get to a proliferation of hardware there is no point to NFV, since the entire point was to have a spare bank of hardware to run stuff on.

So my comment would be that this is the limit of NFV and it should go no further.

seven

 
dwx 8/4/2015 | 8:23:41 PM
Re: So... COTS can't solve all of your problems. General-purpose CPUs are just that, general-purpose CPUs. NFV right now is good for moderate-CPU and low dataplane-throughput applications. Once you make the CPU process millions of PPS, it doesn't have much time to do other things. Throw on tasks like FW, NAT, etc., where you are using the CPU for something else, and your PPS performance drops like a rock. NFV is good for being able to use the same piece of hardware for multiple things, instead of buying specialized appliances, but it isn't going to replace everything...

You take a hit in terms of throughput and especially power efficiency versus custom hardware.  
Roz Rose 8/4/2015 | 5:13:25 PM
Re: So... Not necessarily. It depends on the function and its performance requirements. COTS HW could be perfectly fine for some functions. However, some put a larger burden on compute resources than others, so additional technologies might need to be deployed for them to run well in a virtualized environment.
brooks7 8/4/2015 | 3:37:15 PM
So... We no longer want to need COTS hardware... that means existing hardware? We need to make new hardware that is special (read: higher price)?

seven

 