NFV Tests & Trials

EXCLUSIVE! NFV Interop Evaluation Results

Interoperability Findings and Challenges
We uncovered several important findings and encountered a number of challenges during the Phase 1 evaluation.

VNF Configuration Options with and without HEAT
OpenStack provides many tools to instantiate a VNF. They range from a simple instantiation with nova commands and manual configuration, through pushing a configuration file with the boot command itself, to using a Heat template or even a separate VNF Manager. Each of these methods has its merits and challenges.

Once the VNF has booted, it needs to find its network-layer and optionally application-layer configuration file in a second step. This configuration file will define network interface configurations, passwords, licenses, etc. Again, there are different ways to provide this functionality:

Table 1: VNF Instantiation Permutations

VNF tested with NFVI using… Alcatel-Lucent Cloudband Cisco NFVI Huawei FusionSphere Juniper Contrail
Alcatel-Lucent VSR Nova Nova
Alcatel-Lucent VMG Nova + User Data
Alcatel-Lucent VMM Nova Nova
Cisco ASAv Nova + Config File
Cisco CSR1000v Nova + Config File Nova + Config File
Cobham Wireless TeraVM Nova
Hitachi vMC Nova + User Data Nova + User Data
Huawei VNE Nova Nova
Ineoquest IQDialogue ASM Nova + Cloud-init
Ineoquest DVA Nova + Cloud-init
Juniper vMX Nova + Heat
Juniper vSRX Nova + Heat
Metaswitch Perimeta vSBC Nova Nova
NetNumber TITAN Nova + Heat Nova + Heat
Netrounds Test Agent Nova Nova
Procera PacketLogic/V Nova
Sonus SBC SWe VNF Manager + YAML VNFD Templates
Source: EANTC

Nova
The basic way to instantiate a VNF is using OpenStack's nova commands (or the GUI wrapper). The operator has to create a flavor, defining resource allocations for virtual CPU, storage and RAM. Subsequently the VNF can be started with a nova boot command. Next, the network layer configuration needs to be supplied to the VNF -- this is done by pushing a management network port to the VNF as the first network parameter. The VNF usually requests an IP address through directed DHCP, which can then be used to contact it and start application-layer configuration.
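As a rough sketch of this basic flow, the commands below use the 2015-era nova CLI described above; the flavor name, image name, resource sizing and network placeholder are all illustrative, not values from our test:

```shell
# Define a flavor: name, ID (auto), RAM in MB, disk in GB, vCPUs (sizing is illustrative)
nova flavor-create vnf.small auto 4096 20 2

# Boot the VNF; the management network is passed as the first --nic so the VNF
# can request its IP over DHCP on that port and become reachable for
# application-layer configuration.
nova boot my-vnf \
    --flavor vnf.small \
    --image vnf-image \
    --nic net-id=<mgmt-net-uuid>
```

These commands run against a live OpenStack deployment, so the placeholder `<mgmt-net-uuid>` must be replaced with a real Neutron network UUID.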

Nova with Config Drive
It is possible to simplify the second step of the VNF instantiation process by giving the VNF access to a configuration drive. The operator uploads the VNF config file in advance and provides the file name as part of the nova boot command. This avoids the need to bootstrap through a management IP address, which is helpful when the VNF does not want to use DHCP to obtain its IP configuration. With this method, the VNF first has to complete its bootup process before it can access the configuration file.
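A minimal sketch of this variant, again with hypothetical names; `--config-drive true` attaches the drive and `--file <guest-path>=<local-path>` injects the prepared config file:

```shell
nova boot my-vnf \
    --flavor vnf.small \
    --image vnf-image \
    --nic net-id=<mgmt-net-uuid> \
    --config-drive true \
    --file /config/day0.cfg=./day0.cfg   # the VNF reads this only after it finishes booting
```

The guest path `/config/day0.cfg` is an assumption for illustration; where a VNF looks for its injected configuration is vendor-specific.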

CloudInit Script
The CloudInit script comes in when the Config Drive cannot be used -- when the VNF needs to be configured at boot time, not after boot time. In this case, the CloudInit script(s) can be pushed by nova and accessed by the VNF at boot time.
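The steps above can be sketched as follows, assuming the VNF image runs cloud-init; the hostname, file path and keys in the cloud-config payload are purely illustrative, since the supported directives depend on the VNF's own cloud-init integration:

```shell
# Write a minimal cloud-config payload (illustrative keys only)
cat > user-data.yaml <<'EOF'
#cloud-config
hostname: my-vnf
write_files:
  - path: /etc/vnf/bootstrap.cfg
    content: |
      mgmt-ip: dhcp
EOF

# Pass the payload at boot; cloud-init applies it during the VNF's first boot
nova boot my-vnf --flavor vnf.small --image vnf-image \
    --nic net-id=<mgmt-net-uuid> --user-data user-data.yaml
```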

The three options above are supported by any OpenStack implementation.

Other options, however, may or may not be supported, depending on the implementation.

Heat
Heat is OpenStack's orchestration service: a Heat template is parsed and converted into a series of API calls that instantiate a service along with all its dependent resources (networks, policies and VNFs). Heat requires support from the NFVI, which the following implementations provided at the time of our test:

  • Alcatel-Lucent Cloudband
  • Juniper Contrail

Heat was the most straightforward and cleanest configuration option for instantiating and tearing down VNFs. However, we observed some challenges defining the templates: sometimes template version numbers did not match (backwards-compatibility issues), and some versions of Heat are limited in what they can translate -- they accept only network UUIDs instead of network names.
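For illustration, a minimal Heat (HOT) template along these lines might look as follows; the resource names, flavor, image and network reference are all hypothetical, and the two version-related pitfalls noted above are flagged in comments:

```yaml
heat_template_version: 2013-05-23   # must match a version the NFVI's Heat engine understands

description: Minimal sketch of a single-VNF stack (all names are illustrative)

resources:
  vnf_server:
    type: OS::Nova::Server
    properties:
      flavor: vnf.small
      image: vnf-image
      networks:
        # Some Heat versions accept only the network UUID here, not its name.
        - network: mgmt-net
```

With the 2015-era CLI, such a stack would be launched with something like `heat stack-create -f template.yaml my-vnf-stack`, and torn down again with a single `heat stack-delete`.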

VNF Manager
A VNF could also provide its own management entity -- a VNF Manager, or VNFM -- that shields all the details of its bootup and configuration process from the operator. The VNFM manages the complete instantiation and configuration process opaquely. There has been an extensive debate in the ETSI NFV group about whether VNFs should bring their own VNFMs or use a generic VNF manager.

Some vendors elected to go with OpenStack Heat templates for the sake of open interoperability. Only one vendor, Sonus, brought a VNF manager. In this instance, Sonus provided a YAML-formatted VNFD (VNF Descriptor) file describing the VNF parameters, which were then used by the VNF manager to configure the VNF.

The Sonus VNF Manager

muzza2011 12/10/2015 | 2:39:46 PM
Network now needs a permanent Proof of Concept lab
We're at the 'art of the possible' stage, rather than the 'start of the probable' regarding live deployment... which, if taken at face value, should wholly sidestep the heinous blunder of productionising a nine 5's infrastructure for a five 9's expectation.

As much as the IT proposition to a user involves all layers of the ISO stack, if you screw the network you've lost the farm, which then negates any major investment in state of the art delivery systems up the stack.

Any tech worth their weight will crash and burn this stuff in a Proof of Concept lab... and weld all the fork doors closed... as it'll be their 'nads on the line should it fail in production, not any IT upper hierarchy who decided to make an (unwise) executive decision.

Fail means *anything*, regardless of whether it's a SNAFU or a hack-attack vector that no-one tested.

In short, test to destruction, armour-plate the resultant design, and always have a PoC lab bubbling away in the background for the next iteration, because the genie is now out of the bottle and won't go back in.
mhhf1ve 12/8/2015 | 8:56:52 PM
Open source doesn't necessarily mean any support available...
It's always interesting to see how open source platforms still have gaping holes with major unsupported functions. Every fork/flavor of an open source project means more than a few potential gaps for interoperability with other forks.

 