NFV Tests & Trials

EXCLUSIVE! NFV Interop Evaluation Results

Test Setup and Coverage
This section describes the technical coverage, logical topologies and physical test bed, and explains what the 'Pass' entries in the test results matrix (see next section) actually mean.

(If you're desperate to see the results, please feel free to skip forward and come back later.)

When we started the NIA test program, a few factors quickly became obvious:

1. This is a big industry with a lot of new VNFs from many established vendors and startups. Asking all vendors to travel somewhere for some joint testing slots would not scale. We had to come up with a distributed, remote support scenario with scheduled test slots.

2. "Virtual" does not mean "interchangeable." Initially we thought about running all the tests on a common infrastructure. An expectation of NFV is the removal of hard-coded linkages between software and hardware and, consequently, the NFVi software should support a range of third-party hardware. Some vendors informed us this would be possible but it would be a separate testing topic that was outside the scope of this first testing phase. It should be noted that this open interoperability does add time and cost to testing and therefore, in practice, may limit the degree of openness deployed in networks. Using public cloud providers was not an option for our evaluation setup, not only because of hardware interoperability but also because none of them provided the networking infrastructure (NICs, switches, internal IP addressing) and flexible/secure connectivity we needed.

3. Reporting results transparently has always been a big asset of EANTC tests. To help the industry accelerate and to save service providers lab testing cycles, our results need to be documented in enough detail that they can be referenced clearly and reproduced by anybody, anywhere.

4. At this early stage of the industry, more time is actually required to 'bring up' VNFs -- that is, integrate them with individual NFVis -- than for the actual testing. The integration time ranged from a couple of hours to four weeks per VNF. This is a major factor; even exchanging the required information between the vendors and the test lab is not easy, as none of these descriptors are sufficiently standardized.

5. The human factor still governs all activities. Tests can be as virtual as possible, but the integration and troubleshooting support is still provided by humans. (Well, so we assume… we did not see all faces on Webex…) Support times need to be coordinated, cultural differences need to be taken into account, and so on. The evaluation involved people located in 11 countries and multiple time zones, adding delays and reducing available test time, which ultimately impacted the number of VNF combinations tested.

Phase 1 evaluation scope
This report covers the first campaign of the New IP Agency's NFVi-VNF interoperability program. We chose this topic because, in the NFV environment, virtual network functions are the components most likely to be supplied by multiple entities, and they all need to work with the deployed NFVi. In addition, there is already a sizeable set of options and combinations -- more than 300 VNFs and 20 NFVis are commercially available.

We focused the tests on VNF lifecycle management, as set out in the ETSI GS NFV-MAN 001 Appendix B, the well-known 'ETSI Phase 1 MANO' document. Our intent was not to test the individual applications for functionality but to evaluate the interoperability of the VNFs' management aspects with different NFVis.

Future NIA testing phases will focus on high availability, management and orchestration, fault management and other aspects of the NFV framework as soon as the industry is ready for multivendor interoperability testing in these areas.

Test bed topology
The test topology was fairly straightforward. The four NFVi vendors, our testing partner Ixia and EANTC set up the hardware in our lab in Berlin. Each NFVi vendor provided its own hardware (for the full details, see the appendix):

  • Alcatel-Lucent Cloudband came with a full OpenStack deployment (three control and eight compute nodes including local storage, plus a router for internal connectivity).
  • Cisco NFVi provided a full OpenStack deployment as well, including three redundant control nodes, three compute nodes, a triple-redundant storage system, two build nodes and a router plus a switch connecting the infrastructure internally.
  • Huawei provided FusionSphere with three hot-standby control nodes, one compute node with a lot of local storage, and a physical router for internal connection between servers.
  • Juniper provided one Contrail server which implemented the OpenStack environment using the virtual deployment option: one virtual control node, one virtual compute node, and a virtual router. This satisfied the minimum testing requirements in this phase.

The NIA Phase 1 NFV interoperability test bed at EANTC's headquarters in Berlin.

Test methodology
With each NFVi-VNF combination, we covered the following mandatory test cases (a minimal code sketch of these steps follows the list):

1. VNF Package on-boarding
Load the VNF package image into the NFVi using either the virtual infrastructure manager (VIM) or the VNF manager; verify that the loaded package is intact and contains all the required configurations, scripts and binary sources.

2. Standalone Instantiation
Instantiate a single VNF copy including resource allocation by the VIM, initiating the boot sequence and other configurations.

3. Graceful Termination
Validate that the NFVi releases resources upon graceful termination and that VNFs can be restarted without any further manual intervention required.

4. Forceful Termination
"Power-off" the VNF forcefully through the VIM; verify that all resources are released and related runtime entries cleared and that the VNF can be restarted subsequently.

5. VNF Package Deletion
Verify that the deletion of the VNF package removes all related files and configurations without affecting other packages.
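
As a concrete illustration of the mandatory cases, here is a minimal Python sketch that runs the same lifecycle against an OpenStack-based VIM using the openstacksdk client. It assumes, hypothetically, that the VNF package reduces to a single qcow2 disk image on-boarded directly through the VIM; the cloud, image, flavor and network names are placeholders, and real VNF packages typically carry descriptors and scripts beyond the bare image.

import openstack

conn = openstack.connect(cloud="nfvi-under-test")  # credentials come from clouds.yaml

# 1. VNF package on-boarding: load the VNF disk image into the VIM's image store.
image = conn.image.create_image(
    name="vnf-example",
    filename="vnf-example.qcow2",
    disk_format="qcow2",
    container_format="bare",
)

# 2. Standalone instantiation: allocate resources and boot a single VNF copy.
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("mgmt-net")
server = conn.compute.create_server(
    name="vnf-example-01",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)  # block until the instance is ACTIVE

# 3. Graceful termination: request a shutdown and let the VIM release resources.
conn.compute.stop_server(server)

# 4. Forceful termination: remove the instance outright through the VIM.
conn.compute.delete_server(server)
conn.compute.wait_for_delete(server)

# 5. VNF package deletion: remove the on-boarded image without affecting other packages.
conn.image.delete_image(image)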

In addition, there were the following ancillary test cases (again sketched in code after the list):

1. VNF Instance Modification
Verify that certain modifications of a running VNF instance trigger a notification event by the NFVi and that the VNF continues to function properly: Add one more virtual network interface; configure VLAN and IP for the newly added interface; add one more CPU core; retrieve current negotiation.

2. Restart of One Virtual Instance
Verify the effects of restarting one VNF instance during live operation of a second VNF of the same type; validate that the VNF instance gets restarted correctly while traffic through the second VNF is unaffected.

3. Persistence and Stability
Test how the VNF survives forced removal of a virtual port in use and subsequent reconnection of that virtual port. The test used two variants -- interface detach and port status update.
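
A corresponding sketch for the ancillary cases, again against an OpenStack VIM via openstacksdk and with hypothetical names: it hot-plugs an extra virtual interface (instance modification) and then exercises the two persistence variants, interface detach/reconnect and port administrative status toggling.

import openstack

conn = openstack.connect(cloud="nfvi-under-test")
server = conn.compute.find_server("vnf-example-01")
network = conn.network.find_network("extra-net")

# Instance modification: hot-plug one more virtual network interface.
# (Adding a CPU core would typically be a flavor resize via conn.compute.resize_server,
# omitted here for brevity.)
iface = conn.compute.create_server_interface(server, net_id=network.id)

# Persistence variant 1: forcibly detach the port in use, then reconnect on the same network.
conn.compute.delete_server_interface(iface, server)
iface = conn.compute.create_server_interface(server, net_id=network.id)

# Persistence variant 2: toggle the administrative status of the attached port.
port = conn.network.get_port(iface.port_id)
conn.network.update_port(port, admin_state_up=False)
conn.network.update_port(port, admin_state_up=True)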

Data Plane Verification (Traffic Configuration)
From the start, we wanted to verify the virtual switch data plane connectivity as part of the interoperability test campaign. The virtual switch is an important part of the NFVi; the correct connectivity of virtual ports with a VNF is fundamental to any network-related VNF's functions. When estimating likely areas of interoperability issues with OpenStack, we guessed that network functions would yield the highest rate of problems: Traditional enterprise cloud applications have been running on OpenStack for years. They are compute- and storage-heavy but have never cared much about versatile networking options (and performance, of course, but that is out of scope here).

We wanted to evaluate the vSwitch/VNF interoperability in as complete a way as possible, so we defined a network configuration by attaching the VNF to a physical tester port on one side (the "left network") and to a virtual tester port on the other side (the "right network").
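
In OpenStack/Neutron terms, that left/right arrangement roughly corresponds to one provider network bound to the physical tester port and one ordinary tenant network for the virtual tester port. The sketch below only illustrates the shape of such a configuration; the VLAN ID, physical network label and subnet ranges are invented examples, not the test bed's actual values.

import openstack

conn = openstack.connect(cloud="nfvi-under-test")

# "Left network": a provider (VLAN) network mapped onto the physical tester port.
left = conn.network.create_network(
    name="left-net",
    provider_network_type="vlan",
    provider_physical_network="physnet1",
    provider_segmentation_id=100,
)
conn.network.create_subnet(
    network_id=left.id, name="left-subnet", ip_version=4, cidr="192.0.2.0/24"
)

# "Right network": a plain tenant network hosting the virtual tester port.
right = conn.network.create_network(name="right-net")
conn.network.create_subnet(
    network_id=right.id, name="right-subnet", ip_version=4, cidr="198.51.100.0/24"
)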

Traffic configuration per VNF type

With help from Ixia, we configured and pre-staged the test setup in our lab. The physical port was quickly connected to a legacy Ixia Ethernet load generator port -- we did not require anything special, since this test focuses only on functional data plane aspects. In addition, the Ixia team deployed a centralized Chassis VM and a Client user interface VM (both currently running on Microsoft Windows), plus a virtual load module per NFVi (running Linux as the guest OS). The client user interface VM managed both the virtual and real chassis and sent bidirectional traffic when needed. We used two Ixia test tools -- IxNetwork for VMs that were based on IP traffic and IxLoad for VMs that required application-layer emulation.

All virtual routers, virtual firewalls and virtual DPI systems were tested with IP traffic. Most of the virtual EPC, IMS and SIP solutions were tested with Voice over IP (SIP) traffic, except for a few that were configured to just pass native IPv4 traffic. Due to the issues that some NFVis and VNFs have with IPv6 at this point, we deviated from our generic rule to always use dual-stack IPv4 and IPv6 data plane configurations. Instead, the whole test bed (shamefully!) used only IPv4 for VNFs and for management.

Next page: The Results Matrix

muzza2011 12/10/2015 | 2:39:46 PM
Network now needs a permanent Proof of Concept lab -- We're at the 'art of the possible' stage, rather than the 'start of the probable' regarding live deployment... which, if taken at face value, should wholly sidestep the heinous blunder of productionising a nine 5's infrastructure for a five 9's expectation.

As much as the IT proposition to a user involves all layers of the ISO stack, if you screw the network you've lost the farm, which then negates any major investment in state of the art delivery systems up the stack.

Any tech worth their weight will crash and burn this stuff in a Proof of Concept lab... and weld all the fork doors closed... as it'll be their 'nads on the line should it fail in production, not any IT upper hierarchy who decided to make an (unwise) executive decision.

Fail means *anything*, regardless of whether it's a SNAFU or a hack-attack vector that no one tested.

In short, test to destruction, armour-plate the resultant design, and always have a PoC lab bubbling away in the background for the next iteration, because the genie is now out of the bottle and won't go back in.
mhhf1ve 12/8/2015 | 8:56:52 PM
Open source doesn't necessarily mean any support available... It's always interesting to see how open source platforms still have gaping holes with major unsupported functions. Every fork/flavor of an open source project means more than a few potential gaps for interoperability with other forks.

 