NFV Tests & Trials

EXCLUSIVE! NFV Interop Evaluation Results

The Results Matrix
The results matrix is the heart of this report. It records which combinations of NFVis and VNFs were tested successfully. For a combination to count as successful, all mandatory (critical) test cases had to be executed and passed (a sketch of the sequence follows the list):

  • VNF package on-boarding
  • VNF instantiation
  • VNF forceful and graceful termination
  • VNF package deletion

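To make the order of operations concrete, the sketch below walks through the four mandatory lifecycle steps against a stand-in client object. This is a minimal illustration only: the FakeNfviClient class, its method names and the state strings are hypothetical placeholders and do not correspond to any NFVi, VNFM or test tool used in this campaign.

# Minimal sketch of the four mandatory lifecycle checks against a
# hypothetical NFVi/VNFM client. FakeNfviClient is an in-memory
# stand-in so the sequence can be exercised end to end; it does not
# model any product tested in this report.
import uuid


class FakeNfviClient:
    """In-memory stand-in for an NFVi/VNFM northbound API (illustrative only)."""

    def __init__(self):
        self.packages = {}   # package_id -> package path
        self.instances = {}  # instance_id -> state

    def onboard_package(self, package_path):
        pkg_id = str(uuid.uuid4())
        self.packages[pkg_id] = package_path
        return pkg_id

    def instantiate(self, package_id):
        assert package_id in self.packages, "package must be on-boarded first"
        vnf_id = str(uuid.uuid4())
        self.instances[vnf_id] = "INSTANTIATED"
        return vnf_id

    def get_state(self, vnf_id):
        return self.instances[vnf_id]

    def terminate(self, vnf_id, graceful=True):
        # A real client would drain traffic for a graceful stop;
        # here we only record the resulting state.
        self.instances[vnf_id] = "TERMINATED"

    def delete_package(self, package_id):
        del self.packages[package_id]


def run_mandatory_tests(client, package_path):
    """Run the four mandatory (critical) test cases in order."""
    pkg = client.onboard_package(package_path)     # 1. VNF package on-boarding

    vnf = client.instantiate(pkg)                  # 2. VNF instantiation
    assert client.get_state(vnf) == "INSTANTIATED"

    client.terminate(vnf, graceful=True)           # 3a. graceful termination
    assert client.get_state(vnf) == "TERMINATED"

    vnf = client.instantiate(pkg)
    client.terminate(vnf, graceful=False)          # 3b. forceful termination

    client.delete_package(pkg)                     # 4. VNF package deletion
    return "PASS"


if __name__ == "__main__":
    print(run_mandatory_tests(FakeNfviClient(), "example-vnf.zip"))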
In addition, there were a number of ancillary test cases -- multiple instance co-existence, modification of vSwitch parameters during VNF operation -- whose results are not reported in this matrix but which are discussed in the next section (Interoperability Findings and Challenges).

Given the large number of potential test combinations (4 x 17 = 68) and the limited testing time of only two months, EANTC did not aim for a full-mesh test campaign, as we knew it would not be possible to complete. Instead, we aimed to complete two test combinations per VNF, which worked out in some cases but not in others, often due to integration, configuration or troubleshooting delays (or simply a lack of EANTC test slots). Any cell left empty in the matrix below corresponds, in almost all cases, to a combination that was not tested. Fundamental interoperability issues are reported in the next section.

In total we planned for 39 test combinations, of which 25 passed, six ultimately failed, seven were postponed due to configuration, support or troubleshooting issues, and one was still in progress at the time of the report deadline.

Of the 39 tests, 25 passed (64%)
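As a quick sanity check on the tally above, the snippet below reproduces the totals and the pass rate. The variable names are illustrative; the figures come from the report text.

# Reproduce the reported tally and pass rate (figures from the report text).
planned = 39
passed, failed, postponed, in_progress = 25, 6, 7, 1

assert passed + failed + postponed + in_progress == planned
print(f"pass rate: {passed / planned:.0%}")  # -> 64%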

The usual question at this point is: Why does EANTC not report failed test combinations individually? Well, that wouldn't serve our target audience, for two key reasons. First, vendors that made the effort to participate should not be punished -- they will all go back to their labs and immediately improve their implementations with the goal of coming back to pass the tests next time. The vendors that should be called out are those that do not invest in multivendor interoperability. As a reader, feel free to inquire with the companies that opted not to participate in this (or the next) round of testing.

The second reason for not publishing failures is that we want to conduct tests using the most advanced implementations. Experience has shown that vendors bring only their most proven, legacy solutions if there is a plan to publicly identify failures. Obviously that would be counter-productive and would not make sense for the industry, especially in an NFV context. Our tests can only help the industry progress if we are testing the very latest implementations. That's why we safeguard the testing environment with non-disclosure agreements, making sure failures are permissible and even encouraged, because failures help vendors and the industry to progress.

Participating vendors are permitted to review the test report to improve its quality and accuracy, but they are not permitted to request selective publication of specific results only. Of course, vendors can ask to opt out of the results publication completely, in which case their name would be mentioned without any products or results associated.

All VNF vendors were asked to select NFVis to test with at the beginning of the test campaign, on a first-come, first-served basis. Vendors that signed their contracts and assembled their support teams early had an advantage. Some combinations were assigned by EANTC regardless of vendor preferences, to balance the test bed.

We decided not to test interoperability of a VNF with an NFVi from the same vendor: while there is some merit to completing the table, we would expect a much lower likelihood of interoperability issues when testing solutions from a single vendor. We declined any such test requests; the corresponding cells are marked as N/A.

Due to shipping logistics, Huawei's FusionSphere was available for only 40% of the testing time, which explains the small number of combinations tested.

Next page: Interoperability Findings and Challenges

muzza2011 12/10/2015 | 2:39:46 PM
Network now needs a permanent Proof of Concept lab
We're at the 'art of the possible' stage, rather than the 'start of the probable' regarding live deployment... which, if taken at face value, should wholly sidestep the heinous blunder of productionising a nine 5's infrastructure for a five 9's expectation.

As much as the IT proposition to a user involves all layers of the ISO stack, if you screw the network you've lost the farm, which then negates any major investment in state of the art delivery systems up the stack.

Any tech worth their weight will crash and burn this stuff in a Proof of Concept lab... and weld all the fork doors closed... as it'll be their 'nads on the line should it fail in production, not any IT upper hierarchy who decided to make an (unwise) executive decision.

Fail means *anything*, regardless of whether it's a SNAFU or a hack-attack vector that no one tested.

In short, test to destruction, armour-plate the resultant design, and always have a PoC lab bubbling away in the background for the next iteration, because the genie is now out of the bottle and won't go back in.
mhhf1ve 12/8/2015 | 8:56:52 PM
Open source doesn't necessarily mean any support available...
It's always interesting to see how open source platforms still have gaping holes with major unsupported functions. Every fork/flavor of an open source project means more than a few potential gaps for interoperability with other forks.

 