The Results Matrix
The results matrix is the heart of this report. It documents which combinations of NFVis and VNFs were tested successfully. For a combination to pass, all mandatory (critical) test cases had to be executed and passed:
- VNF package on-boarding
- VNF instantiation
- VNF forceful and graceful termination
- VNF package deletion
In addition, there were a number of ancillary test cases -- such as multiple-instance co-existence and modification of vSwitch parameters during VNF operation -- whose results are not reported in this matrix but which are discussed in the next section (Interoperability Findings and Challenges).
Given the large number of potential test combinations (4 x 17 = 68) and the limited testing time of only two months, EANTC did not aim for a full-mesh test campaign, which we knew could not be completed in time. Instead, we aimed to complete two test combinations per VNF, which worked out in some cases but not in others, often due to integration, configuration, or troubleshooting delays (or simply a lack of EANTC test slots). Any cell left empty in the matrix below corresponds, in almost all cases, to a combination that was not tested. Fundamental interoperability issues are reported in the next section.
In total, we planned for 39 test combinations: 25 passed, six ultimately failed, seven were postponed due to configuration, support, or troubleshooting issues, and one was still in progress at the report deadline.
The usual question at this point is: why does EANTC not report failed test combinations individually? That would not serve our target audience, for two key reasons. First, vendors that made the effort to participate should not be punished -- they will all go back to their labs and immediately improve their implementations with the goal of returning to pass the tests next time. The vendors that deserve to be called out are those that do not invest in multivendor interoperability. As a reader, feel free to inquire with the companies that opted not to participate in this (or the next) round of testing.
The second reason for not publishing failures is that we want to conduct tests using the most advanced implementations. Experience has shown that vendors bring only their most proven, legacy solutions if there is a plan to publicly identify failures. Obviously, that would be counter-productive for the industry, especially in an NFV context. Our tests can only help the industry progress if we test the very latest implementations. That is why we safeguard the testing environment with non-disclosure agreements, ensuring that failures are permissible and even encouraged, because failures help vendors and the industry to progress.
Participating vendors are permitted to review the test report to improve its quality and accuracy, but they are not permitted to request that only selected results be published. Of course, vendors can opt out of the results publication completely, in which case their name is mentioned without any associated products or results.
All VNF vendors were asked to select NFVis to test with at the beginning of the test campaign, on a first-come, first-served basis. Vendors that signed their contracts and assembled their support teams early had an advantage. Some combinations were assigned by EANTC, disregarding vendor preferences, to balance the test bed.
We decided not to test interoperability of a VNF with an NFVi from the same vendor: while there is some merit to completing the table, interoperability issues are much less likely when testing solutions from a single vendor. We declined any such test requests; the corresponding cells are marked as N/A.
Due to shipping logistics, Huawei's FusionSphere was available for only 40% of the testing time, which explains the small number of combinations tested.
Next page: Interoperability Findings and Challenges