You cannot test everything. Exercising every protocol, every speed, every combination of users and every security feature -- one at a time -- would take forever. Most products are tested with each critical feature in isolation, not in combination. And when vendors test the interfaces between their products and others', they verify only the interface functionality.
There are dozens upon dozens of network security tools, monitoring and analytics tools, performance optimization tools and more. Every enterprise network differs in which tools it integrates and how they are configured, but all face the same issue of complexity. And the steady introduction of new devices into corporate networks only exacerbates the problem.
With this in mind, there is no substitute for testing under the most realistic conditions. Testing under nominal load conditions without including a diverse set of the latest threat conditions can leave vulnerabilities undiscovered. And a downed network is a costly one: unplanned data center outages caused by a choke point or a faulty device can cost nearly $9,000 a minute.
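To put that per-minute figure in perspective, a quick back-of-the-envelope calculation shows how fast the losses compound. The helper below is a hypothetical illustration using the ~$9,000/minute estimate cited above; it is not a formula from any outage-cost study.

```python
# Back-of-the-envelope outage cost using the ~$9,000/minute figure
# cited in the article (an approximate industry estimate).
COST_PER_MINUTE = 9_000  # USD

def outage_cost(minutes: float) -> float:
    """Total cost in USD of an unplanned outage lasting `minutes`."""
    return COST_PER_MINUTE * minutes

if __name__ == "__main__":
    # A single one-hour outage already exceeds half a million dollars.
    print(outage_cost(60))   # 540000
```

Even an outage measured in minutes, not hours, can dwarf the cost of the lab testing that might have prevented it.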
So how do we find the problem and fix it quickly?
The importance of testing
Network architects spend a lot of time planning before implementing. They design the perfect deployment. But the reality is that it's impossible to know if a point of failure or crippling attack is just around the corner. This is especially true for networks that have evolved into an assortment of technologies old and new, as organizations incrementally modernize their infrastructures.
Smart architects build a lab staging environment where they can test the end-to-end design. But many do not go as far as they should: they test under average load conditions, or simulate a few sets of basic attacks, pass, and declare victory. Ensuring resiliency and security down the road takes a wide range of realistic attack flows: emulating large-scale DDoS attacks or the latest malware, and mixing that in with HTTP traffic, packetized voice traffic and streaming traffic. The more realistic your load conditions, the more confidence you can have in your results.
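One way to make such a blended test concrete is to start from a weighted traffic-mix plan. The sketch below is a minimal illustration of that idea; the class names, percentages and 10 Gbps target are invented assumptions, not recommendations from the article or parameters of any particular test tool.

```python
# Minimal sketch of a weighted traffic-mix plan for a staging-lab load
# test: legitimate application traffic blended with attack flows.
# All mix values and the target rate are illustrative assumptions.

TARGET_GBPS = 10.0  # aggregate load to offer the device under test

TRAFFIC_MIX = {
    "http": 0.50,           # web / API traffic
    "voice_rtp": 0.15,      # packetized voice
    "streaming": 0.25,      # video streaming
    "ddos_synflood": 0.07,  # volumetric attack traffic
    "malware": 0.03,        # recent malware traffic samples
}

def plan(target_gbps: float, mix: dict) -> dict:
    """Return the bandwidth (Gbps) to offer for each traffic class."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "mix must sum to 100%"
    return {name: round(target_gbps * share, 3) for name, share in mix.items()}

if __name__ == "__main__":
    for name, gbps in plan(TARGET_GBPS, TRAFFIC_MIX).items():
        print(f"{name:>14}: {gbps} Gbps")
```

The point of planning the mix explicitly is that the attack traffic arrives alongside realistic background load, so the device under test fails (or holds) the way it would in production rather than in an artificially quiet lab.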
Fortunately, running these kinds of tests is a lot simpler than you might think.
How network equipment manufacturers find their breaking point
All network equipment manufacturers test comprehensively. That does not mean you can skip testing in your own architecture, since you will often be integrating their equipment in ways they never envisioned. You should test for yourself and ask the same questions they do.
Enterprises have access to the same test resources that network equipment manufacturers use; it is a matter of scaling them properly. Learn how to leverage those same resources and make sure your architectures are resilient.
Learning from the results
Ultimately, more realistic testing results in better day-to-day IT operations and, in turn, greater efficiency and network resiliency (read: lower opex). Cyber attacks and IT problems can happen to any business, and much of that is out of your control. But you can prepare. You can test, find weak spots and have a plan to address related issues as they arise. You may not be able to test every condition and scenario, but testing under realistic conditions is a good start. The network landscape is constantly changing -- from new equipment additions to updates and patches to existing equipment -- so testing using real-world scenarios has become more important than ever.
— Jeff Harris, VP, Solutions Marketing, Ixia