Spirent Rises to Cloud Testing Challenge
Cloud computing, with its use of server virtualization in the data center, is posing major new challenges for the testing and measurement of service performance that service providers and enterprises alike need to address, according to one major provider of T&M gear.
Spirent Communications plc this week released a cloud computing testing methodology that the company says is able to validate performance, availability, security, and scalability for cloud computing services. Because those services are based on the sharing of physical resources in new ways -- as part of the virtualization process that enables many different virtual servers to share a physical server and corresponding network interfaces -- some fundamental testing needs to be reinstated, says Tim Jefferson, general manager of Spirent's converged core solutions.
"There is a lot of excitement about cloud computing, and the enterprise IT guys are being seduced by its economics," Jefferson notes. "What we are seeing as a test vendor is the things those IT guys took for granted in terms of network performance now are being thrown up in the air again. All the basic fundamentals of network performance are now back in play."
Instead of the rock-solid performance of physical switches and routers, established over years of testing and standardization around Layer 2 and Layer 3 performance metrics, enterprises today are monitoring the performance of applications on "invisible virtual switches, which are using the same shared resources as other virtual machines," says Jefferson.
In tests conducted with the European Advanced Networking Test Center AG (EANTC), Spirent measured the performance of a wide range of public cloud service providers, including Amazon.com Inc. (Nasdaq: AMZN), BlueLock LLC, Hosting.com, GoGrid Cloud Hosting, MaximumASP LLC, and Rackspace.
The tests showed that the services were not comparable in performance or price: some struggled with scalability, showing a high number of unsuccessful HTTPS transactions (Amazon and Rackspace), while others scaled better but had much poorer response times (BlueLock and Hosting.com).
Either problem is a serious concern for a cloud service, Jefferson says. Introducing more errors as a service scales obviously doesn't work, but slowing response times has equally unacceptable consequences for banks, financial institutions, and other enterprises with mission-critical applications.
What's of greater concern, in Jefferson's view, is that test results varied widely with any change in conditions and were difficult to predict. Since service providers such as AT&T Inc. (NYSE: T), Verizon Enterprise Solutions, Global Crossing (Nasdaq: GLBC), and others are moving aggressively into cloud computing services, they are going to be confronting challenges similar to those faced by the companies tested by Spirent and the EANTC, he predicts.
"When you test today, for Layer 2 or Layer 3 network performance, you can get the same results based on the same conditions. In virtual environments, you get wildly different conditions, because of the shared resources issues."
What Spirent is offering in its holistic test methodology for cloud computing is a test of what it calls "PASS," for performance, availability, security, and scalability.
Spirent's tools are aimed at both enterprises and service providers, and the firm conducted its tests with a focus on the needs of both, according to Jefferson. Service providers need to be able to build service-level agreements around cloud computing performance metrics, he says, and thus need to be able to ensure the performance of their services during peak times.
"We think these are the only tools to give them the ability to diagnose where the problem is at, when the switch, the firewall, the machine are all virtual. We have designed the tools to go deeper and diagnose precisely what the problem is."
— Carol Wilson, Chief Editor, Events, Light Reading