
Virtual Benchmarking

5:05 PM -- I have spent a lot of time running benchmarks on wireless LANs, but almost always using small configurations. I once ran a test using 10 APs and 50 clients; just finding a suitable building to house all of this was a challenge. It took about three days to set everything up, verify the configurations, and get the results. We used IxChariot to run the tests; it is neither inexpensive, easy to use, nor optimized for testing wireless LANs. And, of course, we got results only for one static configuration in one venue, with whatever RF interference was present at the time of the test.

This experience led to an idea -- could we utilize electronic test equipment, of the type used by product engineers, to run large-scale benchmark tests in an isolated, repeatable environment? And would the results of such tests be indicative of what we would see in the real world?

Well, I finally did the first series of tests along these lines, and the results can be seen in the Farpoint Group Technical Note available here. We tested Aruba Networks Inc. (Nasdaq: ARUN) vs. Meru Networks Inc. in a number of dimensions, and the real-world results showed excellent correlation with those obtained from the test equipment, in this case a VeriWave Inc. WaveTest 90. While this first study is hardly definitive, I'm encouraged enough to dub the concept of testing in isolation "virtual benchmarking." And I think enterprises will eventually use this technique for head-to-head evaluations in place of real-world testing. Sure, the equipment involved needs to be usable by mere mortals, and the RF channel modeling needs to be more robust, especially as MIMO and .11n take center stage. But the potential savings in time and hard currency are more than inviting.

And, in case you’re wondering, Aruba kicked butt in the tests. Those guys continue to amaze me.

— Craig Mathias is Principal Analyst at the Farpoint Group, an advisory firm specializing in wireless communications and mobile computing. Special to Unstrung

lrmobile_djthomas 12/5/2012 | 3:34:45 AM
re: Virtual Benchmarking
Dear Mr. Mathias,

I have been an avid follower of yours for a while now, attending nearly all of the talks you give when I get to attend conferences. So, I was eager to read your note attached to this posting. But I think you need to explain yourself here.

This is clearly not your best work. It may be your worst. I was excited to read that you were able to test Meru Networks, a company I have heard a lot about but never had a chance to look at in person. And so I dug into your results with great pleasure.

But I saw that, on a single-AP, single-client test, Meru Networks got less than 10 megabits per second, whereas Aruba got 22, for 802.11g. I've seen products not do well before, but at 9 megabits per second you should have stopped your tests and asked Meru whether the AP was configured correctly. I can believe that some Meru results were worse than Aruba's, but not all of them. Meru publishes very different numbers, as did David from Network World--more in line with over 20 megabits per second.
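
To put a rough number on what single-AP, single-client 802.11g ought to deliver, here is a back-of-envelope sketch; the efficiency factor is purely my own assumption, not anything from your report:

# Back-of-envelope sanity check -- the efficiency factor below is my own
# rough assumption, not a figure from the Farpoint report or from any vendor.
phy_rate_mbps = 54.0        # 802.11g top PHY rate
mac_tcp_efficiency = 0.42   # rough fraction left after preambles, ACKs, contention, TCP/IP

expected_mbps = phy_rate_mbps * mac_tcp_efficiency
print(f"Expected single-client 802.11g throughput: ~{expected_mbps:.0f} Mbit/s")
# Prints roughly 23 Mbit/s -- right around Aruba's 22, and more than double the
# 9 or 10 reported for Meru, which is exactly why that number should have raised a flag.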

The rest of the test results show the same thing. Meru's results are consistently lower than anything I have ever heard about their product, by a wide margin. I don't think the test was done right.

But it did ring a bell. I got from an associate some test results about Meru that Aruba has been sharing recently. They show the same results. And I recall you saying before that you were borrowing Meru and Aruba gear from someone.

Did Aruba pay you for this test note? Did they arrange for the equipment to be borrowed? Were they present at the tests? Was Meru present at the tests? How do we know nothing was tampered with? It sure could have been.

It's unethical of you to have borrowed the equipment, rather than speak to the two companies themselves.

I just can't trust the results when they say that one major vendor's equipment is nearly 2.5 times better than the other's at the most basic setup. It's just too suspicious.

Now, in the end, I don't really care, because I bring all equipment in house for testing before I make my recommendations, and I won't screw up. But, it seems you did screw up, and I think you owe an explanation to those who respect you.

With regret,

Dave
wlanner 12/5/2012 | 3:34:44 AM
re: Virtual Benchmarking I wonder why Meru has never participated in a public review.... I guess we know why now. They are a one-trick pony "voice, voice, voice". Which is fine, but if they lose at voice, they have nothing to fall back on.

Regardless of Farpoint or Network World, both results are bad for Meru (and as pointed out in the other post, look at Network World's 1 AP test - Meru only comes up at the larger packet sizes).

And don't feel bad for them; they'll show you a Tolly report that has clearly been rigged to demonstrate they have the best specs since sliced bread, so they play the game too.

BTW - Not that I'm in Aruba's camp: Veriwave is basically a department of Aruba. They help craft the tests and make sure of the results, whether the tests are real-world or not. But at least Aruba knows what the results will be before doing a public review, which is more than Meru did.

wirelessfreak 12/5/2012 | 3:34:44 AM
re: Virtual Benchmarking Could it be that Farpoint tested the Meru AP200 and Network World tested the AP150?

Also, the Farpoint numbers are the average of all the packet sizes whereas the Network World test shows three different packet sizes.

I would guess that Meru, as with any vendor, publishes their best numbers (larger packet sizes).

Given that voice packets are usually smaller, it is interesting to see, from the Network World results, that the Meru numbers are so low.
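
To show how much the averaging matters, here is a quick sketch; the per-frame-size throughput figures are made up for illustration only, not taken from either test:

# Illustrative figures only -- these are assumptions, not numbers from the
# Farpoint or Network World results.
throughput_by_frame_size_mbps = {
    64:   4.0,   # small, voice-sized frames: per-frame overhead dominates
    512:  15.0,
    1518: 23.0,  # large frames: closest to a vendor's "best published" number
}

best = max(throughput_by_frame_size_mbps.values())
average = sum(throughput_by_frame_size_mbps.values()) / len(throughput_by_frame_size_mbps)
print(f"Best case: {best} Mbit/s, average across frame sizes: {average:.1f} Mbit/s")
# 23.0 versus about 14 -- quoting only the large-frame result will always look
# better than an average that includes the small frames voice traffic actually uses.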

I expect and hope to see each vendor publishing their own version of results based on Farpoint and/or Network World parameters to prove that they too can scale. Hopefully this means we all will have a little bit better common language for comparing gear.

And as Gabriel suggests in another thread - it would be interesting to see how the vendors/solutions integrate and align with various application providers (e.g. integration with various client remediation/NAC solutions would be very interesting)
farpoint 12/5/2012 | 3:34:43 AM
re: Virtual Benchmarking First of all, thank you for the note. You raise a lot of points, so let me be brief:

- I stand by the results. The configuration of the equipment was verified before I began. You may have heard about other results, but the ones I obtained are from direct testing as documented in the report.

- Aruba may be talking about my results because I make them publicly available. All of our reports have what I call an "open copyright", in that they are freely distributed and may be freely re-distributed. We retain ownership, but anyone can use them in any way they wish. I do not monitor where they are posted or how they are otherwise used.

- The primary purpose of the test was to compare real vs. virtual benchmarking techniques, and to determine if a benchmarking approach based on test equipment is valid. I think it likely is, as a result of this work.

- I also thought that using more than one vendor's equipment would be valuable here. I happened to have access to both Aruba and Meru gear. I have used both in the past - with similar results.

- We do not publish our client list. But I must make this absolutely clear: our opinion is not for sale. I have no financial or other interest in Aruba or Meru (nor VeriWave, for that matter). We are on retainer with no one, and have no long-term agreements with anyone. We own stock in no wireless firms, except perhaps through mutual funds or other managed accounts. In short, we are as independent as we can be. I have no vested interest in which firm or product performs best.

- I don't think borrowing equipment is unethical. All too often, we suspect equipment received directly from vendors has been optimized in some way. We prefer to test equipment already used in a production environment.

It's not at all unusual to receive messages such as yours after such a test. I accept this reality as part of the landscape of doing this kind of work. But it's pretty hard to fake benchmark results, especially when testing two different systems under two different sets of conditions.

BTW - the Network World tests used different Meru APs, which were pre-production models, so I obviously couldn't use those. I have little doubt that both Aruba and Meru will continue to improve their products, and I believe work such as that documented in the Tech Note will help both vendors in the future, as well as enterprise users - and that is the ultimate goal of our work here at Farpoint Group.

Again, thank you for the note.

Craig.
lrmobile_strungup 12/5/2012 | 3:34:43 AM
re: Virtual Benchmarking Farpointwned!
lrmobile_djthomas 12/5/2012 | 3:34:15 AM
re: Virtual Benchmarking Craig,

Thank you very much for your reply. I really do appreciate that you read these messages, and I think that speaks very well of you.

I apologize for being so forward with the first message, but do let me tell you why I think you need to take a second look at your test.

I don't know much about Meru, and I have never had a chance to play with their gear. Because I never have, I am probably also skeptical. But I did do a little bit of research for this message.

First, I do have to take strong issue with your idea that you can safely borrow gear and perform real tests with it. Perhaps that was true in the switching days, because those boxes performed very few functions. But wireless is far more complicated, and there are many ways to set up a device so that it works, but not well, all of which come down to operator error. For example, you could set up one AP in 802.11bg mode but with only the lower data rates supported, and set the other up with only 54 Mbps supported, in 802.11g-only mode. How can you testify that you set the boxes up correctly?
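
Just to show how easily a box that "works" can still be crippled, here is a hypothetical sketch; the efficiency factor is my own rough assumption and the rate cap is the kind of operator error I describe above, not anything observed in your test:

# Hypothetical illustration -- the efficiency factor is my assumption, and the
# 11 Mbps cap is an example of operator error, not an observed configuration.
mac_tcp_efficiency = 0.42   # rough fraction of PHY rate left for application data

for label, phy_rate_mbps in [("capped at 802.11b rates (misconfigured)", 11.0),
                             ("54 Mbps allowed (configured correctly)", 54.0)]:
    print(f"{label}: ~{phy_rate_mbps * mac_tcp_efficiency:.0f} Mbit/s")
# A box that associates and passes traffic just fine can still be held to single
# digits by a setting like this, and the result looks exactly like a bad product.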

Moreover, saying that you set them up with defaults is not an excuse. I could easily imagine that one product's defaults might favor backwards compatibility and sacrifice performance out of the box, while another product's might be set up for maximum performance, backwards compatibility be damned. Since no wireless product is plug-and-play, and all of them require some administration, even if that is just setting up addresses and SSIDs, I don't think that's remotely fair.

Finally, I did some digging on VeriWave. They are apparently a cabled RF testing solution, meaning that you unscrew an antenna and connect the AP directly into the RF path of the test device. Thus, you _must_ worry about attenuation and oversaturating the radios, as anyone who has worked with cabled RF knows. Now, you probably didn't back the power off on the transmitters, and you would need to back them off significantly.

And so I thought about it, and I think that may have been your problem. I have no way of knowing for sure, but I do have years of RF experience behind me, especially from my defense technology days. I would bet you any amount of money that no vendor--not Meru, not Aruba, not Cisco, not Trapeze, not Symbol, not Extricom, no one--sells an AP that can only get 9 or 10 megabits per second with one client. But I did do some research on what's different between Aruba and Meru. Meru uses this Virtual Cell approach, where the transmit powers are set to maximum (20 dBm) and other techniques keep the APs from interfering with one another. Aruba uses microcells and would have had its power levels backed off quite a bit...I would reckon 12 dBm, but I don't know for sure...to make the microcell design work out of the box. So Meru may have saturated VeriWave's radios in a cabled environment...or even a near-field antenna environment...and Aruba's would not have. But you could make a simple change to either box to make them trade places. And that saturation--from power levels that are a valid and expected part of an 802.11 RF design--would explain the results you saw, which are so uniformly bad.
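
Here is the kind of quick link-budget check I have in mind; every figure below -- transmit powers, fixed attenuation, saturation point -- is a guess on my part, just to show the arithmetic:

# Rough link-budget sketch -- the power levels, attenuation, and saturation
# threshold are all my own guesses, not measured values from either system.
def received_power_dbm(tx_power_dbm, path_loss_db):
    # Power arriving at the test set's receiver over a cabled or near-field path.
    return tx_power_dbm - path_loss_db

saturation_threshold_dbm = -20.0  # assumed level where the receiver starts to compress
fixed_attenuation_db = 35.0       # assumed loss of cables plus attenuators

for setup, tx_dbm in [("max power (Virtual Cell style)", 20.0),
                      ("backed-off power (microcell style)", 12.0)]:
    rx = received_power_dbm(tx_dbm, fixed_attenuation_db)
    status = "SATURATED" if rx > saturation_threshold_dbm else "ok"
    print(f"{setup}: Rx {rx:.0f} dBm -> {status}")
# With these guesses the full-power AP lands 5 dB above the saturation point and
# the backed-off AP lands 3 dB below it -- a swing that has nothing to do with
# which AP is actually better.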

Now, I'm speculating. But can you assure me, and us, that you did check for this possibility? It would explain the difference, it would need to be checked for anyway, and it is far more likely than any vendor selling bad APs. I don't think you can make this assurance.

So here's why I am tough on you. What if you are wrong? What if you made a major oversight? Then what good are you doing me and your other readers/listeners when you stake your reputation on an article based on borrowed gear and "do it yourself" setups, using test gear that you admitted you didn't fully understand? Maybe we learned a lot more about the test tool...perhaps Aruba optimized their product for a flawed test tool and Meru didn't, or perhaps you saturated the receiver...than we did about the end products.

Thanks

Dave
farpoint 12/5/2012 | 3:34:13 AM
re: Virtual Benchmarking No need for an apology. You ask very legitimate questions, and they are more than appropriate given the nature of the test. Two comments:

- First, we use the default settings (often changing only the SSID) because, in our conversations with the equipment vendors over the years, they have always said that they ship their products with settings that are as optimal as possible for the general case. There are also, as you know, hundreds of possible settings and tweaks, depending upon the product, and quite literally thousands (if not more) of combinations that one could try. Note also that we don't do a lot of tweaking because most enterprise IT departments don't, and I'd be very suspicious of any product that required a visit from the vendor to tweak manually for best performance in a given setting - the system should do fine-tuning by itself, and the IT person can follow up with only minimal training and experience.

We weren't trying, however, to optimize each system or produce *the* definitive result (because there may not be one), but rather to compare two systems under two different circumstances, carefully controlling each test. Can Meru yield a better result? Perhaps. But not under the test conditions we defined.

- We did turn the power output on all APs all the way down. I didn't mention this (perhaps I should have) because it's a technical fine point and anyone using the Veriwave equipment would learn about this requirement on their own.

Again, the results speak for themselves, and I stand by them. Your mileage might vary, but the test results are what they are in this case. I have no allegiance to either vendor, and my purpose in doing this work was only to show that virtual benchmarking has real potential and is worth further study. I remain skeptical of the single-channel deployment model regardless.

And of course I always worry about being wrong. It was two full days of work to produce these results, but only a few minutes of actual run time. We set up, check, double-check, run, check the results, and then run the process again. Can I make a mistake? Of course! And I will most certainly publish a correction should that occur.

Again, thank you for the note.

Craig.
gowireless 12/5/2012 | 3:34:08 AM
re: Virtual Benchmarking Craig,

Let's be very clear here:

How much did Aruba pay you, or your company, to conduct this test?

Thanks,

Go
gowireless 12/5/2012 | 3:34:08 AM
re: Virtual Benchmarking Everyone, except maybe for Meru, knows that the only way to support the Veriwave test equipment is to do PCF. Nobody would do this in the real world, but Aruba is having fun doing this in test labs with Veriwave, and beating everyone by doing it.

I would love to see an Aruba vs. Meru test with real phones in a real environment, but I guess Aruba wouldn't be willing, since they are having so much fun in a make-believe world.