
What Analysts Do

3:15 PM -- Some of the feedback I’ve received, both directly and indirectly, from some of the firms whose products we evaluated in our “draft n” benchmark tests has motivated me to state what should be obvious but, sadly, is not always so today. Analysts like me are experts in a particular area of technology; we live and breathe our subjects with a passion that’s hard to explain to anyone who’s not a total nerd (a nerd, by the way, is a geek with at least some social skills; that’s me). We’re experts because we’ve made every conceivable mistake in our chosen areas and learned at least enough not to make them again.

The tests I recently ran illustrated, to me anyway, that the so-called “draft n” 802.11 products I tested didn’t perform as well as at least one non-draft-compliant MIMO-based product that’s been on the market for a while. (See Draft What?) They also gave me the opportunity to rant once again against the whole “draft n” concept, which I consider misleading at best and otherwise just plain wrong. I’ve explained my reasoning in the report, and I won’t cover all that again here. Anyway, that’s analysis.

But I do, sadly, need to state here unequivocally that the opinions in anything I publish are mine and mine alone. No ethical analyst (read: no analyst worthy of the name) would tailor results from experiments in any way that deviates from the reality revealed by a given benchmark or other quantifiable exercise (i.e., the truth). I occasionally read about scientific fraud in a variety of fields, and one example from years ago hit very close to home. A doctor who treated me in high school, a world-famous dermatologist, claimed to have solved the rejection problem in skin transplants, in mice, anyway. What he had actually done, though, was paint a black patch on a white mouse with a magic marker. Pretty stupid, huh? Especially for an otherwise smart guy (I hope so, anyway, said the patient).

As you’ve seen, I carefully document the test conditions and results of any benchmark project, and use techniques like turntables and spectrum analyzers that other people usually don’t. I welcome criticism of the process and the results, and will provide assistance at no charge to anyone attempting similar tests. But I will not put up with sour-grapes criticism of my motivations or ethics, at least not without proof. No ethical person would.

— Craig Mathias is Principal Analyst at the Farpoint Group, an advisory firm specializing in wireless communications and mobile computing. Special to Unstrung

Matus 12/5/2012 | 3:55:11 AM
re: What Analysts Do

Hello Craig,

I read your technical note on MIMO benchmarking. Overall I largely agree with your conclusions, but I came across several points I can't agree with:

1) Rotating turntables to combat fading - I don't think this helps at all, and it introduces more variables into the environment. Fading will still be present, and with even higher variation. And if you take into account that MIMO basically works well when "the conditions are bad" - i.e., when fading creates independent paths - while a dominant LOS path degrades its performance, it would be counterproductive to try to combat fading. (See the sketch after this list.)

2) PC power settings - if you state "we believe" (page 3) in this kind of paper and don't have proof (i.e., you did not test the impact of the power settings), it sounds quite fluffy.

3) Security settings - to have the same conditions, you should have disabled encryption for all tested products.
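
To make point 1 concrete, here is a rough Python sketch - my own illustration, not anything from the report; the 2x2 antenna configuration, 20 dB SNR, and Rician K-factor of 10 are all assumed values - comparing the average Shannon capacity of a MIMO channel under Rayleigh fading (rich scattering, independent paths) with a strongly Rician channel (dominant LOS):

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """Shannon capacity (bits/s/Hz) of a MIMO channel, equal power per antenna."""
    nr, nt = H.shape
    M = np.eye(nr) + (snr_linear / nt) * (H @ H.conj().T)
    _, logdet = np.linalg.slogdet(M)
    return logdet / np.log(2)

rng = np.random.default_rng(0)
nr, nt = 2, 2               # 2x2 MIMO; an assumption for illustration
snr = 10 ** (20 / 10)       # 20 dB SNR, also assumed
K = 10.0                    # Rician K-factor: strong LOS component
trials = 10_000

H_los = np.ones((nr, nt), dtype=complex)   # idealized rank-1 LOS matrix
cap_nlos, cap_los = [], []
for _ in range(trials):
    # Scattered (Rayleigh) component: i.i.d. complex Gaussian entries
    Hw = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    cap_nlos.append(mimo_capacity(Hw, snr))
    # Rician channel: fixed LOS part plus scattered part, power-normalized
    Hr = np.sqrt(K / (K + 1)) * H_los + np.sqrt(1 / (K + 1)) * Hw
    cap_los.append(mimo_capacity(Hr, snr))

print(f"Rayleigh (NLOS, rich scattering): {np.mean(cap_nlos):.2f} bits/s/Hz")
print(f"Rician K={K:.0f} (dominant LOS):      {np.mean(cap_los):.2f} bits/s/Hz")
```

With a rank-1 LOS component dominating, the channel loses the independent spatial paths that multiplexing depends on, so the average capacity comes out lower than in the pure-fading case, even at the same SNR.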

Matus
farpoint 12/5/2012 | 3:55:06 AM
re: What Analysts Do

First, thanks for your note. WRT your specific questions:

1. Turntables factor out the possibility that a given computer sits in a suboptimal location WRT antenna orientation for the duration of the test while another does not. They provide test conditions as close to identical as are possible in this case (a rough sketch of the idea follows this list).

2. Good point; I'm not an expert on Intel/Microsoft power management, but I fall back here on the fact that all products were tested under exactly the same conditions WRT power. There are so many variables in benchmarking, regardless, that a (large) number of configuration possibilities just can't be tested given budget and time constraints.

3. I don't think any WLAN should be operated with security turned off, ever. WPA, at a minimum, should be required. We couldn't test security in some cases because of problems with a specific product. This should, if anything, have given that product an advantage in throughput.
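
Here's the sketch promised under point 1. Every number in it, including the cosine-shaped orientation response, is purely hypothetical; none of this is data from our report. It compares the run-to-run spread of a client left at one random heading with one averaged over a full turntable rotation:

```python
import numpy as np

rng = np.random.default_rng(1)

def throughput(angle_rad, peak=100.0, depth=0.4, noise_std=2.0):
    """Hypothetical throughput (Mbit/s) vs. client orientation.

    A simple orientation-dependent antenna response plus measurement
    noise; illustrative only, not measured benchmark data.
    """
    gain = 1.0 - depth * (1.0 - np.cos(angle_rad)) / 2.0
    return peak * gain + rng.normal(0.0, noise_std)

# Fixed placement: whatever heading the client happened to land on.
fixed = [throughput(rng.uniform(0, 2 * np.pi)) for _ in range(20)]

# Turntable: sample uniformly over a full rotation and average.
angles = np.linspace(0, 2 * np.pi, 36, endpoint=False)
rotated = [np.mean([throughput(a) for a in angles]) for _ in range(20)]

print(f"fixed placement  : mean {np.mean(fixed):6.1f} Mbit/s, spread {np.std(fixed):.1f}")
print(f"turntable average: mean {np.mean(rotated):6.1f} Mbit/s, spread {np.std(rotated):.1f}")
```

A client parked at an unlucky heading can sit below the mean for an entire run; averaging over a full rotation gives every product the same effective orientation, which is exactly what the turntable is for.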

Thanks again for writing.

Craig.