So It's Not Just Me Then...
AsiaInfo's Dr Andy Tiller finds he's not the only one with an axe to grind over Gartner's Magic Quadrant processes.
December 21, 2015
A little while ago, I met with Light Reading's Ray Le Maistre and went on the record with my feelings about Gartner's latest Magic Quadrant for Integrated Revenue and Customer Management (IRCM) for CSPs. When I read Ray's resulting article, I could see it was an accurate reflection of our discussion, but I was a bit concerned that others would regard it simply as the sour grapes rant of a bitter man. (See Magic Quadrant or Gartner 'Graft'?)
Fortunately for me, it seems I was wrong to be worried. Since Ray's piece was published, I've received many expressions of support, along with exhortations such as: "At last someone has spoken out." Although few people have gone public with their comments on the Light Reading site, Ray tells me he too has received a number of private reader reactions (email and verbal), universally supportive of AsiaInfo's position.
My rant (and I accept it probably was that) was born of frustration with Gartner's Magic Quadrant (MQ) process. At least in my assessment, the process lacks the rigorous analysis that you would expect from Gartner, especially given the influence that the MQ apparently has in the market. The process, I believe, allows the MQ analysts to make critical judgements without doing the necessary research.
For the past two years I've raised my concerns with Gartner's ombudsman, an internal role that investigates and responds to complaints about the MQ. Having got nowhere with this, I thought I would share some of my proposals to reform the process, along with the ombudsman's responses.
1. Requirement for proactive research
My first suggestion is that the MQ process should require the analysts to engage in proactive research. Currently, the process does not require analysts to be proactive about gathering the information they need before making judgements.
For example, the MQ analysts gave AsiaInfo a negative score for "lacking a comprehensive strategy blueprint." However, this was not something we were able to provide as part of the RFI process, and therefore the analysts had not seen it. The ombudsman's response was that "this is a reference to how a provider has articulated its strategy throughout all of its submitted materials, any briefings it may have had with analysts, and the analysts' knowledge of the market." Since the "submitted materials" for the MQ consist of a simple spreadsheet questionnaire, presumably Gartner relies on briefings to understand each vendor's strategy. This is the heart of the trouble: it is very difficult to get time and attention from the MQ analysts unless you are an important Gartner client or you pay the MQ analysts to provide market strategy consulting.
So my suggestion is that the MQ process should place more responsibility on the analysts to seek out the evidence they need -- if they give credit to vendors they know well (i.e., Gartner clients), then they should check whether other vendors can provide similar evidence and then give them the same credit.
2. Reconsider the weightings applied to different criteria
Based on insights from my exchanges with the ombudsman and the MQ analysts, it seems to me that some of the weighting factors used in the MQ introduce a bias. Comparing AsiaInfo to ZTE, for example, more weight has been given to the number of customer references than to the significance and scale of those customers. Similarly, more weight has been given to percentage revenue growth than to absolute revenue growth.
When I suggested that Gartner should reconsider whether the weightings are fair to all vendors, the ombudsman replied that "the criteria chosen and the weights assigned are functions of end-user client need and not devices to level the playing field for providers." That seems to suggest that Gartner adjusts the weightings to get the results their clients want!
3. Conduct local language reference interviews
Gartner invites each vendor to provide customer references using a 30-minute English language online questionnaire. This provides limited insight, especially from customers who don't speak English. So my suggestion is that Gartner should conduct proper reference interviews and that they should do this in the local language for any countries where English is not routinely used for normal business (e.g., China and Japan). AsiaInfo has more than 50% market share in China but Gartner has never spoken to any of our Chinese customers.
Gartner is persistently reluctant on this point. My sense is that they feel interviewing customer references is just too much like hard (and unpaid) work. The ombudsman's excuse is that "interviews run the risk of introducing interview bias," but I don't see how this could be worse than answering written requests and filling in multiple choice forms.
4. Use rigorous metrics
The MQ analysis sounds scientific, but many of the metrics used are not rigorous. For example, AsiaInfo ranks low on the "Mindshare" metric, so I asked Gartner to explain how Mindshare is measured. The answer is that they use the online questionnaire to ask reference customers which vendors they considered in their procurement process. Given the length of procurement processes and implementation projects in our industry, these customers will be thinking back perhaps five years or more, which suggests that Gartner's Mindshare metric is long out of date: It couldn't possibly measure the influence of a disruptive vendor breaking into a new market.
Gartner responded that they also get information on vendor Mindshare from discussions with their clients, but it seems to me that this is not only anecdotal but also potentially self-fulfilling: the MQ will presumably influence vendor Mindshare with Gartner clients, after which the MQ analysts then ask these same clients for their views on vendor Mindshare!
I suggested that Gartner either removes unscientific metrics such as Mindshare, or else establishes rigorous methods for measuring these metrics, but the ombudsman simply replied that "the analysts adhered to proper process." That's my point -- the process doesn't require rigorous analysis.
Summing it all up
Let me make it clear that I'm not accusing Gartner of pay-to-play. I don't think you can simply write Gartner a check and pay to be in the Leader Quadrant. I'm suggesting the process is commercially flawed or biased rather than explicitly corrupt. By this I mean that if you become a significant Gartner client, your profile with the MQ analysts is increased, your profile with Gartner's operator client base is increased, and you are therefore in a better position to influence the perception of your market stature and performance during the MQ assessment process -- especially if you avail yourself of offers from the MQ team to purchase one of their marketing strategy sessions.
In the past two years, my dealings with Gartner and its ombudsman have not caused me to smile very often, but that changed just a week ago. During the most recent discussions on our MQ positioning and our "Mindshare," I was informed that AsiaInfo was marked down because Gartner does not receive many inquiries about our company. So I had to smile when I received a sales email from Gartner saying, and I quote, that "AsiaInfo has been mentioned during a number of end-user inquiries... we would like some time to paint a picture of how we could support you driving growth." Maybe the Light Reading article caused a surge in Gartner inquiries?
Anyway, I asked the sales executive to please tell the MQ team about the growth in AsiaInfo inquiries, but I'm not holding my breath for an improvement in our MQ position.
Thanks for the messages of support, and please feel free to join me in going public with your own MQ experiences.
— Dr Andy Tiller, VP, global marketing, AsiaInfo Inc.