Juniper Wins Monster Router Test

Juniper Networks Inc. (Nasdaq: JNPR) has defeated its number one rival, Cisco Systems Inc. (Nasdaq: CSCO), in the first multivendor test of Internet core routers.

The independent evaluation, which was commissioned by Light Reading and took six months to complete, proves that Juniper’s M160 platform is currently superior to Cisco's latest 12416 product in three key performance areas: IP (Internet protocol), MPLS (multiprotocol label switching), and OC192 (10 Gbit/s). The vendors’ products are evenly matched in the performance of their OC48 (2.5 Gbit/s) interfaces. (Click here to view the complete report).

The tests were performed for Light Reading by Network Test Inc., an independent benchmarking and network design consultancy. “In some areas Juniper’s M160 is in a class by itself,” says David Newman, president of Network Test.

His report on the test results concludes that:

“[The M160] holds more BGP (border gateway protocol) routes and more MPLS label-switched paths than any other box. It deals with network instability far better. And it exhibits much lower average latency -- the amount of delay a router introduces -- and latency variation.”

Despite losing to Juniper in three of the four overall areas, Cisco could also find good news in the test results. For example, the data demonstrates that its OC192 interfaces not only exist -- but can also process a torrent of data at line rate. Indeed, Cisco’s 12416 turned in the highest single data rate achieved in the entire test: more than 271 million packets per second.

With improvements, Cisco’s router could represent serious competition for Juniper. “Cisco has served notice that it’s no longer the easy target that allowed Juniper to gain 30 percent share in just a few years,” says Network Test’s Newman.

In contrast, the test results turned in by the other two vendors tested -- Charlotte’s Networks Inc. and Foundry Networks Inc. (Nasdaq: FDRY) -- were all cloud, no silver lining. Charlotte’s Networks’ Aranea-1 fumbled packets at every level of loading offered -- including one percent. Foundry’s Netiron pretty much gave up the ghost in the flapping and convergence test (which might explain why, since the tests, the company has announced that it will withdraw from the core router market).

Results such as these will do little to encourage the belief that other vendors can weaken Cisco and Juniper’s stranglehold on the market for Internet core routers in the foreseeable future. (Avici, which places a distant third to the two vendors in terms of market share, failed to show up for the test.)

In an interview last year, Scott Kriens, CEO of Juniper, made the following statement to Light Reading: “Service providers already have two credible sources for high-speed routers: Cisco and Juniper. The market has yet to demonstrate that it wants a third.” On the basis of the Light Reading test results, service providers couldn’t have a third source even if they wanted it.

The diagnostic equipment used in the test was manufactured by Spirent Communications. The equipment, worth $2.6 million, was used to evaluate routers from the four vendors worth a combined total of $29 million.

This is the first time that the networking industry has known for a fact which company had the better product. Until now, service providers and other customers have largely had to rely on vendor-sponsored tests, marketing materials, and hearsay when analyzing core router products.

Light Reading’s test represents a number of other significant firsts:

  • The first multivendor test of core routers
  • The first test of 10-Gbit/s OC192 router interfaces
  • The first time that Cisco agreed to let any of its gear be evaluated in an independent public test

All of the test results are being published on Light Reading's new Web site, Light Testing (www.lightreading.com/testing), which is being launched today. Light Testing will host the results of a string of tests being planned by Light Reading on leading-edge optical networking equipment and services.

-- Stephen Saunders, U.S. editor, Light Reading http://www.lightreading.com
    dnewman 12/4/2012 | 8:38:13 PM
    re: Juniper Wins Monster Router Test Thanks, Mark. If you're not bound by NDA, please feel free to post what the vendors tell you. As you're almost surely aware, there's been some, er, interesting spinning that's taken place, and I'm curious to hear what vendors are telling prospective customers about this project.

    David Newman
    Network Test
    dnewman 12/4/2012 | 8:38:29 PM
    re: Juniper Wins Monster Router Test Hi Mu-law,

    Thanks for your complimentary words about the test.

    I'd like to respond to your comments:

    --the statement about packet loss being cumulative was flat-out wrong. In fact, even if the traffic traversed 50 routers in succession, loss would never be more than 1 percent of offered load.

    Some other readers have pointed this out, and LR has kindly deleted the erroneous statement.

    --"Transit latency for a size mix isn't meaningful unless the weighted average for the mix is equal to the latency for a stream of packets having that mean size."

    This is a really interesting concept, and in the future I'd be glad to conduct measurements of the mean. By the way, the Smartbits does put a timestamp in each packet, so it's easy enough to make a measurement like this.

    However, I'm not sure I understand the need to determine whether there's a delta between mean packet size latency and "nonlinearity," as you call it. Why the mean? Why not the median? And at the end of the day, why isn't a latency measurement for a *mix* of packet sizes a meaningful predictor of device behavior in production nets?

    --"At the interface speeds we're dealing with here, it is profoundly unlikely that successive reordering events would affect the same flow"

    Depends on the flow distribution, doesn't it? Cisco is fond of "proving" that Juniper's OC192 card is no good by pumping a single flow of TCP traffic at some insanely high rate. Such a test will result in misordering; but whether that result is relevant to any production network is another question altogether.

    Then again, as I stated in the article, it would be much simpler if this card just didn't reorder to begin with.

    --"Since such a small proportion of packets is affected, it should be very unlikely that these affected packets will be affected again by a subsequent router"

    Yes, fully agree. Saying the OC192 card has an equal likelihood of three possible actions was an unfortunate choice of words on my part. It has a choice of three possible actions, but their probabilities are almost certainly very, very different.

    Another post got it just about right in saying this was like crashing your car once, and then crashing it a second time. It is possible the second collision will undo the damage of the first -- just not very likely.

    Thanks again for your comments.

    David Newman
    Network Test

    mcollett 12/4/2012 | 8:38:29 PM
    re: Juniper Wins Monster Router Test David,

    Thanks for the prompt reply... I should have followed all the links. *I'll* be asking one or two of those vendors that question directly, since I've had some of their presentations (notably the one that "uses Juniper to front end their router").


    mu-law 12/4/2012 | 8:38:30 PM
    re: Juniper Wins Monster Router Test This series was surprisingly meaningful and reasonable, among the best I have ever seen from the press or commercial testers. The attention given to the importance of non-zero loss is commendable; it is a critical and often overlooked detail. It would be nice to see this same level of care brought to bear on another test of, say, queuing treatment in hardware routers.

    That said, I'd like to offer some commentary on a few issues:

    "Worse, it's cumulative. A network comprising 50 routers from Vendor X, each dropping 1 percent of traffic, will experience loss of at least 50 percent."

    This is only true for those flows that would traverse the string of all 50 in succession. Real networks aren't designed this way (and neither is your test) so this commentary is marginally misleading and doesn't seem to be relevant. If your test were built from a dense mesh of these same 50 nodes (rather than a "string") loss would be 50x, but volume would be 25x, so the net would be 2%. Regardless, whether the number is 1, 2 or 50, it is nonzero, which is unacceptable, except for handling of best-effort traffic.
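The compounding arithmetic in the exchange above can be checked with a back-of-envelope sketch. This assumes each hop drops packets independently of the others (an idealization, not something the Light Reading test measured); the 1 percent rate and 50-hop chain are the figures from the discussion.

```python
# Back-of-envelope check of how per-hop packet loss compounds across a
# chain of routers. Assumes independent drops at each hop; the 1% rate
# and 50-hop count come from the discussion, not from the test itself.

def chained_loss(per_hop_loss: float, hops: int) -> float:
    """Fraction of offered load lost after traversing `hops` routers,
    each independently dropping `per_hop_loss` of its input."""
    survival = (1.0 - per_hop_loss) ** hops
    return 1.0 - survival

loss_50 = chained_loss(0.01, 50)
print(f"50 hops at 1% per hop: {loss_50:.1%} total loss")
# Multiplicative compounding stays below the additive "at least 50
# percent" figure, but is far more than a single hop's 1%.
```

Whether any real flow ever traverses 50 such hops in succession is, as the post notes, a separate question about topology.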

    "We expected Imix to produce higher latency readings, since it takes more time to forward a long packet than a short one."

    Transit latency for a size mix isn't meaningful unless the weighted average for the mix is equal to the latency for a stream of packets having that mean size. Otherwise, you are just identifying nonlinearities rather than measuring something. Identifying these nonlinearities is important, however, because they are key to understanding whether the results of single-size latency tests can be applied to other non-uniform aggregates.

    In the future, I would suggest including a test that produces a high-resolution histogram of latency/size figures derived from a single mix. I suspect that unless Netcom / et al can stuff a timestamp into a payload, this may be a difficult test to effect at high speeds.
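The nonlinearity check described above can be sketched in a few lines. All of the numbers here are hypothetical placeholders (a simplified Imix-style distribution and made-up per-size latencies), not figures from the test; the point is the comparison being proposed.

```python
# Sketch of the "nonlinearity" check: compare the weighted-average
# latency of a packet-size mix against the latency measured for a
# uniform stream at the mix's mean size. All numbers are hypothetical.

# Simplified Imix-style distribution: (packet size in bytes, share of traffic)
mix = [(64, 0.58), (570, 0.33), (1518, 0.09)]

# Hypothetical measured latency (microseconds) for uniform streams of each size.
latency_us = {64: 18.0, 570: 22.0, 1518: 30.0}

mean_size = sum(size * share for size, share in mix)
weighted_latency = sum(latency_us[size] * share for size, share in mix)

print(f"mean packet size: {mean_size:.0f} bytes")
print(f"weighted-average latency of the mix: {weighted_latency:.2f} us")
# If a uniform stream at ~mean_size bytes measures a latency different
# from weighted_latency, the device's latency is nonlinear in packet
# size, and single-size results can't be extrapolated to mixes.
```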

    "It's true that two reordered packets may have an impact on any one
    connection only if both packets belong to that connection. But it's equally
    possible that reordered packets belonging to two different connections
    may have an impact on both."

    At the interface speeds we're dealing with here, it is profoundly unlikely that successive reordering events would affect the same flow, given that these interfaces are far from the edge and the aggregates are well homogenized.

    TCP, which is the transport used for the best-effort traffic that predominates in commodity Internet service today (read: web), is designed to address reordering; these events are no more significant than loss (and are possibly less), often requiring retransmission, and sometimes not.

    These random ordering events can be absolutely fatal to real-time interactive media applications with rigid delay bounds (read: voice); unless the envelope in which these events occur is well defined and predictable, systems that exhibit this behavior are useless for these highly desirable / marketable / profitable / interesting applications.

    "Multiple interfaces will not have a cumulative effect on packet reordering. False. If one Juniper OC192 card scrambles some packets, a second OC192 interface has an equal likelihood of correcting the reordering; scrambling the packets further; or making no change. Thus, the impact of multiple OC192s is neither additive nor subtractive."

    Since such a small proportion of packets is affected, it should be very unlikely that these affected packets will be affected again by a subsequent router, unless the content of a particular packet affects its susceptibility to this effect. In fact, it is most likely that the LARGE balance of unaffected packets would be affected in each subsequent router, in lieu of the already affected packets. In short, for each successive step: making no change = most likely, scrambling = every so often, correcting = most unlikely.
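The probability argument above can be made concrete with a small sketch. The per-hop reordering fraction here is an invented placeholder, and independence between hops is an assumption the thread itself debates; the sketch only illustrates why revisiting an already-affected packet is the rarest outcome.

```python
# Rough independence sketch of the reordering argument: if each hop
# reorders a small fraction p of packets, and events at successive hops
# are independent (an assumption, not a measured result), the chance a
# given packet is hit at two hops in a row is p**2.

p_reorder = 0.001  # hypothetical per-hop reordering fraction (0.1%)

p_twice = p_reorder ** 2                # same packet hit at both hops
p_fresh = (1 - p_reorder) * p_reorder   # untouched packet first hit at 2nd hop

print(f"hit at both hops: {p_twice:.2e}")
print(f"first hit only at 2nd hop: {p_fresh:.2e}")
# Under independence, a subsequent router is roughly 1/p times more
# likely to scramble a previously unaffected packet than to revisit an
# affected one -- the asymmetry the post describes.
```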
    dnewman 12/4/2012 | 8:38:31 PM
    re: Juniper Wins Monster Router Test Hi Mark,

    Thanks for your inquiry. There's a link on the main testing page called "no shows" that lists all the invitees. Here's the URL:


    David Newman
    Network Test
    mcollett 12/4/2012 | 8:38:52 PM
    re: Juniper Wins Monster Router Test Early in the router test article, there was a mention that 11 companies were asked to participate, but only the 4 that LR tested took up the challenge... so what were the other companies, and did LR have any particular product of that company they were expecting to test?

    Thanks for your time.


    dnewman 12/4/2012 | 8:41:28 PM
    re: Juniper Wins Monster Router Test Ah, but I asked for your source of this misinformation.

    Who was it? Is there someone at Cisco you can point me to, so I can ask him/her to cut it out?


    David Newman
    Network Test
    dnewman 12/4/2012 | 8:41:28 PM
    re: Juniper Wins Monster Router Test Is this a troll?

    Cisco's 12416 didn't suck, nor did Juniper's M160 squash it.

    While I've been encouraging folks to draw their own conclusions based on test results, I don't see how any possible reading of the results and article would lead to that conclusion.

    David Newman
    Network Test
    Telecom_Guy 12/4/2012 | 8:41:40 PM
    re: Juniper Wins Monster Router Test Juniper squashed Cisco in the core routers. No need to defend or try to justify why Cisco lost. They always try to do that. What they need to do now is work on improvement, not on making excuses for why they sucked.
    AllenC 12/4/2012 | 8:41:42 PM
    re: Juniper Wins Monster Router Test Choosing test groupings as you have... gives a weight to the group... and implies that you feel it has equal importance to other groups.