Comments
seen_the_light
12/5/2012 | 2:44:09 AM
re: Nortel/Avici: Getting Together?

wanna bet we see a bunch of avici pos at verizon in the next few years? bam.
Tony Li
12/5/2012 | 2:44:08 AM
re: Nortel/Avici: Getting Together?

Stomper,

I thought that we were discussing fabrics built only from nodes (chassis or cards) that contain both revenue interfaces and fabric interconnect. Topologically, there is no difference; the architectural arguments hold regardless of scale.

As you note, in some topologies the speedup is required to scale with the topology. Yes, you can play games, but any fabric whose required speedup grows as a function of scale is bound to become a scalability constraint.
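A toy calculation (my own, with purely illustrative numbers) of why speedup that must grow with scale becomes the constraint, in Python:

# Toy model, not from any real product: compare a fabric whose required
# per-link speedup grows linearly with node count against one where it
# grows as N^(1/3) (a 3-D torus-like topology). The 1/8 and 0.5
# coefficients and the 50G per-link budget are invented for illustration.

LINE_RATE_GBPS = 10      # revenue-port rate L
LINK_BUDGET_GBPS = 50    # assumed practical per-link fabric capacity

for nodes in (16, 64, 256, 1024):
    for name, s in (("linear", nodes / 8),
                    ("sublinear", 0.5 * nodes ** (1 / 3))):
        ok = s * LINE_RATE_GBPS <= LINK_BUDGET_GBPS
        print(f"{nodes:5d} nodes, {name:9s} speedup {s:6.1f}x: "
              f"{'fits' if ok else 'exceeds link budget'}")

The linear fabric outruns the 50G budget by 64 nodes, while the sublinear one is still only around 5x line rate at 1024 nodes.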

Tony
Tony Li
12/5/2012 | 2:44:05 AM
re: Nortel/Avici: Getting Together?

Stomper,

You still miss my point. If you go to heterogeneous systems with independent switching nodes, you can construct your line card (or chassis) without such excessive speedup or expense.

Tony
stomper
12/5/2012 | 2:44:05 AM
re: Nortel/Avici: Getting Together?
> I thought that we were discussing fabrics built
> only from nodes (chassis or cards) that contain
> both revenue interfaces and fabric interconnect.
> Topologically, there is no difference; the
> architectural arguments hold regardless of scale.

That's what I was referring to: a direct network whose nodes contain both switch elements and processing (revenue-generating ports).

> As you note, in some topologies the speedup is
> required to scale with the topology. Yes, you can
> play games, but any fabric whose required speedup
> grows as a function of scale is bound to become a
> scalability constraint.

I'm sorry, I wasn't clear. Some architectures require a basic speedup just to maintain the intended rate, given switch characteristics (fragmentation loss, etc.). Further speedup is usually desirable because in a router multiple sources can gang up on a destination, and you may want to drain the data from the fabric faster than line rate and let QoS act on it. So let's call the minimum speedup prior to degradation of operation X, some small factor over line rate, maybe 2L.
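To make the fan-in point concrete, a small sketch (my own toy model, not Avici's design) in Python:

# Three sources each burst at line rate L toward one destination for a
# few ticks. The egress drains the fabric at speedup * L; speedup > 1
# clears the transient backlog sooner, moving the queueing to the
# egress port where QoS can act on it.

SOURCES, L, BURST_TICKS = 3, 1.0, 4

for speedup in (1.0, 2.0):
    backlog, tick = 0.0, 0
    while tick < BURST_TICKS or backlog > 0:
        arrivals = SOURCES * L if tick < BURST_TICKS else 0.0
        backlog = max(backlog + arrivals - speedup * L, 0.0)
        tick += 1
    print(f"speedup {speedup:.0f}x: fabric clear after {tick} ticks")

With speedup 1x the burst takes 12 ticks to clear; at 2x it clears in 6.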

Now you need additional speedup solely for scaling. Again, the nice thing about using a multi-dimensional interconnect is that the speedup requirement here grows much less than linearly with scale. Pluris, Hyperchip, and Avici (Chiaro?) all chose multi-dimensional topologies for this reason. You can find papers on Stanford's web site detailing how the fabric BW requirement scales for several popular topologies. For 3-4D topologies, the speedup factor necessary to scale to hundreds and maybe a few thousand nodes is around 4-5L. If L is 10G, and today you can get dozens of 6+G SERDES on silicon, you only need about 50G/6G, i.e. 8-9 wires, per connect.
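Checking that last bit of arithmetic (the figures come from the post above; the helper itself is just a sketch):

import math

LINE_RATE_GBPS = 10.0   # L
SPEEDUP = 5.0           # ~4-5x L for a 3-4D topology at ~1000 nodes
SERDES_GBPS = 6.0       # "dozens of 6+G SERDES on silicon"

def lanes_per_connect(line_rate, speedup, serdes):
    """SERDES lanes needed per fabric connection."""
    return math.ceil(line_rate * speedup / serdes)

print(lanes_per_connect(LINE_RATE_GBPS, SPEEDUP, SERDES_GBPS))  # -> 9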

-S
stomper
12/5/2012 | 2:43:58 AM
re: Nortel/Avici: Getting Together?
Tony,

I believe I understand your point. I don't think I have gotten mine across very well; I'm getting old and I tend to ramble. Let me try again more succinctly.

My point: with the line + switch card architecture, you can optimize switching for a single chassis and push the scalability problem off to an external switch (maybe). However, for redundancy, you need 2X switch cards.

This is a significant cost. Further, this architecture doesn't even scale well within the chassis: you have to buy the switch cards up front.

For a multi-dimensional fabric that can naturally scale beyond a chassis, the cost is an up-front speedup of about 2X beyond that of the line/switch architecture. A feature of this scenario that some have exploited is the multiple paths between source and destination nodes, which can be utilized for redundancy.

It may be debatable which is more costly, 2X switch cards or 2X BW. But I think the multi-dimensional fabric has enough other advantages that, all other things being equal, it would be preferable. Of course, all other things are never equal ...
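To put rough numbers on the tradeoff (every price below is invented, purely for illustration):

# Hypothetical comparison: redundancy via duplicated (2X) switch cards
# vs. redundancy via ~2X fabric bandwidth on every line card, which
# buys the extra source-destination paths of a multi-dim fabric.

LINE_CARDS = 16
SWITCH_CARDS = 4                  # for full bandwidth; hypothetical
SWITCH_CARD_COST = 20_000         # hypothetical
FABRIC_BW_COST_PER_CARD = 4_000   # hypothetical cost of 1x fabric BW

switch_redundancy = 2 * SWITCH_CARDS * SWITCH_CARD_COST
fabric_redundancy = LINE_CARDS * 2 * FABRIC_BW_COST_PER_CARD

print(f"2X switch cards: ${switch_redundancy:,}")   # $160,000
print(f"2X fabric BW:    ${fabric_redundancy:,}")   # $128,000

Which side wins depends entirely on the real component costs and on how the fabric cost grows as line cards are added, which is exactly why it stays debatable.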

-S
willrouteforfood
12/5/2012 | 2:43:57 AM
re: Nortel/Avici: Getting Together?
Marionette,
What about this?

http://www.juniper.net/techpub...

http://www.cisco.com/en/US/pro...

Do you mean Avici's implementation is better? Don't they use a mirroring technique (which risks mirroring the same software bugs across their processors)? I must admit I am not well versed in it. Cisco is not hitless yet, but it is pretty good. I have seen Juniper's perform in a hitless manner, although the conditions may have been right.

WRFF
gotman
12/5/2012 | 2:43:55 AM
re: Nortel/Avici: Getting Together?
WRFF

Cisco is hitless after 22.S for ISIS, OSPF, and BGP on the 12k. They use SSO & NSF. I don't think they are hitless for MPLS and MP-BGP yet; I think that is in trials.

gm
willrouteforfood
12/5/2012 | 2:43:54 AM
re: Nortel/Avici: Getting Together?
Thanks for the info, Gotman! Much appreciated.

WRFF
Tony Li
12/5/2012 | 2:43:50 AM
re: Nortel/Avici: Getting Together?

Stomper,

Point taken. We are making progress.

I'm not sure that I agree with you, however. I did scan the Stanford work that Nick has done, and it only reinforces my points. Are you thinking of something else?

I'd also like to point out that for density, the 2x switch cards are clearly superior, based on available product.

Tony

