Comments
Page 6 / 10
Sisyphus
12/5/2012 | 2:44:36 AM
re: Nortel/Avici: Getting Together?
> ... It may in fact require less silicon area
> and high speed signals to implement the
> distributed fabric, as the central fabric is a
> simpler "brute" force approach.

Depends, depends, depends. The typical approach to a distributed fabric is a bus. Unfortunately, a bus has architectural issues that are unacceptable in high-performance designs that are supposed to scale - a bus *really* demands a brute-force approach: you have to throw a huge bandwidth reserve at the problem.

Other distributed fabric approaches can be boiled down to multi-stage (which is what you refer to, I am sure) and the full-mesh approach, the latter being another brute force approach that doesn't scale for the high end.

As to multi-stage, in practical applications I have seldom seen their architectural elegance really come through. The basic issue is the line-card architecture: since the bandwidth going out of the card to the back-/midplane is usually fixed (and typically not brutally over-engineered), it all winds up behaving pretty much like a centralized crossbar no matter what actual switch approach you use "in the middle".
unet
12/5/2012 | 2:44:32 AM
re: Nortel/Avici: Getting Together?

In centralized switching, the ports/capacity used to connect one chassis with another can be considered revenue-generating, because the same ports/capacity can be used to connect an I/O card.

In a distributed switch fabric, the ports/capacity used to connect one chassis with another can only be used for that purpose and not for anything else.

A better measure may be the total internal capacity needed to provide any-to-any non-blocking switching vs the external capacity.

For instance, a 512-port OC-48 switch constructed on a 3-stage Clos scheme using 16x32 and 32x32 switching elements will have total internal ports = 16x32 + 32x32 + 16x32 = 2048. That means an internal capacity of 5G vs 1G of traffic present at the inputs!

In contrast, a 512-port switch realized using a 4-dimensional toroidal switching fabric will require 128 4x8 switching elements, for a total internal port count of 128x8 = 1024! The internal capacity is less (2G vs 5G) compared to the first case.
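The port-count arithmetic can be sanity-checked with a quick sketch (the element counts are those given above; nothing vendor-specific is assumed):

```python
# Back-of-envelope comparison of internal port counts for two fabrics
# serving the same 512 external ports, using the figures quoted above.

def clos_internal_ports():
    # 3-stage Clos built from 16x32 and 32x32 elements:
    # internal ports = 16x32 + 32x32 + 16x32
    return 16 * 32 + 32 * 32 + 16 * 32

def torus_internal_ports():
    # 4-dimensional toroidal fabric built from 128 4x8 elements:
    # internal ports = 128 x 8
    return 128 * 8

print(clos_internal_ports())   # 2048 internal ports
print(torus_internal_ports())  # 1024 internal ports - half the Clos count
```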

However, notwithstanding such differences, the benefits of the simplicity of centralized switching should not be lost. If a 1/2 Tbps switch can be used over the next 5-10 year horizon, carriers will recover their investment, as opposed to installing a 5 Tbps switch. Though both may serve the 5-10 year period, beyond that only the 5 Tbps switch will remain useful. However, it is most likely the carrier will need to explore replacing either of them, because the components etc. used in either might become obsolete, or the paradigm will change - look at Nortel's VoIP announcement today. We worked at one time on a switch meant to have 40,000 POTS ports - one may not need that many on a class-5 any more, as class-5s will be gone, with DLCs or something else providing BORSCHT and packetizing voice for switching elsewhere.
Tony Li
12/5/2012 | 2:44:31 AM
re: Nortel/Avici: Getting Together?
So at least the multi-chassis guys (Avici, Gibson, HFR) have an answer to that space exhaustion problem.
----------------------

One of the assumptions implied in this turns out to be simply false.

Tony
tekweeny
12/5/2012 | 2:44:30 AM
re: Nortel/Avici: Getting Together?
BayRS Routers are still selling, BayRS code continues to be upgraded and both are still running some of the largest enterprise networks in the world.

I almost fell out of my chair after reading this one.

Going from over 40% market share in the mid-90s to less than 1% now, and you are spinning it like it's a success. They need to make you a senior manager! Or are you already one, at Nortel?

P.S. Which of the world's largest enterprises still use Bay now, other than as antiquated legacy CPE?
turing
12/5/2012 | 2:44:29 AM
re: Nortel/Avici: Getting Together?
The deal is done. A 3-year resell contract.

http://www.nortelnetworks.com/...
turing
12/5/2012 | 2:44:29 AM
re: Nortel/Avici: Getting Together?
>One of the assumptions implied in this turns out
>to be simply false.
----------

And that would be....?
turing
12/5/2012 | 2:44:28 AM
re: Nortel/Avici: Getting Together?
You cannot reasonably scale a topology where one chassis connects to all other chassis, so you are forced into having a limited fanout and thus some constrained topology. Those topological constraints make it somewhat more difficult to provide full any-to-any bandwidth.
--------------------

I don't pretend to understand Avici's fabric, but I was told they do indeed speed up the fabric links multiple times relative to the port speed in order to provide any-to-any non-blocking throughput. But their pricing still seems to be on par with C/J routers.

Having said that, I'm not sure it matters. You cite "topological constraints" and questionable worst-case non-blocking behavior, but it's still far superior to having to connect multiple Cisco routers with OC-192c - that's seriously blocking and much more expensive. (The Avici expansion cables cost less than one OC-3 port, and no other hardware was needed.)

Or are you just comparing theirs to a Gibson/Matrix approach?
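To see why a one-chassis-to-all-chassis fanout doesn't scale, a minimal combinatorial sketch (hypothetical chassis counts; this is pure pair-counting, not a model of any shipping fabric):

```python
# Full-mesh interconnect: every pair of chassis needs a direct link,
# so the link count grows quadratically with the number of chassis.

def full_mesh_links(n):
    # Number of unordered pairs among n chassis.
    return n * (n - 1) // 2

for n in (4, 8, 16, 32):
    print(n, full_mesh_links(n))
# 4 chassis  ->   6 links
# 8 chassis  ->  28 links
# 16 chassis -> 120 links
# 32 chassis -> 496 links
```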
Tony Li
12/5/2012 | 2:44:20 AM
re: Nortel/Avici: Getting Together?

I was just looking at switching topologies. Interconnect via revenue ports is unlikely to be competitive.

Tony

p.s. What are their margins?
BobbyMax
12/5/2012 | 2:44:14 AM
re: Nortel/Avici: Getting Together?
Nortel has previously gotten together with Motorola and a few other companies. Nortel would have been worse off if it had collaborated with Motorola.

Avici's router products have not been widely accepted. During its early days, the company received some contracts from AT&T. AT&T had to cancel the contract because of poor performance.

It looks like Nortel has forgotten its acquisition of Xros for about 4.5 billion dollars. There is a lot of similarity between the Xros and Avici founders.

Terrible mistake for Nortel. Do not wash away your previous experiences.
stomper
12/5/2012 | 2:44:10 AM
re: Nortel/Avici: Getting Together?
> You cannot reasonably scale a topology where
> one chassis connects to all other chassis, so
> you are forced into having a limited fanout
> and thus some constrained topology. Those
> topological constraints make it somewhat more
> difficult to provide full any-to-any bandwidth.

Tony,

I believe you are still thinking in terms of the Procket/Cisco/(and non-Clos) Juniper architecture.

A mesh or torus architecture does not have to be limited to connecting "chassis", but can be the basic fabric of which connecting additional chassis is just an extension.

I agree with your statement that the worst case for a torus is 180 degrees, and that the BW must be sized to handle transit traffic. That's why RPR is a tough technology: the BW requirements grow linearly with the ring size. However, in a higher-dimensional fabric, the BW requirement only grows at the dimensional root of the size - e.g. the cube root in 3D.
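The dimensional-root growth can be illustrated numerically (a rough sketch assuming a ring versus a cubic 3-D torus with the same node count; worst-case hop count stands in for the transit-BW demand):

```python
# Worst-case shortest-path hop count: grows linearly with size on a
# ring, but only with the cube root of size on a 3-D torus (half the
# diameter in each of the three dimensions, summed).

def ring_worst_hops(n):
    # Diameter of an n-node ring.
    return n // 2

def torus3d_worst_hops(n):
    # Assume a cubic torus: n = side^3 nodes, wrap-around links.
    side = round(n ** (1 / 3))
    return 3 * (side // 2)

for n in (64, 512, 4096):
    print(n, ring_worst_hops(n), torus3d_worst_hops(n))
# 64 nodes:   ring 32 hops,   3-D torus  6 hops
# 512 nodes:  ring 256 hops,  3-D torus 12 hops
# 4096 nodes: ring 2048 hops, 3-D torus 24 hops
```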

Switch theory requires a speedup for the fabric, based not only on topology but also on fragmentation and other policies. However, this speedup requirement is reduced in higher-dimension topologies to the order of that of simpler switches.

-S