Comments
truelight
12/5/2012 | 2:42:59 AM
re: Nortel/Avici: Getting Together?
The management at Nortel have no vision or motivation to build newer products.
null0
12/5/2012 | 2:42:48 AM
re: Nortel/Avici: Getting Together?
Unet

Nick's publications can be found at http://tiny-tera.stanford.edu/...

Cheers

Null0
stomper
12/5/2012 | 2:42:44 AM
re: Nortel/Avici: Getting Together?
It seems there has been a lot of activity since
I last checked in. Too much good football on ...

Tony Li wrote:
> I'm not sure that I agree with you however. I
> did scan the Stanford stuff that Nick has done
> and it only reinforces my points. Are you
> thinking of something else?

I'm not sure which of my assertions you are taking
issue with? I'll assume it is my approximation
of 2X for the speedup requirement of multi-dim
fabric scaling to hundreds or a few thousand
nodes.

As I'm sure you know, this number is a function
of a number of variables. But I'll stick by 2X
as a reasonable design center based on the math,
current technology, and the complexity of
achieving better.
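
One way to see where a number like 2X comes from - a simplified sketch assuming uniform random traffic and minimal routing on a k-ary n-cube torus, not the full analysis:

# Average channel load per node in a k-ary n-cube torus under
# uniform random traffic with minimal routing (simplified model).
def avg_channel_load_gbps(k, n, injection_gbps):
    avg_hops = n * k / 4.0     # average distance: about k/4 hops per dimension
    channels = 2 * n           # unidirectional fabric channels per node
    # Flow conservation: each injected bit occupies avg_hops channels,
    # spread across the 2n channels each node contributes.
    return injection_gbps * avg_hops / channels

# 8-ary 3-cube (512 nodes), 10G line rate:
print(avg_channel_load_gbps(k=8, n=3, injection_gbps=10))   # 10.0

At that size the average channel already carries about the line rate, so sizing channels around 2X the line rate leaves headroom for load imbalance and non-uniform traffic patterns.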

For reference, I think Professor Dally is the
canonical source in this field. He teaches
a class on the subject at Stanford and his lecture
notes are online at:
http://cva.stanford.edu/ee482b...

> I'd also like to point out that for density, the
> 2x switch cards are clearly superior, based on
> available product.

Yes, an impressive implementation effort indeed.
Kudos to you and your team for pulling it off.
Am I correct in understanding that the 8812
supports 96 10G ports in a rack with redundant
switch cards?

byteme3 wrote:
> Actually, it's debatable if the cost difference
> is more than trivial. Cost of big ASICs (ie,
> fabric chips) are around $200-300 each (after
> initial NRE), right? The customer price of core
> router linecards is like what, $150,000-250,000
> list? A few more or less ASICs in the system is
> peanuts.

Think of it this way: the cost difference to
the customer is not trivial. The switch card
customer cost is at least on the order of a
line card.

And I don't believe it is trivial to the manufacturer
either. A switch card usually requires a number
of ASICs, expensive high-speed SRAM, perhaps
DRAM. And worst of all, consider the development
cost. A leading-edge ASIC development may run
to tens of millions of dollars.

unet wrote:
> Does one really need internal speed up in
> multi-dim fabrics. I think it will help but how?

Yes, in direct multi-hop fabrics internal overhead
is necessary to account for transit traffic.
The amount of overhead is a function (mostly) of
the number of dimensions, maximum pin allowance per node, pin signaling rate, and how big a fabric
you intend to build.

> "you only need about 50G/6G wires per connect."
> can you explain this please?

Sorry. For a worst case speedup of 5X over a
10G line, which is a 50G BW requirement, I was
taking a guess at the number of wires that would
be required. Today folks have electrical
transceivers designed for integration on an ASIC
that operate over backplanes at 6 Gbps (and
beyond). For a 3D mesh or torus, you need 6 links
(or connections) per node. So you would need
about (50/6) x 6 = 50 full-duplex signals. These
signals are differential. My feel is that on the
order of 200 wires would be necessary with today's
technology.

For a 40G line, this would be about 800 wires.
This is probably a lot for a single device,
but could be split up among multiple devices on
a board.

And I feel a more reasonable speedup is probably
around 2X, which would only require about 320
wires.
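
For anyone who wants to check the arithmetic, a minimal sketch that just reproduces the counts above (assuming each link is sized for the full line-rate-times-speedup bandwidth, 6 Gbps signals, and 4 wires per full-duplex differential signal):

# Back-of-the-envelope wire count per node for a direct fabric.
def fabric_wires(line_gbps, speedup, links_per_node=6, serdes_gbps=6.0):
    per_link_gbps = line_gbps * speedup                       # each link sized for full speedup BW
    signals = per_link_gbps / serdes_gbps * links_per_node    # full-duplex signals
    return int(round(signals * 4))                            # 2 directions x 2 wires (differential)

print(fabric_wires(10, 5))   # ~200 wires: 10G line, 5X speedup
print(fabric_wires(40, 5))   # ~800 wires: 40G line, 5X speedup
print(fabric_wires(40, 2))   # ~320 wires: 40G line, 2X speedup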

> Also, how do we compare centralized vs
> distributed switch fabric based on just internal
> speed up?

With lots of math based upon particular
implementations I would think.

Regards all ...

-S
Tony Li
12/5/2012 | 2:42:40 AM
re: Nortel/Avici: Getting Together?

Stomper,

I didn't find evidence to back up your 2x number,
and the link that you pointed to didn't seem to
have a lot of working content.

Yes, an 8812 gives you 48 OC-192's (or 10GigE) in a half-rack. Thanks. The hardware guys did a great job.

I should point out that if you don't have your switching and buffering on a central switch card, then you end up pushing most of it out to the line cards. In fact, if you can't guarantee that you can inject all traffic into the fabric, you will need both input and output buffering. Thus, the cost argument only goes to the redundant card.

For a torus, don't you need 4 connections?

Tony
stomper
12/5/2012 | 2:42:36 AM
re: Nortel/Avici: Getting Together?
Hi Tony,

Let me respond out of order:

> For a torus, don't you need 4 connections?

A torus is just a mesh with end-around
connections. It doesn't imply any particular
radix. For example, a 2D mesh/torus will have
4 connections. A 3D, 6 connections.
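
In general a d-dimensional mesh or torus gives each node one neighbor in each direction of each dimension, i.e. 2 x d links; a trivial sketch:

# Neighbor links per node in a d-dimensional mesh/torus.
def links_per_node(dimensions):
    return 2 * dimensions

print(links_per_node(2))   # 4 (2D)
print(links_per_node(3))   # 6 (3D)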

> I didn't find evidence to back up your 2x
> number, and the link that you pointed to didn't
> seem to have a lot of working content.

Apologies. Since I last checked, the links seem
to have become garbled (grad student to blame?). You can still reach the content by editing
the links, e.g.:
http://cva.stanford.edu/ee482b...
should be:
http://cva.stanford.edu/ee482b...

My 2X number is based on a number of assumptions
about current art. I think it is a reasonable
number for 3D or higher dimension fabrics that
use load balancing techniques to spread traffic
over multiple paths, and that don't need to scale
beyond 1-2K or so nodes. Remember that this is
my overhead guess for scaling alone; the actual
value may be a greater factor over the line.

I think if you look into Prof. Dally's and
derivative published work you will agree.
I don't think this is the correct forum to push
into greater detail.

> Yes, an 8812 gives you 48 OC-192's (or 10GigE)
> in a half-rack. Thanks. The hardware guys did a
> great job.

Another question: does your switch support any-
to-any for 96 ports? E.g., can you build a logical
96-port router out of two half-rack 8812s?

Thanks,
S
unet
12/5/2012 | 2:42:35 AM
re: Nortel/Avici: Getting Together?


Tony,
you are right in raising the torus issue. Look at my post:
http://www.lightreading.com/bo...
(each 4-D torus switching node should have 4x12 switching capacity instead of the 4x8 mentioned).

Separately, another poster (turning) raised an interesting point which is very important to my mind: how do we avoid fiber crowding from 100-200 OC-3s or OC-12s or OC-48s in a small physical area of a single chassis? This is where a multi-chassis solution would be useful. Why would there be hundreds of OC-Ns in a CO? That depends on how carriers are going to grow their networks, how FTTx plays out and how triple-play picks up.
http://www.lightreading.com/bo....
One solution to avoid fiber crowding is to deploy multiple switches/routers in each CO even if the capacity of each switch/router is not fully used.

I am an advocate of simplicity, and centralized fabrics do meet that criterion. Distributed fabrics have the advantage of scalability and superior, elegant fault tolerance. Given this,

I think carriers can take a 2-step approach:

1. First, change their network architecture by offering new services over a NG network. This can be done by using multiple centralized switches/routers.

2. About 5-7 years after step 1, replace the multiple switches/routers in each CO with a scalable box that uses a distributed fabric.



Stomper,
when I asked 'how do you compare multi-dim fabrics vis-a-vis centralized' I meant: what metrics do you have in mind?

If you read my post listed above, I allude to using wires/ports or internal capacity as a measure of cost. Your account of 200/800 wires for 10G/40G per line card is similar to that. As Tony pointed out, you have not accounted for backplane BW for each dimension. Plus I am not sure one really needs 5x speedup if it is just to account for overhead and some transit traffic. Is there any proof like that of Clos (m >= 2n-1, where m is the number of middle-stage switches and n is the number of inputs per ingress switch), so one can know the minimum speedup required in a given multi-dim topology? A TDM switch based on a multi-dim design didn't use speedup and it worked fine. Of course it was helped by the fact that about 10% of the input port BW need not be switched, so internally using 2.5G was sufficient.
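
For reference, the Clos conditions I have in mind (a minimal sketch; n and m are the usual textbook parameters, not anything from a specific product):

# Classic 3-stage Clos results: n = inputs per ingress switch,
# m = number of middle-stage switches.
def clos_strictly_nonblocking(n, m):
    return m >= 2 * n - 1      # Clos, 1953

def clos_rearrangeably_nonblocking(n, m):
    return m >= n              # Slepian-Duguid

print(clos_strictly_nonblocking(n=8, m=15))        # True
print(clos_rearrangeably_nonblocking(n=8, m=8))    # True

The open question is whether an analogous closed-form bound exists for the minimum speedup in a multi-dimensional direct fabric.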

thanks.



Mezo
12/5/2012 | 2:42:19 AM
re: Nortel/Avici: Getting Together?
Perhaps you and Stomper should build yourselves a funky architecture non-stop router...that works...see you in (4) years...maybe the market will be ready then...

You boyz are talking about making bigger routers when today's market is just getting ready for a 480Gb router...which will support years of growth...the need for distributed multichassis designs is a long way off...

So in the mean time...I've heard you need to secure $300M and get started...you already have a name:

STOMPUNET...partners to the end :]
Tony Li
12/5/2012 | 2:42:15 AM
re: Nortel/Avici: Getting Together?
Mezo,

The history of this market is somewhat interesting and relevant. At any point in time, the industry is capable of producing a router at the (reasonable) limits of technology. That improves over time. However, when the next generation of technology is released, the customer is typically presented with a forklift upgrade, where older systems must either be scrapped or reassigned to other tasks where they are not operating at their optimal design points.

Thus, the goal that many of us aspire to is to create a system that scales in a 'nice' fashion so that forklift upgrades are avoided. Thus, yes, we are discussing architectures that are not necessary immediately. Perhaps, depending on how large we scale, not for years or even decades. Consider it a long term investment in technology.

Regards,
Tony
Tony Li
12/5/2012 | 2:42:07 AM
re: Nortel/Avici: Getting Together?

Unet,

Let me see if I can add to my reasoning for you. The primary reason that old systems are obsoleted is because the required bandwidth capacity of the system has increased. With traditional development lines, it's necessary to use revenue interfaces for interconnect. Quickly, the cost of the interconnect makes the old chassis inefficient.

Yes, there will continue to be new technology brought to bear on the problem, and networks will require new capabilities above and beyond what we can imagine today. This will be addressed by new technology in the individual nodes (be they chassis or line cards). The common denominator will continue to be the bandwidth of the switching fabric used to interconnect nodes.

The pragmatic consideration that carriers are deliberating in this post-bubble-burst world is how to ensure their own profitability. CapEx budgets can no longer be padded to support assets that become inefficient long before they are depreciated. Thus, a fundamental requirement of this generation is the ability to scale, not only with current technology, but in such a way that future technology can also interconnect and interoperate with existing gear.

Regards,
Tony
unet
12/5/2012 | 2:42:07 AM
re: Nortel/Avici: Getting Together?
Tony:

The second part of your reasoning is false and contradicts what you say in the first part. How can a router/switch designed by stretching the technology of its time continue to be relevant 7-9 years later? At some point a forklift upgrade will be required. 7-9 years would be the lifetime for telecom equipment. The 5ESS had a longer life span of about 15-20 years for various reasons. We are seeing the first replacement of those switches. The same thing will happen to IP/switching equipment to be deployed in the next 2-3 years. All said and done, the technology available today may appear inferior when we look back 7-9 years from now. Plus new applications such as SANs and utility computing will require new solutions at that time. So one can't design equipment that is forklift-proof.
That is one pragmatic consideration carriers should take into account - don't look for the ultimate scalable, ultimate ideal box that is whatever - hitless etc. - just deploy practical, manageable, workable and cost-effective solutions and replace them once the investment is recovered and the technology is obsolete.