re: Cisco, Sycamore Circling Lucent's ATM
The problem is that the "line-rate throughput versus packet size" curve resembles a saw-tooth pattern. Throughput does not reach 100% until packet size = 70B, stays at 100% for a while, falls to 85% at 88B-90B, and climbs in a linear fashion back to 100% at ~108B or so.
What is mysterious is that only one OC-192 (yes, ONE OC-192) card is involved in the test!!! The 12416 is supposed to take as many as 15 OC-192s.
re: Cisco, Sycamore Circling Lucent's ATM
Lucifer says, "Lucent took a great company, Ascend, and ran it into the ground."
another ascend/cascade glory days reveller. lucent didn't hack anything; cascade's gx, 8 years later, is still the best-selling atm product around. atm is a core network technology. the core network is overbuilt. anyone ever think of that?
lucent did kill oz, since it thought mpls was a better direction. now it's killed tmx since mpls is dead. when you look at it objectively, who wouldn't have made the same decisions given the market conditions of the time?
re: Cisco, Sycamore Circling Lucent's ATM
futureisbright, so LR hit on a good idea to spur a lot of empty discussion. if the question is wrong, all the answers by definition are meaningless.
pat russo said atm was a core business. they cancelled tmx880, but decided to milk gx for all it's worth. for every one person that says atm is the future, you will find 10 on this board that say mpls is the future. since lucent can't succeed in mpls, it doesn't want to invest too much in atm; the answer is to keep gx and dump tmx.
lucent isn't selling its atm business. start with that premise and maybe we can have a realistic discussion about lucent's data future.
re: Cisco, Sycamore Circling Lucent's ATM
i'd be interested in this board's view on MPLS.
Seems overly complex for efficient implementation. Scott Bradner thinks it's a Bellhead attempt at job security (my apologies to Scott for the nuance in my interpretation).
re: Cisco, Sycamore Circling Lucent's ATM
> The problem is that the "line-rate throughput versus packet size" curve resembles a saw-tooth pattern. Throughput does not reach 100% until packet size = 70B and it stays at 100% for a while and falls to 85% at (88B-90B) and climbs in a linear fashion to 100% at ~108B or so.
This isn't very surprising. Depending on how packets are quantized into memory buffers and how those buffers are linked together to form packets and queues of packets, you will have discontinuities in line-rate performance for small packet sizes.
It's interesting to test every small packet size at line rate, but a more realistic test would use a traffic distribution which mirrors real packet sizes found in the Internet.
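As a rough sketch of what such a test could look like, you can weight per-size throughput results by an assumed traffic mix. The 7:4:1 mix of 40/576/1500-byte packets below is the commonly cited "simple IMIX", and the per-size throughput numbers are made-up placeholders, not measurements from this test.

```python
# Sketch: average per-packet-size throughput results over an assumed traffic mix.
# The mix and the per-size numbers are illustrative placeholders only.

def weighted_throughput(per_size_throughput, mix):
    """Packet-weighted average of line-rate fractions over a size distribution.

    per_size_throughput: dict of packet size (bytes) -> fraction of line rate achieved
    mix: dict of packet size (bytes) -> relative packet count in the traffic mix
    """
    total_weight = sum(mix.values())
    return sum(per_size_throughput[size] * weight
               for size, weight in mix.items()) / total_weight

# Hypothetical results from a per-size line-rate sweep (placeholders, not data).
results = {40: 1.00, 576: 0.97, 1500: 1.00}
# "Simple IMIX": 7 parts 40B, 4 parts 576B, 1 part 1500B packets.
simple_imix = {40: 7, 576: 4, 1500: 1}

print(f"mix-weighted throughput: {weighted_throughput(results, simple_imix):.1%}")
```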
re: Cisco, Sycamore Circling Lucent's ATM
The problem with "real packet sizes" is that we have no idea what packet sizes are going to look like in the future. Optimizing an architecture for a "real packet size Internet mix" is very dangerous and is not consistent with the multiservice framework.
Everyone agrees that the "real Internet packet mix" does not pay the bills!
re: Cisco, Sycamore Circling Lucent's ATM
MPLS was conceived to solve the traffic engineering (TE) problem by collapsing two disparate control planes and making life easier for network operators.
Plus, it found applications in BGP-MPLS VPNs, fast reroute, etc.
Whatever happened to MPLS? What are the problems plaguing it that make SPs reluctant to deploy it?
__________________________________________________
i'd be interested in this board's view on MPLS.
Seems overly complex for efficient implementation. Scott Bradner thinks it's a Bellhead attempt at job security (my apologies to Scott for the nuance in my interpretation).
re: Cisco, Sycamore Circling Lucent's ATM
> What is mysterious is that only one OC-192 (Yes, ONE OC-192) card is involved in the test!!! 12416 is supposed to take as many as 15 OC-192s
---------------
It's likely a problem with how they do the packet processing on the card itself rather than a problem with throughput of the switch (though it could be buffering into the switch).
Depending on the size of the data, some vendors (and chipsets) use different strategies for processing the packet, and at the points where they switch strategies there can be performance discontinuities.
I would guess that Cisco has a special forwarding feature just for 40-byte packets so they look good on benchmarks, and that the feature doesn't work for other (small) sizes.
re: Cisco, Sycamore Circling Lucent's ATM
i'd be interested in this board's view on MPLS.
Seems overly complex for efficient implementation. Scott Bradner thinks it's a Bellhead attempt at job security (my apologies to Scott for the nuance in my interpretation)
------------------------------------
MPLS is a nice, flexible, generic encapsulation mechanism that can be put to any number of purposes to solve specific problems.
If you talk to the people who deal with MPLS in the "abstract" (i.e., separate from the problems being solved), it's not going to make much sense. But if you look at the reasons why service providers wanted it and the problems they are solving by using it, it makes a lot more sense.
Yes, there are other ways to solve every problem MPLS solves, but that doesn't mean that MPLS isn't needed.
It's not overly complicated. It's simple unidirectional virtual tunnels with a subset of RSVP as a signalling mechanism; I don't know what could have been cut out of that to make it simpler.
(Now of course there is a whole category of "useless" MPLS stuff like CR-LDP and whole bunches of drafts that never went anywhere. They make it complicated to learn about MPLS, but they don't complicate MPLS itself.)
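To make the "it's simple" claim concrete, here is a toy sketch of the label-swap lookup a transit LSR does per packet. The labels, interface names, and the penultimate-hop pop entry are invented for illustration, not taken from any particular implementation.

```python
# Toy MPLS forwarding at a transit LSR: one lookup on (incoming interface, label),
# then swap or pop. Everything below is invented for illustration.

LFIB = {
    # (in_interface, in_label): (out_interface, out_label)
    ("ge-0/0/1", 100): ("ge-0/0/2", 200),   # swap: mid-path hop of a unidirectional LSP
    ("ge-0/0/1", 101): ("ge-0/0/3", None),  # pop: penultimate-hop behavior
}

def forward(in_interface, in_label):
    """Return the outgoing interface and the label operation for one packet."""
    out_interface, out_label = LFIB[(in_interface, in_label)]
    if out_label is None:
        return out_interface, f"pop label {in_label}, forward underlying packet"
    return out_interface, f"swap label {in_label} -> {out_label}"

print(forward("ge-0/0/1", 100))   # ('ge-0/0/2', 'swap label 100 -> 200')
print(forward("ge-0/0/1", 101))   # ('ge-0/0/3', 'pop label 101, forward underlying packet')
```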
re: Cisco, Sycamore Circling Lucent's ATM
> The problem is that the "line-rate throughput versus packet size" curve resembles a saw-tooth pattern. Throughput does not reach 100% until packet size = 70B and it stays at 100% for a while and falls to 85% at (88B-90B) and climbs in a linear fashion to 100% at ~108B or so.
This isn't very surprising. Depending on how packets are quantized into memory buffers and how those buffers are linked together to form packets and queues of packets, you will have discontinuities in line-rate performance for small packet sizes.
It's interesting to test every small packet size at line rate, but a more realistic test would use a traffic distribution which mirrors real packet sizes found in the Internet.
Following on with a few more specifics for OneMoreByte.
Every packet/cell switch I have ever seen uses some kind of internal cell, with an internal cell header, to move the data around. Many people used 64-byte data blocks to carry ATM cells around inside ATM switches. Pick a clock speed where the cell rate of 64-byte cells was >= the cell rate of the incoming ATM cells and you would always keep up, assuming your lookup processing was fast enough.
For frame-based (read IP) switches the choice was not as clear. Minimum packet sizes varied from one L2 protocol to another. Some people picked 64 bytes, some picked other sizes. Depending on what you picked, you would generally suffer a 50% drop in datapath capacity at 1 byte over your internal cell size.
You might also see corresponding drops in capacity at 2N+1 and 3N+1 bytes. The only saving grace here is to run your datapath at 2X the minimum speed so that at N+1 you can keep up.
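To put rough numbers on that, here is a small sketch of the saw-tooth under the assumption of a 64-byte internal cell and a datapath whose raw byte rate equals line rate times some speedup. Both numbers are illustrative, not the 12416's actual internals, and line-side framing overhead is ignored.

```python
import math

# Sketch of the saw-tooth: the datapath moves packets in fixed-size internal cells,
# so a packet one byte over a cell boundary consumes a whole extra cell.
# Cell size and speedup are assumptions for illustration only.

CELL_SIZE = 64   # internal cell payload in bytes (assumed)
SPEEDUP = 1.0    # datapath byte rate relative to line rate (try 2.0)

def sustainable_fraction(packet_size):
    """Fraction of line rate sustainable for back-to-back packets of this size."""
    cells = math.ceil(packet_size / CELL_SIZE)
    # Useful bytes per packet vs. datapath bytes consumed to carry it.
    efficiency = packet_size / (cells * CELL_SIZE)
    return min(1.0, efficiency * SPEEDUP)

for size in (64, 65, 128, 129, 192, 193):
    print(f"{size:4d} B -> {sustainable_fraction(size):6.1%}")

# With SPEEDUP = 1.0 this prints ~50% at 65 B (N+1) and smaller dips at 129 B and
# 193 B (2N+1, 3N+1); with SPEEDUP = 2.0 the N+1 dip disappears, as noted above.
```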
What is mysterious is that only one OC-192 (Yes, ONE OC-192) card is involved in the test!!!
The 12416 is supposed to take as many as 15 OC-192s.
Scratching my head :-) Any pointers appreciated.