arch_1 12/5/2012 | 1:39:00 AM
re: Axiowave Queues in the Core Indianajones said:
"MPLS does provide a connection-oriented framework to handle this. You are raising the issue of the cost of adding this connection-oriented framework to IP, which is a very valid point. "

I am painfully familiar with MPLS. Unfortunately, it does not in fact provide a UNIVERSAL connection-oriented service in today's networks. To use MPLS to avoid oversubscription, we need user-to-user connections on demand, i.e., the MPLS equivalent of X.25, FR, or ATM SVCs. If you do not have user-to-user VCs, then you cannot have guaranteed user-to-user bandwidth. The fact that the RSVP protocol provides a mechanism for SVCs is irrelevant, since the Internet as a whole does not support user-to-user MPLS SVCs.

indianajones further said:
"I really believe that it can be done if you limit the number of different service classes to something small, say 3 to 4. The number of states that one needs to keep in the core is within limits and we can still achieve a good service model without over-provisioning. I am not arguing for a flow-based core - too much complexity with no real tangible return."

You imply that the core will not in fact need to enforce per-flow guarantees. I can conceive of an aggregation scheme that would reduce the amount of state information in the core, but such a scheme would require new protocols that have not been specified or implemented.

indianajones further said:
"It is not entirely true that premium traffic cannot take up a lot of bandwidth."

I did not say that premium services would be a low percentage of total bandwidth. I was quoting another post that made this assertion. I said that IF premium services are a low percentage, THEN the core can (almost always) provide premium service without per-flow state. If premium services comprise more than a small percentage of the bandwidth, then the current protocols are insufficient to guarantee premium service, even assuming that each router can perform "perfect forwarding."
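
To make the "no per-flow state" point concrete, here is a minimal sketch (my own illustration in Python, assuming a plain two-class strict-priority output queue, not any particular vendor's design). The router keeps one queue per class, not per flow; as long as premium arrivals stay well under the link rate, premium packets see little queueing no matter how much best-effort traffic piles up behind them:

    # Minimal sketch: two-class strict-priority output port.
    # One queue per CLASS, not per flow.
    from collections import deque

    class StrictPriorityPort:
        def __init__(self):
            self.premium = deque()
            self.best_effort = deque()

        def enqueue(self, packet, is_premium):
            (self.premium if is_premium else self.best_effort).append(packet)

        def dequeue(self):
            # Premium always goes first; best effort only sees leftover capacity.
            if self.premium:
                return self.premium.popleft()
            if self.best_effort:
                return self.best_effort.popleft()
            return None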
whyiswhy 12/5/2012 | 1:39:00 AM
re: Axiowave Queues in the Core Nice try to dodge the question(s). I was going to walk you through this so you could learn something, but I can see that won't work, so here goes:

One can rent bandwidth much much cheaper than it would cost to buy equipment, install it, operate and maintain it, etc. In other words, it is available to you BELOW TRUE (GAAP) COST.

That's any rational person's definition of cheap. In fact, it's irrationally cheap.

Now with respect to Axiowave, that means there is no simple financial reason for any major carrier to buy their equipment. The market is over-supplied. Period.

Thus my comments re: options and a dollar.

-Why
Sakarabu 12/5/2012 | 1:38:58 AM
re: Axiowave Queues in the Core Why have multiple customers purchased the equipment? (Hint: not all deals have been announced.)
Truelight1 12/5/2012 | 1:38:49 AM
re: Axiowave Queues in the Core Saka...

you're a rank amateur... whyiswhy rightly points out there is no room.

Getting a customer (or a few) is easy unless you're an amateur engineer. Come back when there are 50.

Go Ax-off
indianajones 12/5/2012 | 1:38:46 AM
re: Axiowave Queues in the Core Oh! Now I get it. Your definition of "cheap" is relative. Like, I lost a gazillion $ this quarter, but I am better off because my competitor lost 2 gazillion $. Or, it costs $10 million to light a circuit between LAX and NYC, but it is so "cheap" because if I were to buy and install the equipment myself, it would have cost me $20 million?? whyiswhy, $10 million is $10 million out of the carrier's pocket, especially if he is not making all that much money.

Your silly definition of "cheap" notwithstanding, a carrier's definition of "cheap" is strongly tied to his profits and cash flow.
indianajones 12/5/2012 | 1:38:46 AM
re: Axiowave Queues in the Core arch_1 said:

"I am painfully familiar with MPLS. Unfortunately, it does not in fact provide a UNIVERSAL connection-oriented service in today's networks. To use MPLS to avoid oversubscription, we need user-to-user connections on demand, i.e., the MPLS equivalent of X.25, FR, or ATM SVCs. If you do not have user-to-user VCs, then you cannot have guaranteed user-to-user bandwidth. The fact that the RSVP protocol provides a mechanism for SVCs is irrelevant, since the Internet as a whole does not support user-to-user MPLS SVCs."

The right prescription may be for carriers to start provisioning LSPs with service-specific requirements in their private IP networks. I think there are adequate tools and mechanisms to do this today. The right place to start would be to provide POP-to-POP guarantees using LSPs and diff-serv aware TE, similar to ATM PVCs, and go from there.
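
Purely as an illustration of the bookkeeping involved (toy Python, with made-up class names and shares, not any vendor's provisioning interface), a POP-to-POP mesh of per-class bandwidth-reserved LSPs is just a small table keyed by (source, destination, class), much like an ATM PVC mesh:

    # Illustrative sketch only: a per-class, POP-to-POP LSP mesh with
    # reserved bandwidth. Class names and shares are assumptions.
    from itertools import permutations

    CLASSES = {"EF": 0.10, "AF": 0.30, "BE": 0.60}   # fraction of each uplink per class
    LINK_GBPS = 10.0

    def provision_mesh(pops):
        """One (src, dst, class) -> reserved Gbps entry per LSP."""
        mesh = {}
        for src, dst in permutations(pops, 2):
            for cls, share in CLASSES.items():
                # split the class's share of the uplink evenly across the LSPs leaving src
                mesh[(src, dst, cls)] = LINK_GBPS * share / (len(pops) - 1)
        return mesh

    print(len(provision_mesh(["NYC", "LAX", "CHI"])))  # 3 POPs x 2 peers x 3 classes = 18 LSPs

With 3 to 4 classes the table grows as the square of the POP count times the number of classes, which is manageable for POPs, though not for individual users.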

Re: second point, don't we have TE LSPs as the framework already to do what you want? Why do we need additional specifications?
arch_1 12/5/2012 | 1:38:41 AM
re: Axiowave Queues in the Core I asserted that QoS guarantees require user-to-user SVCs.

Indianajones responded:
"The right prescription may be for carriers to start provisioning LSPs with service-specific requirements in their private IP networks. I think there are adequate tools and mechanisms to do this today. The right place to start would be to provide POP to POP guarantees using LSPs and diff-serv aware TE, similar to ATM PVCs and go from there.

Re: second point, don't we have TE LSPs as the framework already to do what you want? Why do we need additional specifications?"

Preprovisioned POP-to-POP LSPs fail. Consider the following grossly oversimplified network: The core is a set of highly interconnected routers, to which 1001 PoPs are connected, each with a single 10Gbps link. Your proposal requires that each PoP have a bandwidth-reserved LSP to each of the other PoPs. That's 1000 LSP terminations at each PoP. In this arrangement, the preprovisioned bandwidth guarantee must average at most 10Mbps per LSP.
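
Spelling out the arithmetic (a trivial Python sketch of the toy numbers above, nothing more):

    # Toy numbers from the example above: a full mesh of reserved LSPs
    # over a single 10 Gbps uplink per PoP.
    POPS = 1001
    LINK_MBPS = 10_000                 # one 10 Gbps link per PoP
    lsps_per_pop = POPS - 1            # one LSP to every other PoP
    avg_guarantee_mbps = LINK_MBPS / lsps_per_pop
    print(lsps_per_pop, "LSPs, at most", avg_guarantee_mbps, "Mbps average guarantee each")
    # -> 1000 LSPs, at most 10.0 Mbps average guarantee each

And that is the average; any PoP pair that needs more must take it away from the others.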

The only way around this is to adjust the bandwidth guarantees on the LSPs on demand, when an endpoint needs to add bandwidth, i.e., when a new user-to-user connection is established. The IETF could come up with a protocol to do this, but it will take several years from initial IETF draft to universal deployment, and we are not yet at the draft stage.
carrierguy 12/5/2012 | 1:38:38 AM
re: Axiowave Queues in the Core procket_guy

No, I am not from Axiowave. Recall that I responded to Tony's question about 8812 results. I had not brought it up before his query.

Note to Tony: You are right, Eric Weiss seems to have removed the ppt even though the link is still there. I could swear it was up a couple of days back. No worries, I will e-mail the ppt to your .li email address.

procket_guy, thanks for the pdf.
It took me a while to read it. I realize you guys did not write it, but it is one of the most poorly written documents I have read in a while. It is long and full of confusing definitions and terminology. Basically, it looks like they evaluated the ability of the different line card schedulers to split the bandwidth precisely between BE and LBE traffic. It really did not concern itself with performance under switch fabric and port congestion.

I am not sure who told them to test the 7609 and M10. A more sensible comparison would be the 12410 and T-series platforms. Everyone knows that the M-series has a flawed WRR scheduler that does not consider packet size when scheduling; they have since remedied the situation with the T-series. The Cisco 12410 has implemented DWRR in its Engine 4 line cards, although it does not work under a lot of scenarios. The point is that the test compared the WRR of the M10 with the DWRR of the 8801 and concluded that the DWRR of the 8801 does a better job of bandwidth allocation than the WRR of the M10 and God knows what in the 7609. Is that a surprise? One really does not need an elaborate test (with a boring text to accompany it) to reach that conclusion.
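
For anyone who has not looked at the two schemes side by side, here is a bare-bones deficit round robin sketch (textbook DRR in Python, not any vendor's implementation). The deficit counter makes the scheduler byte-accurate, so a queue full of 1500-byte packets cannot grab more than its share the way it can under packet-count WRR:

    # Bare-bones deficit round robin. Each queue earns a byte credit per
    # round in proportion to its weight and may only send packets it can
    # "pay" for, so bandwidth is split by bytes, not by packet count.
    from collections import deque

    def drr(queues, weights, quantum=1500):
        deficits = [0] * len(queues)
        sent = []
        while any(queues):
            for i, q in enumerate(queues):
                if not q:
                    continue
                deficits[i] += quantum * weights[i]
                while q and q[0] <= deficits[i]:   # queues hold packet sizes in bytes
                    deficits[i] -= q.popleft()
                    sent.append(i)
        return sent

    # Equal weights, queue 0 all 1500-byte packets, queue 1 all 64-byte packets:
    # DRR serves roughly equal BYTES from each, while plain WRR would serve
    # roughly equal packet COUNTS and starve the small-packet queue of bandwidth.
    print(drr([deque([1500] * 4), deque([64] * 40)], [1, 1]))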

Let me know if I am missing anything, but there were no details on maximum latency, jitter, or packet loss when the OC-48 and GigE links were overloaded. No details on what kind of packet size distribution was used, whether the traffic was bursty, etc. Those would have been more realistic and useful. I am sorry, but I would really like to see maximum latency, jitter, and packet loss results for the various platforms under stress and unpredictable traffic patterns, and I did not see that.
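
Just so we are talking about the same numbers, the summary I have in mind is trivial to compute once you have per-packet send and receive timestamps (a hypothetical Python sketch; the trace format is my assumption, not anything from the test report):

    # Sketch: summarize a test stream given (tx_time, rx_time_or_None) pairs.
    def summarize(trace):
        delivered = [(tx, rx) for tx, rx in trace if rx is not None]
        latencies = [rx - tx for tx, rx in delivered]
        loss_pct = 100.0 * (len(trace) - len(delivered)) / len(trace)
        max_latency = max(latencies)
        # crude jitter figure: worst-case delay variation between consecutive packets
        jitter = max(abs(a - b) for a, b in zip(latencies, latencies[1:]))
        return max_latency, jitter, loss_pct

Run that for the premium stream while the port and fabric are deliberately overloaded, and you have a result worth publishing.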
bobcat 12/5/2012 | 1:38:37 AM
re: Axiowave Queues in the Core >>Preprovisioned POP-to-POP LSPs fail. Consider the following grossly-oversimplified network: The core is a set of highly-interconnected routers, to which 1001 PoPs are connected, each with a single 10Gbps link. Your proposal requires that each POP have a bandwidth-reserved LSP to each of the other PoPs. That's 1000 LSP terminations at each PoP. In this arrangement,the preprovisioned bandwith guarantee must average at most 1Mbps per LSP.

The only way around this is to adjust the bandwidth guarantees on the LSPs on demand, when an endpoint needs to add bandwidth i.e., when a new user-to-user connection is established. The IETF could come up with a protocol to do this, but it will take several years from initial IETF draft to universal deployment, and we are not yet at the draft stage.


Huh...?
Sounds like a case for ATM/PNNI in the core and UNI/SVCs at the edge utilizing soft PVCs... SONET transport for APS and VT concatenation and OAM? Nah!!!
Maybe MPLS LSPs over ATM soft PVCs, and ATM ABR QoS end to end? Nah!!!!
Wait for the IETF to figure it out.
Lunch Time..!
whyiswhy 12/5/2012 | 1:38:33 AM
re: Axiowave Queues in the Core "Your silly definition of "cheap" notwithstanding, a carrier's definition of "cheap" is strongly tied to his profits and cash flow."

You seem to be beginning to understand. Any carrier you ask would tell you bandwidth (for bandwidth services) is cheap....too cheap to make any money from it, and thus too cheap to be buying any new equipment (to service the demand).

And that equation would be no different if Axiowave's stuff could do 10X better than the existing equipment; the numbers are off by about 100X.

Axiowave will not find an exit that makes their investors any money.

-Why