carrierguy 12/5/2012 | 1:40:41 AM
re: Axiowave Queues in the Core
Scott,

You mention in your article that Caspian and Procket have flow-based architectures and IP QoS. Is that true? Procket does not have a flow-based architecture. Nowhere on its website does Procket talk about maximum latency and jitter guarantees.

This is the first time someone is coming out with an IP product that can guarantee maximum latency values. If it works as claimed, it has real value.
And to its credit, Axiowave has announced a customer deployment that can guarantee ATM SLAs.
In fact, you guys have a news release about Aleron touting its IP SLAs. (I am assuming Aleron and PowerNet Global are the same company.)
I have not come across any service provider touting maximum point-to-point latency SLAs for IP services. So that is definitely new in the IP world.

There is a thread on NANOG about the latest Cook Report, on why best-effort business models and SLAs will doom carriers. Good to see that people are finally waking up to this ticking time bomb.
Tony Li 12/5/2012 | 1:40:40 AM
re: Axiowave Queues in the Core
You are correct, Procket's architecture is not flow based.

Providing a latency bound for IP is not hard. Most architectures will allow you to set your queue sizes down to whatever you desire. The hard part is providing enough buffering for high performance TCP.
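Both halves of this point reduce to simple arithmetic: a capped egress queue bounds worst-case latency to queue depth divided by line rate, while a single long TCP flow wants roughly a bandwidth-delay product of buffering. A back-of-envelope sketch (illustration only, not any vendor's code):

```python
# Sketch: the latency bound you get from capping an egress queue,
# versus the bandwidth-delay-product buffering TCP would like.

def queue_latency_bound(queue_bytes: int, line_rate_bps: float) -> float:
    """Worst-case wait for a packet at the tail of a full egress queue."""
    return queue_bytes * 8 / line_rate_bps

def tcp_buffer_bytes(line_rate_bps: float, rtt_s: float) -> int:
    """Classic bandwidth-delay-product buffer sizing for one long TCP flow."""
    return int(line_rate_bps * rtt_s / 8)

OC48 = 2.488e9  # bits/s

# Capping the queue at 64 KB bounds queueing latency to ~0.2 ms on OC-48...
print(f"{queue_latency_bound(64 * 1024, OC48) * 1e3:.3f} ms")
# ...but a TCP flow with a 100 ms RTT would like ~31 MB of buffer.
print(f"{tcp_buffer_bytes(OC48, 0.100) / 1e6:.1f} MB")
```

The tension is exactly the trade-off described: shrinking queues buys a latency bound at the cost of the deep buffering high-performance TCP wants.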

I should also point out that most other architectures are quite capable of handling a high percentage of high priority traffic. When things become 'interesting' is when the high priority traffic starts to burst over the line rate, even instantaneously.

Tony
tjs 12/5/2012 | 1:40:40 AM
re: Axiowave Queues in the Core
I remember one visit to Plexus... The Nexabit boxes were stacked to the ceiling. "They can't make 'em work and we can't keep 'em" was the whispered verdict. Announcing betas with little-known guys is worthless. When VZ or BT announce, that means something.

The whole thing was a fable. Anyone who buys this story is just living the "greater fool theory".

carrierguy 12/5/2012 | 1:40:39 AM
re: Axiowave Queues in the Core
Tony,

You've got to be kidding.

What you are referring to is setting the egress queue buffer size. The tough problem is providing a latency bound for premium traffic when there is switch fabric contention and egress port contention. You can always set a queue size, but if your architecture cannot guarantee that high-priority traffic will not be subjected to packet loss or jitter, then it does not matter.

I am sure (in fact I know) that one can configure maximum queue depths on Cisco 12xxx and Juniper platforms too. Then why do those platforms still produce the nasty results they do?

And what do you mean by "high priority traffic ... burst over line rate"? High-priority traffic always needs to be less than the bandwidth of the egress interface.
Scott Raynovich 12/5/2012 | 1:40:38 AM
re: Axiowave Queues in the Core
Yes, poor wording on that. I have changed it. Caspian was the flow-based proponent, not Procket.
indianajones 12/5/2012 | 1:40:36 AM
re: Axiowave Queues in the Core
Tony,

The hard part is providing latency and jitter guarantees when there is transient congestion in the network. Your statement "providing a latency bound for IP is not hard" assumes a very lightly loaded link. Try congesting one of your ports and watch what the maximum latency of high priority traffic is like. You would be unpleasantly surprised.

The reason for ATM is that people could design switch architectures around fixed cell sizes and guarantee CDV and CTD. Folks could never do that with variable-sized IP packets.

Btw, tjs, isn't Aleron an announced customer and not a beta site? At least that is what the press release says.... Also, if you look at the Aleron website, they seem to have a pretty big OC-48 backbone.
uguess 12/5/2012 | 1:40:35 AM
re: Axiowave Queues in the Core
Indianajones,

You are not exactly correct. Congestion doesn't necessarily cause the latency of high-priority traffic to go way off base, provided you have QoS enabled with strict-priority scheduling and assign the high-priority queue depth properly.
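The strict-priority claim can be illustrated with a toy scheduler (a hypothetical sketch, not any vendor's implementation): the high queue is always drained before the low queue, so a low-priority backlog cannot delay high-priority packets behind it.

```python
from collections import deque

def run_scheduler(arrivals):
    """Toy strict-priority scheduler.

    arrivals: list of (priority, pkt_id) tuples, priority "hi" or "lo".
    Returns the order in which packets are served: the high-priority
    queue is always emptied before the low-priority queue is touched.
    """
    high, low = deque(), deque()
    for prio, pkt in arrivals:
        (high if prio == "hi" else low).append(pkt)
    order = []
    while high or low:
        order.append(high.popleft() if high else low.popleft())
    return order

# Three queued low-priority packets never delay the two high-priority ones.
print(run_scheduler([("lo", "L1"), ("lo", "L2"), ("hi", "H1"),
                     ("lo", "L3"), ("hi", "H2")]))
# → ['H1', 'H2', 'L1', 'L2', 'L3']
```

With strict priority, the high-priority delay depends only on the depth of the high-priority queue itself, which is why bounding that queue depth bounds the latency.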

uguess
Upside_again 12/5/2012 | 1:40:32 AM
re: Axiowave Queues in the Core
"given that Axiowave is focusing on Tier 2 and Tier 3 carriers"

These redo Nexabit boys are selling core routers (better than Juniper and Cisco) to smaller carriers? Oh ok........

I don't think even Lucent will buy into this (or would they?)
NetDiva 12/5/2012 | 1:40:31 AM
re: Axiowave Queues in the Core
The factors that contribute to the non-guaranteed latency and jitter behaviour of IP traffic are probably the variable packet size and the connectionless nature of IP networks (as opposed to the fixed cell size and connection-oriented, bandwidth-negotiated ATM network).

For high-priority IP traffic, though, it is possible to guarantee minimum latency bounds if the node implements strict-priority or preemptive packet scheduling in the switching fabric as well as in the egress port queues for the high-priority traffic.
Bounding the end-to-end latency variation for high-priority IP traffic may be difficult, though, because of the different QoS architectures and scheduling schemes supported by the network elements along the path.
It is possible to do some network delay estimation at the application level and tune queue depths at the service level (but that becomes application-specific again, and one falls back into the flow-based category).

This all assumes that the high-priority premium IP traffic is engineered, in other words not oversubscribing the link capacity, but allocated a small percentage of it.

If this traffic starts using the full link capacity, you know what happens to the other bursty traffic on the link.
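The per-hop bound being described can be put in numbers (a hedged sketch under the post's own assumptions, not any vendor's figures): with non-preemptive strict priority, a premium packet waits for its own queue's backlog plus at most one maximum-size lower-priority packet already on the wire, since IP packets, unlike ATM cells, cannot be preempted mid-transmission.

```python
# Sketch: per-hop worst-case latency for premium traffic under
# non-preemptive strict-priority scheduling. The bound is the premium
# queue's own backlog plus one non-preemptable max-size packet.

def premium_latency_bound(backlog_bytes: int, mtu_bytes: int,
                          line_rate_bps: float) -> float:
    serialization = mtu_bytes * 8 / line_rate_bps   # one packet already on the wire
    own_queue = backlog_bytes * 8 / line_rate_bps   # premium queue backlog
    return serialization + own_queue

OC48 = 2.488e9  # bits/s

# A 9000-byte jumbo frame ahead of an otherwise empty premium queue
# adds roughly 29 microseconds of jitter on OC-48.
print(f"{premium_latency_bound(0, 9000, OC48) * 1e6:.1f} us")
```

This is why keeping the premium class to a small, engineered fraction of the link matters: it keeps the premium backlog term small, leaving only the one-packet serialization term as irreducible jitter.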