Tony Li 12/5/2012 | 12:01:46 AM
re: Luca Martini, Level 3
Tony said in a previous post that ATM offers too much from a QoS point of view, and I suspect that he was referring to the not-needed customer separation in the core of the network.

That's one of many issues that one could cite. As I suspect Mark is about to point out, when traffic is aggregated, the granularity of "true, hard" QoS is extremely painful to support. Do you really want to support per-flow queueing in the middle of the Internet? Do you really want to open up 4 VC's every time someone clicks on a web link?

Scalability comes from aggregation and abstraction, and that is in direct conflict with detailed packet handling. Such detail is highly appropriate in a congested access network. The core, however, is less likely to congest and is also unlikely to do anything meaningful other than deny service to a large traffic aggregate. As such, the additional granularity is of no benefit.

Aggregation of VP's is a good start at dealing with this, but with only one level of aggregation, you bound the architecture of the system rather severely.

There is no free lunch. The right answer is to make tradeoffs between scalability, precision, and cost.

Tony Li 12/5/2012 | 12:01:46 AM
re: Luca Martini, Level 3

The requirements that are placed in front of me are all about VoIP, where the guarantees are not as "hard" as the ones that you're apparently seeing.

In any case, MPLS implementations can certainly differ, and there are no defined service requirements for any particular EXP code points, just as with DiffServ. If someone felt that they needed to provide "hard" guarantees, they are more than welcome to do so. This is a system architecture question, not a protocol definition issue.

Mark Seery 12/5/2012 | 12:01:41 AM
re: Luca Martini, Level 3 Hi Doug,

>> Years ago I worked for a company that provided equipment to MFS, TCG, BD Tel, and other for their native LAN service (10 meg). It was MFS' fastest growing service, with more than 3000 ports deployed, before Worldcom bought them and killed it ("we only sell pipes, not services...") Similar story with TCG and AT&T. <<

Ahhhh. memories......

>> One more point: customers fool themselves regarding security all the time. It's the only way to sleep at nights ;) <<


The real problem with running Ethernet over fiber to a premises is that there is no OAM capability defined in the standard. OAM would allow a **really** cheap box at the premises, or would allow the CPE to participate at the OAM layer. Ping is not really good enough for everything SPs need to do.

Mark Seery 12/5/2012 | 12:01:41 AM
re: Luca Martini, Level 3 Tony Li Wrote:

>> But yes, I would definitely consider IP the connectionless part and MPLS as the connection-oriented part of the architecture. <<

If you really want to discuss something interesting, this is the place to start, readers; it's head and shoulders more interesting than discussing QoS ;-)

Though I am not suggesting Tony meant to imply this, the real philosophical debate is: can you develop a better connectionless network than IP, and if not, why try to emulate that mode via MPLS? Where the debate eventually leads is also a debate about whether you should have one network for CO and one for CNLS. Anyway, just a teaser too good to ignore. I'm not suggesting what the answer is, I am just suggesting it's a hell of an interesting thing to discuss next time you are stuck with your friends and a few beers.


p.s. Tony, thanks for offering your opinion on the originally posed question.
indianajones 12/5/2012 | 12:01:40 AM
re: Luca Martini, Level 3 skeptic,

I agree with your statement about being able to support CBR/rt-VBR etc. with MPLS hardware. The point I was trying to make is that people seem to assume that DiffServ and simple priority will do the job, when what is required is a system architecture solution that can deliver CBR/rt-VBR-like service on IP/MPLS with variable-size packets.
indianajones 12/5/2012 | 12:01:40 AM
re: Luca Martini, Level 3 Tony,

I certainly agree that it is a system architecture issue and not protocol definition issue. I also agree with you that per-VC state is meaningless in the core and aggregated state makes more sense.

What irks me is when people say that at OC-48 or OC-192 speeds, latency and jitter are not an issue. It is not an issue when you run your links at 10% (as many LH links are being run currently). These people should go and study basic queueing theory. At low link utilizations, queueing delay is minimal whether it is an OC-3 link or an OC-192 link. Only when you start to crank up the utilization does the asymptotic curve kick in. If you want to buy expensive OC-192 gear and run it at 10-15% utilization, be my guest. But don't complain that we are in a telecom winter.
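That asymptotic curve is the textbook M/M/1 result: mean time in system is the service time divided by (1 - utilization), so delay is nearly flat at low loads and explodes as you approach saturation. A quick sketch (the function name and numbers are mine, for illustration):

```python
# M/M/1 queueing: delay is near the bare service time at low utilization
# and grows without bound as rho approaches 1, regardless of link speed.

def mm1_time_in_system(service_time_us, utilization):
    """Mean time in system (queueing + service) for an M/M/1 queue, in us."""
    assert 0.0 <= utilization < 1.0, "rho must stay below 1 for a stable queue"
    return service_time_us / (1.0 - utilization)

# 1500-byte packet service time is ~1.2 us on a 10 Gbps link:
for rho in (0.10, 0.50, 0.70, 0.90):
    print(rho, round(mm1_time_in_system(1.2, rho), 2))
```

At 10% utilization the delay is barely above the bare service time; at 90% it is ten times larger - on any link speed.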

The key issue is whether your VoIP sessions still function properly when you are running at reasonably high utilizations. If you do not have tight latency and jitter bounds for VoIP, I doubt it would work well when you try to fill the pipe.
skeptic 12/5/2012 | 12:01:40 AM
re: Luca Martini, Level 3 >> Simply using Diff-Serv and prioritizing traffic will not cut it. Cisco and Juniper have been parroting this for a while now but the truth of the matter is that they have pretty unpredictable latency and jitter. <<

I would not confuse what cisco and juniper say & deliver with what is *possible*.

There is nothing in the ATM notions of CBR/UBR/VBR which could not be delivered in hardware in an IP/MPLS router. The only issue I can think of that would be difficult would be control of jitter on slow-speed links if there are large amounts of variability in packet size.

On the vendor side, there is a lot of frustration in terms of delivering the QOS features that people say they want.....and then having them turn around & not use them & say they wanted something else.

Or they hear a lot about QOS from people who have no intention of buying their equipment under any circumstances because they think ATM is the perfect solution.
Mark Seery 12/5/2012 | 12:01:39 AM
re: Luca Martini, Level 3 indiana,

>> Only when you start to crank the utilization, the asymptotic curve starts to kick in. <<

Agreed. That threshold is at about 70%, right?

Mark Seery 12/5/2012 | 12:01:39 AM
re: Luca Martini, Level 3 Skeptic,

>> Or they hear lots about QOS from people who have
no intention of buying their equipment under
any circumstances because they think ATM is
the perfect solution. <<

Or they think their ATM vendor is the right partner. It is not always just the technology. (And I don't mean that in a negative way: understanding, appreciating, and serving your customer is important, and so is appreciating that not all service providers/units have the same requirements/needs.)

Mark Seery 12/5/2012 | 12:01:39 AM
re: Luca Martini, Level 3 So let's talk access and QoS in general, but first a disclaimer:


I am currently involved in the access space, so you should be aware that a) I might have an unstated agenda and b) I might be vague about some issues out of respect to both technology providers and service providers in that space with whom I have both explicit and implicit confidentiality requirements.


What is QoS all about at the end of the day? It's all about providing throughput and traffic characteristics that are repeatable, measurable, and billable, as agreed upon by two parties (whether they be companies or applications).

An agreement means you get what you paid for, regardless of what else is going on in the network, or there is some penalty involved.

There are three very specific issues in the access network, two of which are caused by the same thing:

a) The need to have high utilization due to the lack of bandwidth
b) The potential for head of line blocking due to the lack of bandwidth
c) The potential for head of line blocking due to bursty traffic

Copper-based access systems are starved for bandwidth. Massive over-provisioning is not an answer, for a variety of technical **and** regulatory (state) reasons. So a solution is needed that can drive utilization rates very high if multiple services are going to be offered over the same facility.

The lack of bandwidth creates the potential for head-of-line blocking = latency + jitter. Consider that on a 1 Mbps link, a 1522-byte frame will take 12 ms to serialize (quantization is another term used), whereas on a 10 Gbps link (as used in the core) the same frame will take 1.2 microseconds to serialize. Get one of those frames in front of a voice packet, and you have some problems. 12 ms of added jitter/latency is a serious issue; 1.2 microseconds is beyond trivial. Consider that at 1.2 microseconds, you would have to have 1000 large frames scheduled before you, before you experienced 1 ms of extra latency. The chances of that are diminished in a priority-based system at the core (though I am not finished commenting on that and will return to it later).

Just for the record, 12 ms of added latency is very bad, so if supporting voice, even VoIP, you are going to want to set the MTU lower, or find some other mechanism for chopping up that frame. ATM is one answer, though a cell size of 53 bytes is overkill at that speed (though not at DS0, for which it was designed).

So what do you want to keep latency down to? When connecting to legacy voice switches, about 2 ms. When doing VoIP end to end, the conventional wisdom is that because of echo cancellers, playback buffers, and DSP speculation, there is a lot more tolerance. For end-to-end VoIP you have to look at the end-to-end voice budget and make a set of decisions. Just for the record, WRT the concerns of international voice propagation delay, I still tend to think you want to save latency wherever you can, just in case - you just need to have some flexibility about how you configure your requirements.
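The 12 ms vs 1.2 microseconds comparison comes straight from the serialization arithmetic; a one-liner makes it easy to rerun for other link speeds (the function name is mine, for illustration):

```python
# Serialization delay: the time to clock one frame onto the wire.
# This is the head-of-line blocking penalty a voice packet can suffer
# when a large frame is already being transmitted.

def serialization_delay_us(frame_bytes, link_bps):
    """Time to serialize one frame, in microseconds."""
    return frame_bytes * 8.0 / link_bps * 1e6

access = serialization_delay_us(1522, 1e6)    # 1 Mbps access link
core = serialization_delay_us(1522, 10e9)     # 10 Gbps core link
print(access, core)  # ~12176 us (~12 ms) vs ~1.22 us
```

The same frame is a serious jitter event on the access link and noise in the core, which is why fragmentation (or a smaller MTU) matters only at the edge.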

So then bursty traffic increases the likelihood of one of those large, nasty frames getting in front of the voice packet, so that has to be controlled. Policing is part of the solution, but consider that there is no contract on the Ethernet link, i.e. the traffic being received is not shaped; so if you have a really strict policer, you could end up dropping a lot of traffic, and you have to deal with that basic problem.
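A policer in this context is usually described as a token bucket: tokens refill at the contracted rate, and a packet conforms only if enough tokens are available. A minimal sketch, with the class name and parameters invented for illustration - note how unshaped back-to-back arrivals immediately go out of profile, which is exactly the over-dropping risk described above:

```python
# Token-bucket policer sketch. Conforming packets pass; excess traffic
# is dropped or marked. With unshaped input, a strict (small) burst
# allowance drops a lot of otherwise-legitimate traffic.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0        # token refill rate, bytes/sec
        self.burst = float(burst_bytes)   # bucket depth, bytes
        self.tokens = float(burst_bytes)  # start with a full bucket
        self.last = 0.0                   # time of last arrival, seconds

    def conforms(self, now, pkt_bytes):
        """True if the packet fits the contract at time `now` (seconds)."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False                      # out of profile: drop or mark

# An 8 Mbps contract with only a one-frame burst allowance:
policer = TokenBucket(rate_bps=8e6, burst_bytes=1500)
print(policer.conforms(0.0, 1500))   # first frame conforms
print(policer.conforms(0.0, 1500))   # back-to-back second frame does not
```

Loosening the burst depth is the usual compromise: it tolerates unshaped bursts at the cost of a weaker instantaneous rate guarantee.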

There are other subtle aspects of QoS, like the ability to measure and record traffic demands and do capacity planning, and the ability to spot transient problems in a network; perhaps even the ability to make real-time adjustments. All these issues call for a strong OAM capability. If you agree with this logic, then you have to conclude this is one advantage ATM has over MPLS (today).

ATM provides the ability to leverage per-flow QoS if you think that is part of the solution; so does (fixed-filter) MPLS RSVP-TE. ATM defines QoS algorithms; MPLS does not. What that means in practice is that there is less certainty about what you are getting from an MPLS vendor, and you have to spend more time really understanding what that vendor thinks QoS is and how it is implemented. This is not a terminal problem, but it is an issue.

In theory, it is valid to say that MPLS does not support QoS as well as ATM, because there is no real specification for it that is in any way widely considered part of a standard implementation. But in practice it is not that simple: saying that MPLS does not support QoS because some modes and implementations do not support it is about as valid as saying that ATM does not support QoS because of the presence of UBR.

You really have to be specific about the control plane, OAM, user-plane semantics, and PHB before you can make such a statement with any amount of substance. In practice, neither ATM nor MPLS is one immutable thing (despite the model that Tony probably had in his head when he wrote that 1999 article ;-) ); each is a toolset, and you need to know exactly what you are implementing and why.

Some additional thoughts:

Diffserv is by no means a model which is widely accepted as being valid, even in the IP community. There is the concern that you can only really differentiate between two classes of service, and beyond that you are wasting your time. In addition, if you don't combine it with bandwidth reservations per class, you may have a situation reminiscent of peak-hour traffic, where one lane is great and the rest are screwed. There is a lot of interesting research on this, but I don't have it at my fingertips.
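One common way to combine class differentiation with per-class bandwidth reservations is a deficit round-robin scheduler, where each class's quantum fixes its share of the link so no single "lane" starves the others. A sketch, with all names and numbers my own:

```python
# Deficit round robin (DRR) across service classes: each class earns
# `quantum` bytes of sending credit per round, so long-run bandwidth
# shares track the quanta and no class can starve another.
from collections import deque

class DRRScheduler:
    def __init__(self, quanta):
        self.quanta = quanta                       # {class: bytes per round}
        self.queues = {c: deque() for c in quanta}
        self.deficit = {c: 0 for c in quanta}

    def enqueue(self, cls, pkt_bytes):
        self.queues[cls].append(pkt_bytes)

    def run_round(self):
        """One scheduling round; returns the (class, size) packets sent."""
        sent = []
        for cls, q in self.queues.items():
            if not q:
                self.deficit[cls] = 0              # empty class keeps no credit
                continue
            self.deficit[cls] += self.quanta[cls]
            while q and q[0] <= self.deficit[cls]:
                pkt = q.popleft()
                self.deficit[cls] -= pkt
                sent.append((cls, pkt))
        return sent

# Reserve voice three times data's share of the link:
sched = DRRScheduler({"voice": 1500, "data": 500})
```

With these quanta, a 1500-byte data frame has to wait three rounds to accumulate enough credit, while voice can send a full frame every round.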

I do know carriers who will state that for connection-oriented services you still want to have reservations through the core on a per-flow basis. They believe this is the only way to guarantee SLAs. They are, of course, right: reservation is the only way to guarantee anything. We now have a large community of subscribers who are also used to thinking that way, so regardless of the truth, that reality lives with us for a while.

Tony made the web-browser analogy, and of course he was right (though he forgot to mention the broken model of all those excessive TCP sessions that are created even if you are on the same site!). During the boom, there was absolutely no way in *&*^ we could have scaled the Internet using a CO model that went to the subscriber - and scaling the Internet was important, regardless of short-term business-case discussions, for a variety of reasons; anybody who really has a passion for networking, IMO, wants desperately to get packets to the people, as much as Ford wanted to get cars to them.

Even if you believe you could get an SVC-based model to work that fast, you are then faced with the problem that packet processing is much more intense than TDM processing, so what works in the TDM world may not work in the packet world. When you have to develop a queue manager that is arbitrating between millions of flows, it is a nightmare. Now I suppose some Network Processor employee is going to jump on and say it can be done - so I should be prepared for that ;-) But when you are developing a system that supports, for the sake of argument, 64 OC-192s, that does MPLS, IP, Martini, PWE,....and a queue manager that supports millions of flows...I don't think so: you are going to lose out somewhere. The bottom line is that for any given technology you start with a gate/power budget - if you spend it on one thing, you are not spending it on another. So if you do something, there had better be a very compelling reason why you are doing it. And by the way, if you are working at 10 gig, you don't have a lot of time to get through your pipeline, which is another problem.
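The pipeline-time argument is easy to quantify: at line rate, the scheduler must pick a winner among all those flow queues within the inter-arrival time of a minimum-size packet. A back-of-the-envelope sketch (the 20-byte framing overhead is my assumption for Ethernet preamble plus inter-frame gap):

```python
# Per-packet processing budget at line rate: the window in which lookup,
# queueing, and scheduling decisions must all complete. Millions of
# per-flow queues make fitting the arbiter into this window very hard.

def per_packet_budget_ns(link_bps, pkt_bytes, framing_bytes=20):
    """Inter-arrival time of back-to-back minimum-size packets, in ns."""
    return (pkt_bytes + framing_bytes) * 8.0 / link_bps * 1e9

# 64-byte frames plus 20 bytes of preamble/IFG at 10 Gbps (~OC-192):
print(round(per_packet_budget_ns(10e9, 64), 1))  # ~67.2 ns per packet
```

Roughly 67 ns per packet at 10 gig, and that budget is shared by every feature on the card; spend it on per-flow arbitration and it is not available for anything else.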

So what if you believe that CO is needed end to end, because that is the business model you want to work on? That's fine with me; I am all for diversity of value chains - that is a good thing for the subscriber, and a good thing for the industry. Now say that CO traffic is a small number of flows compared to the CNLS box. Do vendors build a box that does, say, n CO + strict-QoS flows and a much larger number of CNLS flows, or do SPs use the best available CO technology for the CO traffic, and the best available CNLS technology for CNLS traffic? Well, that is the 64K question, my friends.

Well, for those that don't recognise it, the latter is the Sprint model. Though they have not always articulated it that way (sometimes it comes across / is reported simply as an anti-MPLS message), in reality they simply have a different squint on architecture. But that is a whole other thread.....

One last anecdote on shaping since it was brought up. I was talking to a planner at a large ISP a while back and he mentioned that he didn't have enough capacity in his distribution layer (ethernet switches) and I said something like " that sounds like a problem " and he replied " well actually it is a form of traffic shaping without it intending to be that ". So you see, traffic shaping happens in all kinds of weird and wonderful ways.

Seriously, by the time traffic gets to the core it has already gone through a number of aggregation layers, and therefore is less bursty. In addition, the core boxes are planned with an assumption of max oversubscription on any core-facing link, and the number of links being received from the distribution network or edge is well understood - and therefore so is the max potential traffic demand.
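That planning assumption is simple arithmetic: worst-case demand on a core-facing link is bounded by the sum of the edge links feeding it. A sketch (the link counts and rates below are invented):

```python
# Capacity planning sketch: with a known set of edge-facing links, the
# worst-case demand on a core-facing link is bounded, and the planned
# oversubscription ratio follows directly.

def worst_case_demand_gbps(edge_links, edge_rate_gbps):
    """Upper bound on offered load: every edge link fully busy at once."""
    return edge_links * edge_rate_gbps

def oversubscription_ratio(edge_links, edge_rate_gbps, core_rate_gbps):
    return worst_case_demand_gbps(edge_links, edge_rate_gbps) / core_rate_gbps

# 40 x 1 Gbps edge links feeding a 10 Gbps core link -> 4:1 oversubscribed
print(oversubscription_ratio(40, 1.0, 10.0))  # 4.0
```

Aggregation smooths the bursts, so the planner bets that actual demand stays well under the 4:1 worst case.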

