Comments
spelurker
12/5/2012 | 4:08:26 AM
re: QOS Fees Could Change Everything
> can the routers, any of them, run at 100% utilization on all
> of their ports, all the time?

It's a statistical game. Rarely do traffic patterns line up so that happens. (If a highway is 4x as wide as an access road, does it have exactly 4x the traffic in each direction?) But if you throw the bandwidth at them they can handle it. Weird congestion effects do sometimes show up between ~95% and 100% utilization. Since today's links are effectively all point to point, the problems come from the desired output bandwidth temporarily bursting over 100%, then bouncing back down as individual TCP sessions back off.
Typical link utilization varies per location.
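
To put rough numbers on the statistical game, here is a back-of-envelope sketch (all figures invented for illustration): with many independent bursty sources, the odds of enough of them bursting at once to exceed the link fall off quickly.

    # Back-of-envelope statistical multiplexing: 50 sources, each bursting 10%
    # of the time at 100 Mb/s, sharing a 1 Gb/s link (illustrative numbers only).
    from math import comb

    n, p = 50, 0.10                              # sources, chance a source is bursting
    burst_mbps, link_mbps = 100, 1000
    fit = link_mbps // burst_mbps                # 10 simultaneous bursts fit in the link

    # P(more than 10 of the 50 sources burst at the same instant)
    p_overflow = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(fit + 1, n + 1))
    print(f"chance the offered load exceeds the link: {p_overflow:.4f}")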

> Regarding your comment on all possible forwarding decisions...
> how can this be true when it is possible to physically move IP addresses,

The forwarding tables are actually pretty static.
IP routes are advertised throughout the internet, and when they get far enough from the source, they are summarized (rather than getting separate routes to Boston & New York, everyone west of Buffalo gets told to head east on I-90 for all destinations in the Northeast). Since all forwarding is done hop by hop, any one router only needs to know how to get to a router which it knows is closer to the destination. So each router normally knows all it needs to know to route a packet anywhere in the world.
However, if a nearby link fails there is a race condition between the failure event and the forwarding table update, during which time, data could be misdirected.
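
To make the summarization point concrete, here is a toy longest-prefix-match lookup (the prefixes and next hops are made up, not real routes):

    # Toy longest-prefix match: one summary route covers a whole region, so a
    # distant router needs only a single entry for it.
    import ipaddress

    forwarding_table = {
        "0.0.0.0/0":      "upstream-default",
        "192.0.2.0/24":   "head-east-on-I90",   # summary covering the whole Northeast
        "192.0.2.128/25": "local-peering",      # a more specific route, where one exists
    }

    def lookup(dst):
        addr = ipaddress.ip_address(dst)
        best = max((ipaddress.ip_network(p) for p in forwarding_table
                    if addr in ipaddress.ip_network(p)),
                   key=lambda net: net.prefixlen)
        return forwarding_table[str(best)]

    print(lookup("192.0.2.200"))   # most specific match wins -> local-peering
    print(lookup("192.0.2.10"))    # falls into the /24 summary
    print(lookup("198.51.100.7"))  # everything else -> the default route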

Mobile devices usually have all their data tunneled through a common location, limiting their exposure to routing problems. I do not know enough about the cell network itself to comment on what pitfalls lie between the access network and the metro.

spelurker
12/5/2012 | 4:08:26 AM
re: QOS Fees Could Change Everything
This is a bit of a distraction from the main topic, but...

> The highest clock rate in transmission equipment and routers is the line rate of the interfaces.

This doesn't really apply. Packet-pushing boxes may run, say, 10Gb/s on their I/O, but almost always have a significantly faster switching fabric. (A POS interface is serial, but while a router's internals might run at only a 500MHz clock rate, they pass data 32 bits wide, giving them 16Gb/s of processing capability.)
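
The arithmetic in that parenthetical, spelled out (numbers are just the ones from the example):

    # Serial line rate vs. a slower-clocked but wider internal datapath.
    line_rate_gbps = 10
    clock_mhz, datapath_bits = 500, 32

    fabric_gbps = clock_mhz * 1e6 * datapath_bits / 1e9
    print(f"internal capacity: {fabric_gbps:.0f} Gb/s vs. {line_rate_gbps} Gb/s on the wire")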

> As long as the on-chip memory has been setup (either
> by recent previous packets to the same address, or via a
> connection set-up function) and the re-directing can be entirely
> done via H/W alone, the line rate can be maintained.
> Once S/W gets involved ... the actual data throughput falls below the line rate.

In practice, software does not need to get involved.
In a core box, forwarding is 100% hardware, and usually deterministic, because all possible forwarding decisions have been set up based on IP destination + policy constraints.
In an edge box, new connections generally get learned during the IP connection setup. Most protocols need to do some handshaking to start up and are moving slowly anyway. Especially TCP. By the time the IP sessions are ready to use their full bandwidth, the edge boxes are ready for them.
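
One way to picture that "learned during connection setup" behaviour is a flow cache populated by the first packet of a connection, after which everything is a fast-path hit. (This is only a sketch of the idea with a hypothetical structure, not how any particular edge box works.)

    # Sketch: the first packet of a flow takes the slow path and installs an entry;
    # subsequent packets hit the cache. Purely illustrative.
    flow_cache = {}                               # (src, dst, dport) -> egress port

    def slow_path_lookup(src, dst, dport):
        return hash((dst, dport)) % 16            # stand-in for routing + policy

    def forward(src, dst, dport):
        key = (src, dst, dport)
        if key not in flow_cache:                 # typically only the handshake packets
            flow_cache[key] = slow_path_lookup(src, dst, dport)
        return flow_cache[key]                    # bulk data rides the fast path

    forward("10.0.0.1", "203.0.113.5", 443)       # SYN installs the entry
    forward("10.0.0.1", "203.0.113.5", 443)       # data packets are cache hits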

So with today's IP boxes and LAN/WAN topologies, the effective line utilization gets MUCH higher than we were all taught was possible during the 10base-T days.
OldPOTS
12/5/2012 | 4:08:24 AM
re: QOS Fees Could Change Everything
Several routers and ethernet switches at the DSLAM/OLT are capable of operating at at least 10G. Most of these can do very deep packet inspection. A couple of years ago they were not yet at a price point ($xk) at which operators would place them as CPE on site. But these have now been produced in much larger volumes, and I assume have been cost-reduced. This is why operators must go after those $100/m customers that buy premium services.

BTW - These also have great capabilities to enforce SLAs, but you may want to place them where you have heavy traffic generators and let those 486s control/limit the rest. Those at the DSLAM/OLT can monitor/alarm when someone becomes a heavy traffic generator.

OldPOTS

PO
12/5/2012 | 4:08:24 AM
re: QOS Fees Could Change Everything
"People are allocated 20Kbps. But, it is not guaranteed. How much would it costs to make those 20Kbps GUARANTEED as opposed to only ALLOCATED?? My contention is it does not costs that much."

Others have addressed this, but the question still comes back. Let me take another stab at this.

We need to consider each direction of traffic flow in a few different scenarios.

Let's start with outbound TCP traffic from a subscriber, across a DSL link and the Greater Internet, to a high-capacity site in a completely uncongested network.

TCP is rate adaptive, and wants to find the upper limit to the bandwidth available. But what does this really mean? It means that packets will queue up to traverse a slow link, and will zip along other (faster) links. In our scenario, the slowest link is the DSL uplink, so packets will queue up at the DSL modem until its buffer overflows, indicating (roughly) that the maximum available end-to-end bandwidth is being used for that flow.

If we start a second TCP flow across that same congested link, they'll generally each get half of the available bandwidth. (Assuming the TCP stack is correctly implemented; there are some which do not behave as well as others.)
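
A crude model of that rate-adaptive behaviour, with two flows sharing one bottleneck (all constants invented): each flow ramps up until the combined load overflows the link, both back off, and over time they converge toward equal shares.

    # Toy AIMD model: additive increase until the shared bottleneck overflows,
    # then multiplicative decrease. Units and numbers are invented.
    link_capacity = 100                 # packets per RTT the bottleneck can carry
    windows = [60.0, 1.0]               # one established flow, one new flow

    for rtt in range(200):
        if sum(windows) > link_capacity:            # queue overflowed: both see loss
            windows = [w / 2 for w in windows]      # multiplicative decrease
        else:
            windows = [w + 1 for w in windows]      # additive increase

    total = sum(windows)
    print(f"shares after 200 RTTs: {windows[0]/total:.2f} / {windows[1]/total:.2f}")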

Now if we start to add a moderate level of congestion at various points along the paths of those TCP flows, we'll start to add additional points where packets might be queued (delayed), though not yet dropped. So although the round-trip time (RTT) might increase slightly, the bandwidth for the TCP flow isn't decreased.

Then if we add more congestion, we'll start to drop packets at some other point(s) along the path, with the most heavily congested point weighing the most in the determination of how much bandwidth the TCP session will rate-adjust to.

Somewhere between those last two scenarios, the network starts to take proactive action, using tools such as Random Early Discard (RED) to protect against something known as congestion collapse, or perhaps Explicit Congestion Notification (ECN). We won't worry about congestion collapse here, beyond knowing that RED is a good thing.
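
For the curious, the RED idea fits in a few lines (a sketch only; the thresholds are invented, not tuned values): as the average queue grows between a minimum and a maximum threshold, the drop probability ramps up from zero, so flows back off before the buffer actually fills.

    import random

    MIN_TH, MAX_TH, MAX_P = 20, 80, 0.10            # packets, packets, max drop probability

    def red_drop(avg_queue_len):
        """Decide whether an arriving packet should be dropped early."""
        if avg_queue_len < MIN_TH:
            return False                            # no congestion yet
        if avg_queue_len >= MAX_TH:
            return True                             # behave as if the queue were full
        p = MAX_P * (avg_queue_len - MIN_TH) / (MAX_TH - MIN_TH)
        return random.random() < p                  # probabilistic early drop

    for q in (10, 40, 70, 90):
        print(q, red_drop(q))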

Where is that congestion likely to occur? At a point in the network which the operator is economically incented to maintain at maximum occupancy: that is, the most expensive part of the network, whose cost the operator doesn't want to waste. Typically, that'll be the Network Access Point (NAP).

But what if there's "excess capacity" in the rest of the network, so that all the access traffic arrives at the NAP only to have some of it dropped there due to congestion? The carrier has wasted money transporting those packets across the network, and has possibly forced other packets to drop which otherwise could have been delivered.

So it's in the carrier's economic interest, and in the interest of the traffic, for the access devices to be congested (concentrated) at a level comparable to the congestion at the NAP.

(A further grace is often available in that transport bandwidth is typically symmetric, and there is typically a greater demand for download bandwidth, as DSL and cable link capacities are much higher in the downstream direction.)

And it's in their interest for the transport network lying between the access and the NAP not to be overly congested either: losing packets in that section of the network wastes access and NAP resources.

Let's turn our attention to the return path, to traffic arriving at a subscriber's DSL link from some far-away high-capacity site. In an uncongested network, the "skinny pipe" is still the downlink of the DSL head-end, and packets queue for transmission. Eventually the buffer overflows and TCP rate-adapts to that effective bandwidth.

Again, we start to add moderate congestion across the network and find the packets-in-flight queuing at different points along the path, without significantly changing the Bandwidth-Delay Product.
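
For reference, the Bandwidth-Delay Product is just bandwidth times round-trip time; with some illustrative DSL-era numbers:

    # Data "in flight" that the path can hold (illustrative numbers).
    link_bps = 1_500_000                 # say, a 1.5 Mb/s downstream DSL link
    rtt_s = 0.080                        # 80 ms round-trip time
    bdp_bytes = link_bps * rtt_s / 8
    print(f"BDP ~ {bdp_bytes:.0f} bytes in flight (about ten 1500-byte packets)")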

Add enough congestion and a new "choke-point" forms elsewhere: the effective bandwidth for that TCP flow is reduced, with excess capacity on the DSL link available to other TCP flows.

If we start to add additional TCP flows, however, they will each adjust their bandwidth accordingly, and the bandwidth of the first flow will be reduced.

But again we note that it is in the economic interests of the carrier to maintain the overall level of congestion at the DSL head-end at a comparable level to the congestion at the NAP: paying for buffers that are never used is wasteful, as is paying for capacity that is never used.

What about traffic that doesn't behave well? That doesn't rate-adapt? Well, quite simply, it gets crappy service: it'll experience excessive packet loss, and high delay. And it makes life worse for everyone else, too: it squeezes out other traffic which otherwise could have been successfully delivered.

Now, what about that "guarantee" of, say, 20Kbps? To what purpose? If the traffic will still rate-adapt down to a fair-share allocation based on other congestion points in the network, how much good would the guarantee be? And if the traffic doesn't rate adapt, it'll still be lost at those other congestion points.

And it raises the networking conundrum: which is better, 100 TCP flows across a 20Kbps link, or 1 TCP flow across that same 20Kbps link? Different scenarios will treat them differently, because the Internet embeds a certain concept of "fairness" into its traffic management, which percolates through TCP/IP (and UDP), and BGP routing, among other internet protocols.

Of course, networks typically have thousands of subscribers and multiple NAPs to choose from, but I would still stand by the concepts above for capacity management.

This doesn't say that QoS proposals are either a good thing or a bad thing: I've never made any such judgement. All it does is begin to outline the vast scope of issues which need to be addressed whenever the issue of QoS comes up.

I hope it helps.
PO
12/5/2012 | 4:08:24 AM
re: QOS Fees Could Change Everything
"Bottom line is that comparing telecom bandwidths with actual IP data throughput numbers is not a fair fight. Unless vendors put beefier (ie: faster than line rate) processors on their boxes transmission efficiencies will not significantly improve from what they are now, as far as I can see."

This is a different question, of course. But it's still missing significant context.

Packets have various headers, and a "content" (protocol data unit, or PDU) section. It's generally only parts of the headers that processors have to look at, so the question becomes one of packet rate (or, inversely, packet size at line rate). On the internet, you'll see a few popular sizes around 64 bytes, around 512 bytes, and around 1500 bytes. The most stress for the processor is all 64-byte packets, at line rate.
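
To put numbers on that worst case (assuming Ethernet framing, with the standard 8-byte preamble and 12-byte interframe gap per frame):

    # Packets per second at line rate for the popular sizes mentioned above.
    OVERHEAD = 8 + 12                       # preamble + interframe gap, in bytes

    def pps(line_gbps, frame_bytes):
        return line_gbps * 1e9 / ((frame_bytes + OVERHEAD) * 8)

    for size in (64, 512, 1500):
        print(f"{size:>5}-byte frames at 10 Gb/s: {pps(10, size)/1e6:6.2f} Mpps")
    # 64-byte frames come to roughly 14.9 Mpps -- the rate the processor must keep up with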

For lower line rates (10, 100Mbps) you can get cost-effective processors to do the work. For higher line rates, or more involved inspection capabilities, you would typically look to some sort of hardware assistance (FPGA, DSP, or similar solutions).

There has historically been a tension in equipment design around this point: you can overbuild your equipment to handle 64-byte packets at line rate, then never see such a scenario outside the lab; or you can build to cost targets and underperform in the lab evaluation.

Most vendors today will find the additional costs acceptable and will build to line rate.
PO
12/5/2012 | 4:08:24 AM
re: QOS Fees Could Change Everything
"So with today's IP boxes and LAN/WAN topologies, the effective line utilization gets MUCH higher than we were all taught was possible during the 10base-T days."

10Base-T was always capable of pretty close to line rate. (There's an inter-frame gap, but other than that you can go to town.) Even 10Base-2 was capable of getting near line rate.

What confused a lot of people was a theoretical study showing that the upper limit on utilization could fall as low as 1/e (about 37%) in the worst case. A lot of people misread that, and assumed this was a best-case saturation point.

That myth was put to bed in 1988, with a report from DEC's Western Research Labs (DEC-WRL 88/4; Google). And that was still in the days of CSMA/CD: very few people use hubs today, so the topology has changed to almost exclusively a point-to-point model.
BigBrother
12/5/2012 | 4:08:23 AM
re: QOS Fees Could Change Everything
There are many network processors out there that can take in 10G traffic and split it into smaller pieces; that is why Intel is also getting into the NP game. Also, a lot of the boxes that do packet inspection with a software and FPGA combination can achieve close to full line rate in most cases, except the extreme 64-byte case, but then most flows are bigger than 64 bytes. Non-software solutions are faster, but they have their limits: changes are slow and more expensive.
sgan201
12/5/2012 | 4:08:22 AM
re: QOS Fees Could Change Everything
PO,

1) I think you missed a very key point of my proposal. What I am suggesting is per-user/subscriber-level queueing and traffic management. During network congestion, each user/subscriber is traffic-shaped to their guaranteed rate, and each subscriber has its own queue/buffer. So people who substantially overuse their 20Kbps will have their buffer overrun and lose a lot of packets. Meanwhile, a person who only uses 20Kbps will not lose any packets, even under severe network congestion. This is what I call fairness from the customer standpoint: why should a well-behaved customer suffer from the abusive behavior of some other customers? (A rough sketch of this follows below.)

2) The major difference between a SOHO user and a consumer user is that a SOHO user would like a guaranteed level of bandwidth so that they can do their job/business at all times.

3) The amount of per-user/subscriber/connection buffering required for this is not high at all. And it has been implemented on some low-cost boxes.
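
A minimal sketch of the per-subscriber shaping described in point 1 (a simple token bucket per subscriber; the rate and bucket depth are invented):

    import time

    class SubscriberShaper:
        """Shape one subscriber to a guaranteed rate with a small burst allowance."""
        def __init__(self, rate_bps=20_000, burst_bytes=4_000):
            self.rate = rate_bps / 8             # guaranteed rate in bytes/second
            self.burst = burst_bytes             # per-subscriber bucket depth
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def admit(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:      # within the guarantee: forward
                self.tokens -= packet_bytes
                return True
            return False                         # over the guarantee: queue or drop

    shapers = {}                                 # one bucket per subscriber
    def handle(subscriber_id, packet_bytes):
        bucket = shapers.setdefault(subscriber_id, SubscriberShaper())
        return bucket.admit(packet_bytes)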

Dreamer

Mark Seery
12/5/2012 | 4:08:21 AM
re: QOS Fees Could Change Everything
Steve,

Important improvements in Ethernet efficiency / performance have included:

-moving from Manchester encoding to 4B/5B and 8B/10B (which you may be familiar with).

-moving from many devices per logical segment (hub or cable) to two (for example, point to point)

-the addition of full-duplex operation (not present in early versions of Ethernet)

-increase in speed of course

-cut through switching (switching before entire frame received).

So it was not simply enough to go to point-to-point links; the addition of a full-duplex mode was vitally important to getting to the kind of performance you are used to seeing. Manchester encoding was very inefficient, which did not matter so much on coax but did on twisted pair (I have been told). As late as the mid-90s I was using Ethernet networks that were only getting 30% utilization (I know this from personal verification with analyzers) due to collision recovery and many users on the same segment; but though you might read about such things, most people consider this so far in the distant past that your original comments did not resonate.

If you are using a full-duplex Ethernet link to a customer's switch or router, you should see pretty good performance. Also note that the minimum frame size on Ethernet is 64 bytes, so 40-byte TCP acks (a common source of small packets) have to be slightly padded out (after adding the IP header), so there is a small amount of inefficiency there in addition to the previously mentioned interframe gap (which is very small at high speeds). This small amount of inefficiency may also impact the transfer of IP packets from a line card to a switching fabric when using a fixed-size cell (for example 64 bytes).

Some designs will exhibit corner cases around many small packets, and hence "hero tests", while unrealistic, test for this corner case (smallest possible packet size at wire speed on as many interfaces as possible). There are many different packet sizes on a typical network, so while TCP acks are not infrequent, they are not so frequent as to make a test consisting of just this packet type a reflection of real-world conditions. Some people refer to something called an "IMIX", which comes in simple forms (a few packet sizes) or more complex forms with a greater variety of packet sizes.
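
As a rough illustration of how much those fixed per-frame costs (padding to 64 bytes, preamble, interframe gap) matter at different packet sizes:

    # Share of wire time that is actual IP payload, per Ethernet frame.
    # Frame = max(IP packet + 18 bytes of Ethernet header/FCS, 64) + 8 preamble + 12 IFG.
    def efficiency(ip_bytes):
        frame = max(ip_bytes + 18, 64) + 8 + 12
        return ip_bytes / frame

    for ip_bytes in (40, 64, 512, 1500):         # from a bare TCP ack to a full-size packet
        print(f"{ip_bytes:>5}-byte IP packet: {efficiency(ip_bytes):.0%} payload on the wire")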

Which is all just a long-winded way of saying that many Ethernet-switched and IP-routed networks today exhibit low latency, low jitter, and high utilization. (Note that some theory suggests that utilization over ~70% will lead to queues becoming deeper, and hence capacity managers may plan to avoid this; compare and contrast this to a TDM or ATM VC network, where capacity goes unused because each individual channel is not fully utilized, from an offered-payload perspective, by its client signal/traffic demand.)
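
That "over 70%" rule of thumb comes from simple queueing behaviour: in an M/M/1-style model the average occupancy grows as rho/(1-rho), so it turns steep quickly past about 70% utilization (a textbook approximation, not a measurement of any real network).

    # Average occupancy in a simple M/M/1 model as utilization (rho) rises.
    def mean_queue(rho):
        return rho / (1 - rho)

    for rho in (0.5, 0.7, 0.9, 0.95, 0.99):
        print(f"utilization {rho:.0%}: ~{mean_queue(rho):5.1f} packets queued on average")
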
stephencooke
12/5/2012 | 4:08:21 AM
re: QOS Fees Could Change Everything
Thanks for this Mark, and others. It is appreciated.

Steve.

