Comments
OldPOTS
12/5/2012 | 3:35:17 PM
re: FCC's Martin Is Ready to Pounce
seven,

My point was to let the application's interface signal the BW needed for its communication. The signaling interface itself is irrelevant, other than that you need one that accepts all the needed parameters, including QoS parameters. Initially, certain kinds of IP QoS might work as long as congestion is only infrequently encountered. But as technology/cost limits BW, better fair-sharing methods are needed. Those must be described not only by TM methods that allocate dedicated and shared BW, but by ones that identify the traffic characteristics (best effort, delay sensitivity, or cell/packet loss sensitivity).
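A minimal sketch of what such a signaling parameter set might look like. All names and fields here are illustrative assumptions, not any actual signaling standard:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class TrafficClass(Enum):
    BEST_EFFORT = "best-effort"          # no guarantees
    DELAY_SENSITIVE = "delay-sensitive"  # e.g. voice, interactive video
    LOSS_SENSITIVE = "loss-sensitive"    # e.g. bulk data that cannot tolerate drops

@dataclass
class BandwidthRequest:
    """Hypothetical parameter set an application hands to the signaling interface."""
    peak_rate_kbps: int                   # dedicated BW ceiling
    sustained_rate_kbps: int              # shared / average BW
    traffic_class: TrafficClass           # traffic characteristics, per the post
    max_delay_ms: Optional[float] = None  # only for delay-sensitive traffic
    max_loss_ratio: Optional[float] = None
```

Each node along the path would accept or reject the request against its reserved and shared capacity.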

Note that to fairly share the network's BW in a network experiencing congestion, each node in the network must reserve resources (switching and queueing methods and capacity [buffering]) to support the total of the differing traffic types passing through that node. That includes shared and dedicated resources.

OP

PS - I liked PNNI because it used routing protocols to search for the most efficient network path(s) (connectionless) while signaling SVCs/SPVCs from the application, which could use/request both best-effort and real-time video, and would cause call resource acceptance at each node in the path.
sgan201
12/5/2012 | 3:35:16 PM
re: FCC's Martin Is Ready to Pounce
All,

The perfect is the enemy of the good.
-- Voltaire

Yes, I know ATM. And I love the deterministic nature of ATM. But if you have a network with only one congestion point, where everywhere else it is cheaper to upgrade bandwidth by adding more hardware than to manage it, why work so hard? Many networks are like that.

For FTTH / DSL / cable modem / 3G / 4G access networks, doesn't the network behave this way?

Dreamer

fgoldstein
12/5/2012 | 3:35:16 PM
re: FCC's Martin Is Ready to Pounce
Oldpots,

Thanks for the interesting post. I started working on ATM back at T1D1, years before Cisco got involved (they were still a startup). The assumption at the time was that it would be SVCs; Q.2931 (code name Q.93b) was the "interim" signaling but a "transaction-based" (more like TCAP than ISUP) protocol might eventually replace it. As you point out, by the time the Forum took the lead (five years later, around 1991), Cisco and the fundies saw ATM as a PVC data link below IP, period, just a way to provision links.

However, the other ATM killer was billing. The Internet "happened" -- became public -- just as the ATM boom was under way, and the "usage is free" model beat the telecom industry's business model, one in which a workable billing paradigm for ATM may never have been achievable.

But technically, you're describing something quite reasonable. I've written it up as a "switched technology network" (STN), a parallel development to "internets" (connectionless packet networks, including DECnet and IPX). STNs allow the application to request whatever it wants, by QoS and capacity. I was of course a fan of ABR for data, which fits the Internet business model into STNs best: the network tells the user what's available, so loss is negligible. BTW, orange-hose Ethernet worked that way; carrier sense was a form of flow control, and it rarely lost packets. IP and switched "Ethernet" (the only kind for the past 15 years or so) are packet-discard, non-flow-controlled networks.

And yes, dreamer's repetitive posts demonstrate a lack of understanding of the subtleties of the problem. Congestion management is not trivial at all, and should not be left to politicians or ideologues.
stephencooke
12/5/2012 | 3:35:15 PM
re: FCC's Martin Is Ready to Pounce
Hi Dreamer,

I am all for simple solutions to complex problems, but I can't see yours yet. Here is a simple & extreme example (one that is happening quite a lot these days, if you believe what many carriers are saying):

Your network has a core & a bunch of BRASes in the metro, plus some links from the core to other carriers. Initially the network is operating well with no congestion. Then a single user requests the entire porn library of Serbia in HD via BitTorrent (I have no idea if this exists, but you get the point). The requests themselves are not too large and get past any buffering, as the network is not congested. Then a flood of traffic hits every access point to the core from other carriers, and possibly some locals as well, and overwhelms the network core.

As I understand your implementation, the core will signal to all the BRASes that it is congested, and buffers will be enabled for every user in the network -- upstream & downstream. Huge amounts of traffic from other carriers, and possibly the buffered traffic of a few locals, will hammer the core continuously but will be dumped at the BRAS that faces the offending user as his buffer overflows. The rest of the users of your network cannot get any traffic through the core, as it is overwhelmed. How did buffers in the BRAS help this situation for the rest of the network's users?

Thanks,

Steve.
fgoldstein
12/5/2012 | 3:35:15 PM
re: FCC's Martin Is Ready to Pounce
Dreamer, there are multiple congestion points. On cable, there's the upstream, there's the CMTS output, there's sometimes an aggregation router between head ends, and there's the backbone. Every hop can congest. You can move bottlenecks, but they are an inherent part of the connectionless architecture.

There is no one agreed-upon meaning of "fairness". A policy manager would be able to enforce different options.
OldPOTS
12/5/2012 | 3:35:15 PM
re: FCC's Martin Is Ready to Pounce
fred,

Billing is do-able, but--

I ran a multinode "Fast Packet" network (a precursor to FR & ATM) that operated much like ATM except it had quickly routed PVCs (voice and data), plus a ~50-node async mux network (56 kb/s) that auto re-routed upon single copper T1 errors.

When we merged the async network onto the Fast Packet network, along with the smaller IP traffic and voice (Class 1 & 2) traffic, we created a billing system to 'fairly' allocate those shared network costs. Billing was actually based on dedicated and shared BW, with pricing for priority congestion handling and priority re-route. All of it was billable, based on async megabyte counts and on the administrative establishment of each FP "switched" circuit -- about 20 per day, at the various priorities required.

That was before Windows on PCs and 10-meg discs -- only microprocessors and IBM. So I don't think billing is impossible, only that billing should be simple and should evolve to meet needs as congestion makes it necessary. We did, with user guidance, simplify the billing system over time, as users learned to share costs rather than deal with the large, complex billing reports.

I remember user meetings with Bell Labs (carrier sponsored) where they were surprised at the amount of data that would be required to do all the ATM accounting. But I now have that much speed and disc space on my smallest PC.

BTW, there was an overkill ATM billing system devised on paper by Bell Labs, and some development of proposed standards. I'm not sure whether it ever became part of the ATM Forum's network management standards.

But I know that the Frame Relay carriers/vendors did develop billing systems for constrained FR SVCs, billed by access link speed and a choice of ~10 SPVC speeds from a list. The core networks had more BW than the sum of the access BW.

OP

I was once privy to a Tier 1 carrier's month-long, successful test of a multinode (>9 nodes) PNNI network for SVC/SPVC error and congestion handling and accounting retrieval, where there were many skeptics and anxious vendors.
paolo.franzoi
12/5/2012 | 3:35:15 PM
re: FCC's Martin Is Ready to Pounce

dreamer,

How are you solving the flood into your core from OTHER providers? You have not discussed your solution for that.

seven
fgoldstein
12/5/2012 | 3:35:14 PM
re: FCC's Martin Is Ready to Pounce
Some basics on congestion...

In an IP network, intermediate systems (routers) normally do not need to keep track of flows. They have absolutely no way to determine "fairness" other than treating all packets equally. Some exceptions occur when QoS bits are allowed, but those are not the norm on the backbone or between ISPs. At a peering point, there could be many thousands of flows at a time.

Routers in such cases generally maintain one buffer per output. When the buffer's full, the router does what it has to -- discard packets until there's room. It doesn't worry about whose packet gets discarded; it's connectionless, after all, and the middle is supposed to act "stupid".
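The tail-drop behavior described above can be sketched in a few lines (a toy model, not any router's actual implementation):

```python
from collections import deque

class TailDropQueue:
    """One shared output buffer: FIFO service, newest packet dropped when full."""
    def __init__(self, capacity_pkts):
        self.capacity = capacity_pkts
        self.q = deque()
        self.dropped = 0

    def enqueue(self, pkt):
        if len(self.q) >= self.capacity:
            self.dropped += 1   # tail drop: no notion of whose packet this is
            return False
        self.q.append(pkt)
        return True

    def dequeue(self):
        return self.q.popleft() if self.q else None
```

Note there is no per-flow state anywhere: the queue cannot be "fair" because it cannot tell flows apart.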

TCP is supposed to adapt to that. Seeing a lost packet, it knows the network ran out of buffer at *some* spot along the way, but it has no idea where. So it cuts its window down, to as little as one packet. ANY hop can congest. If everybody ran TCP, it'd work. Streaming, however, is typically UDP, which doesn't slow down, so it takes proportionately more resources.

BitTorrent gets a bit more "clever". While it uses TCP, it opens multiple sessions, so when one is congested the others keep going, and even a window size of one is multiplied by the number of sessions. So that user gets more.
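The arithmetic behind that point: even with every session forced down to a window of one packet, aggregate throughput scales with the session count. The numbers below (1500-byte MSS, 100 ms RTT, 50 sessions) are illustrative assumptions:

```python
def throughput_bps(sessions, window_pkts, mss_bytes, rtt_s):
    """Aggregate TCP throughput: each session sends window_pkts per RTT."""
    return sessions * window_pkts * mss_bytes * 8 / rtt_s

single = throughput_bps(1, 1, 1500, 0.1)   # one congested flow, window of 1
swarm = throughput_bps(50, 1, 1500, 0.1)   # 50 parallel sessions, same window
```

The swarm gets fifty times the single flow's share at the bottleneck, which is exactly the per-flow "fairness" loophole being described.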

NAT adds another wrinkle. It takes multiple users and puts them behind one address, so that they look, to intermediate systems, like one user. Some ISPs NAT their subscribers, so multiple subscribers look like one. IP-address-based "fairness" breaks such systems and creates more demand for IP addresses.

IPv6 does NOT solve that, since it only works when everything is IPv6, and thus parallel addresses will be required for at least a decade -- which is a good reason to try again and do something better than IPv6 (tastes bad, more filling).
sgan201
12/5/2012 | 3:35:14 PM
re: FCC's Martin Is Ready to Pounce
Stephencooke,

Let's take your example. Congestion happens. At the point where the congestion is happening, traffic shaping is activated in proportion to each user's subscribed bandwidth. A fixed buffer size per user is used. All users still get their fair share of bandwidth. The abuser gets most of his frames thrown away. TCP's congestion algorithm kicks in for him and forces him to slow down. Meanwhile, users doing their normal web browsing are unaffected. This probably kicks in within 500 ms.

In a typical DSL / cable / FiOS / 3G / 4G network, there is a place that is over-subscribed 200 times, and that is where you enforce this, given that it is where the congestion will happen first. This enforcement provides feedback that shuts down the flow of the abuser's traffic.

Dreamer
fgoldstein
12/5/2012 | 3:35:13 PM
re: FCC's Martin Is Ready to Pounce
dreamer, you're missing the point. Routers do NOT have buffers per user!

A router at the backbone doesn't care about users. It looks at the destination and shoves the packet into the ONE output buffer going towards the next hop. Since there are thousands, conceivably millions, of parallel TCP streams passing through a core router, you'd be positing thousands of buffers. Hmmm, that's megabytes of buffer. How many seconds of latency per hop will you tolerate? Packets slower than letter post?
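The back-of-envelope arithmetic behind that objection, with assumed numbers (10,000 flows, 64 KB per flow, one 10 Gb/s output link):

```python
flows = 10_000            # concurrent TCP streams through one core router (assumed)
buf_per_flow = 64 * 1024  # 64 KB of buffer per flow (assumed)
link_bps = 10e9           # 10 Gb/s output link (assumed)

total_buffer_bytes = flows * buf_per_flow              # ~655 MB of buffer memory
worst_case_drain_s = total_buffer_bytes * 8 / link_bps # time to drain a full buffer
```

Even this modest flow count yields over half a second of worst-case queueing delay at a single hop, before counting the memory cost.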

ATM does CBR with very small reserved buffers, typically eight cells per VC. That works because the input is very steady, not bursty, and CBR traffic never gets stuck behind bursty traffic; cells are interleaved. It does UBR with common buffers and no per-VC reservations. IP has longer packets that arrive in bursts. It's a whole different animal.
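For contrast, the same arithmetic for the ATM CBR case: eight 53-byte cells per VC is a tiny reservation, and it drains in microseconds at an assumed OC-3c payload rate:

```python
CELL_BYTES = 53           # fixed ATM cell size
cells_per_vc = 8          # typical per-VC CBR reservation, per the post
line_rate_bps = 149.76e6  # OC-3c ATM payload rate (assumed)

buffer_bytes = cells_per_vc * CELL_BYTES                   # 424 bytes per VC
drain_time_us = buffer_bytes * 8 / line_rate_bps * 1e6     # ~23 microseconds
```

That microsecond-scale, bounded delay is what makes small reserved buffers workable for steady CBR input and unworkable for bursty IP.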

This is like the IMS fallacy: trying to force IP to do something it is not suited for, simply because PHBs and know-nothings on Wall St. assume that if it's eye pee, it has to be good.