Comments
fgoldstein
12/5/2012 | 3:35:06 PM
re: FCC's Martin Is Ready to Pounce
RJ, let me try to explain it, though frankly I think Stephen is doing a good job.

P2P apps have both upload (file servers, usually in violation of your ToS, but we can't be bothered by technicalities like that) and download functions. A BRAS chokes traffic at one point a couple of hops back from the subscriber. (And BRAS is usually an ILEC DSL construct; cable doesn't always use it.)

So let's say I'm running BitHog on my PeeCee. I request the new Batman movie. Bits of it are scattered among many users, so my client opens connections to 25 of them. Now those users, from Pakistan to Paraguay to Paramus, are throwing bits at me. Yes, my TCP will slow down if packets are lost, but I've got 25 TCPs in parallel, so I'm keeping up the average rate!

The load on the global Internet is thus not only at my ISP's network, but at those people's networks, on the backbone, on the links from my ISP's BRAS to the backbone, and past the BRAS. The BRAS, assuming one is there, will be getting hit by lots of packets from afar. It may throw most of them out, but even in slow-start mode, 25 parallel connections is a lot of traffic.

And upstream is bad for a different reason. Yes, I can be throttled, but doing so means that my oversubscription rate is limited, and thus my burst speed is constrained. When I join the BitTornado CDN seeding the latest RadioFoot album or Passaic Chainsaw Massacre HD video, I'm within my burst rate, but (especially on DOCSIS) upstream is limited so I'm displacing other applications.

Don't assume, btw, that a clever programmer won't work around TCP's slow-start. There's always UDP, or a buggered TCP. The 1337 crowd will quickly sing the praises of something that makes them get more speed.
paolo.franzoi
12/5/2012 | 3:35:05 PM
re: FCC's Martin Is Ready to Pounce

rjs/dreamer,

The others have hit on this but where you are missing things I think is the assumption of symmetry in the connections and storage. There isn't any.

The whole idea of P2P is to have many sessions running simultaneously, with lots of servers sending you information at once. If I use World of Warcraft as an example, I get the download from Blizzard's host, BUT I also connect to anywhere from 15 - 50 other folks, sending and receiving traffic. As I have said, I have looked and they are not near me on the IP address front. So, they are all sending traffic to me at relatively modest rates, BUT the aggregate is big. Now the issue is that today's network was designed for transaction-oriented web traffic: click and load at human speeds.

So, even if my downloads are not massive in and of themselves, the network is designed for me to occupy it very lightly. Metro networks run at 100s-to-1 oversubscription. Core networks run at 1000s-to-1 oversubscription. So even 1% of customers using P2P blow up the model for EVERYONE.

Now, personally I think this is the fault of the business model and the network design. But it is the situation today and limiting traffic at the edge alone will not solve it.

seven
OldPOTS
12/5/2012 | 3:35:05 PM
re: FCC's Martin Is Ready to Pounce
Fred's #64 nails it.
"..there are all sorts of theoretical ways to aggregate things. Whether they are effective when one user or application process is actively trying to circumvent them is a different question."

And the aggregations are old and patched...
"Networking technology is in a serious rut, stuck trying to coax more and more out of 30-year-old "good enough" IP." And the 20 year old ATM was designed to fix many of these problems but also over-corrected by committee. And then came MPLS and follow on overlays.

But the problem still exists without a BW-grabber limiter. Fred was right that even a good replacement for any of these can be circumvented by a user's rogue application process. So are FCC application police necessary? Maybe DPI is the policing mechanism %^&*(#$%^&*()_?

I think only a simple billing approach for application usage solves the problem, similar to what Frame Relay did - access speed plus packets sent/received. This could be tiered, much like cell phones. That way either the heavy user's pocketbook self-reduces the traffic (rjs's pee cee) or the network operators make enough $$$ profit to obtain the BW and nodes to support that fetish quest for parallel speed.

OP
nodak
12/5/2012 | 3:35:04 PM
re: FCC's Martin Is Ready to Pounce
After reading many of the more recent posts, this discussion has taken an interesting turn. After all of the discussion in the past of "fraudband" connections provided today, it seems this discussion is pointing towards the congestion in the core networks (most posts) and not at the edge (mostly dreamer).

My only involvement in this area is limited to reading articles and posts here, so forgive my ignorance on all of this. How will Deep Packet Inspection stop congestion on the core any better than Dreamer's proposal of creating buffers? It seems to me core congestion is still going to happen for at least brief periods of time until the DPI software resets the P2P connection. It would seem you would need a consistent monitoring policy across the entire Internet to help reduce core congestion. Each ISP coming up with its own solution is not going to work.

As much as I do not like the idea, I think the only reasonable way to handle this and maintain some privacy is to go towards a cell phone model where you buy so much access per month and pay a large sum if you go over. The only real question then becomes: do the network operators use that money to try and improve their networks and reduce the oversubscription rates, or do they pocket it and ignore the problem? This still does not solve the core capacity issue, since there does not seem to be any motivation for them to upgrade (who is paying for it?).

Just some thoughts from an IP ignorant optical engineer.
sgan201
12/5/2012 | 3:35:04 PM
re: FCC's Martin Is Ready to Pounce
Seven,

1) Obviously, you know your network better than we do. But, conceptually, this limits the user down to the network's sustainable rate when congestion happens (a rough sketch of what I mean is below). This gives other users a fair chance to send their traffic. That has got to be better than what you have now.

2) The network that I am looking at has no congestion at the core. Ditto on the metro network. It can have congestion on the access side. So, it could work in this network.

Dreamer
fgoldstein
12/5/2012 | 3:35:03 PM
re: FCC's Martin Is Ready to Pounce
Nodak, good post. WRT one question you asked,

> How will Deep Packet Inspection stop congestion on the core any better than Dreamer's proposal of creating buffers?

What DPI is potentially used for here is blocking or partially jamming applications that are considered, in general, to be capacity hogs. It's a generalization, of course, but in Comcast's case, a Sandvine box was jamming BitTorrent seeding because seeding, in general, is hard on the network. It's a surrogate for worrying about buffers per se. AT&T Mobility is more blunt; they disallow all P2P. Verizon Wireless is blunter still: They permit web browsing and email; all else is forbidden. When you replace the Internet with a walled garden of known low-cost-to-provide applications, you do conserve resources, albeit at a massive loss of value.

So yes, DPI can do the job; however, to quote R. Milhous Nixon, "but it would be wrong".

And the Martin/Ammori Freak Show is a fraud of Nixonian proportions.
stephencooke
12/5/2012 | 3:35:03 PM
re: FCC's Martin Is Ready to Pounce
Hi RJS, Dreamer,

It appears that you are assuming a session-based (or user-based) feedback mechanism, activated when these user buffers overflow in the access network, that would slow down the P2P servers, wherever they are...? If this is the case, why isn't this mechanism activated when the DSL or DOCSIS access pipe to that user is overloaded (i.e., without the buffers)?

Steve.
nodak
12/5/2012 | 3:35:02 PM
re: FCC's Martin Is Ready to Pounce
Thanks for the response, but if Comcast is blocking the packet request or sending a reset, does that stop the far-end user's packets from traveling the core? If the user on Comcast keeps trying to initiate the download, will this not still tie up core bandwidth for at least a few seconds each time?
fgoldstein
12/5/2012 | 3:35:01 PM
re: FCC's Martin Is Ready to Pounce
The reset is sent to the distant user, the same way a modem RAS might disconnect sessions in progress when the modem hangs up. So the far end does normally stop sending packets for that TCP stream. Comcast was jamming uploads, not downloads, so the process was being initiated by the non-Comcast user. The Comcast user was, by seeding, making known that it had certain material that could be requested.

What we now call P2P is not really new. Back around, oh, 1970, the ARPAnet was doing file transfer. Early-1970s FTP, which is the kind we still use (complete with support for the BBN Pluribus TIP print server, in the form of IP addresses in the application layer), allowed users to peruse remote directories and fetch files. P2P as a meme basically goes back to Napster, which added a search engine to the mix. So a Napster application (basically an FTP client and server) proactively uploads its music directories (not just filenames) to a search engine, which the clients in turn query.

BitTorrent goes two better by distributing the search engines (thus harder to shut down on infringement grounds) and by parallelizing the file transfer (multiple chunks in separate FTP sessions).
rjmcmahon
12/5/2012 | 3:35:01 PM
re: FCC's Martin Is Ready to Pounce
re: "After all of the discussion in the past of "fraudband" connections provided today, it seems this discussion is pointing towards the congestion in the core networks (most post) and not at the edge (mostly dreamer)."

I measure 120 Mb/s TCP throughput between two internet hosts which are separated by 2000 miles and nine L3 hops. The cost of these hosts is $21 per month. I then measure my access TCP throughput to the closest host and get 0.5 Mb/s. This is a distance of 20 miles and seven L3 hops. The cost of access is $67 per month (including taxes, etc.). Over the last decade, the former has been increasing throughput while prices have been decreasing. The latter has been stagnant. It's the folks that control the latter that are making the claims about a need to manage scarcity rather than increase capacity. It's also the latter that have a direct conflict of interest between their primary business, delivering video or overpriced TDM circuits, and providing more bandwidth at a price that reflects a functioning market, aka the marginal cost to provide the service.

You be the judge of where the bottleneck is. This discussion got way off track a long time ago.

