Policy + charging

'Up To' No Good

11:10 AM -- Two little words are undermining the credibility of broadband service providers.

"Up to."

Those words have been part of virtually every broadband service sold over the last 15 years, which is about as long as broadband has been around. Whether they refer to available bandwidth from a cable modem service, which shares bandwidth among subscribers within a given area, or the variable-rate DSL offering from a telco, the reality of broadband is that virtually no one has guaranteed download and upload speeds.

Instead, they have sold a service that delivers "up to" a certain speed.

Now UK regulator the Office of Communications (Ofcom) is complaining about the "up to" reality of broadband services there, having apparently just noticed that broadband consumers aren't necessarily getting all the throughput for which they are paying. (See ISPs Shamed by UK Broadband Speed Tests.)

Certainly, research that shows a stunning gap between advertised bandwidth and what DSL providers actually deliver should be a wakeup call for UK consumers. But I suspect many who are paying for a 24-Mbit/s package and only getting 6.5 megs are already maxing out their PC's ability to use the connection -- the real scam is in convincing consumers that 24 megs of throughput actually does them any good.

Maybe what is needed is a marketing crackdown that limits how inflated "up to" claims can be. The alternative is to require broadband service providers to deliver a more guaranteed data rate, and that can be a tricky proposition.

Most broadband services involve shared resources of some kind, and the availability of those resources will always vary with the number of active users and the type of use. Thus the available bandwidth will also vary, and the best approach remains to deliver all that is possible at a given time, which today's variable rate services already do.

There are ways to offer guaranteed services. A pricing scale can be devised, and enforced via policy controls, that lets consumers pay for exactly the bandwidth they want. Those who want a higher guaranteed throughput can simply pay more and get it.
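To make that concrete, here is a minimal sketch of what such a policy-enforced tier menu might look like. The tier names, guaranteed floors, best-effort ceilings and prices are all invented for illustration; no real operator's price list is implied.

```python
# Hypothetical tier menu: every name, rate and price below is invented.
# The point is only that a guaranteed floor can be priced separately from
# the best-effort ceiling that "up to" marketing talks about.
TIERS = [
    # (name, guaranteed floor Mbit/s, best-effort ceiling Mbit/s, monthly price $)
    ("Basic",    2, 10, 25.00),
    ("Standard", 6, 25, 40.00),
    ("Premium", 15, 50, 70.00),
]

def cheapest_tier_for(required_floor_mbps):
    """Return the cheapest tier whose *guaranteed* rate meets the requirement."""
    candidates = [t for t in TIERS if t[1] >= required_floor_mbps]
    return min(candidates, key=lambda t: t[3]) if candidates else None

if __name__ == "__main__":
    tier = cheapest_tier_for(5)  # e.g. a subscriber who wants a 5-Mbit/s floor
    print(tier)                  # -> ('Standard', 6, 25, 40.0)
```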

In the US, that kind of pricing scheme is likely to generate howls of protest from Internet users who simply want more bandwidth without the higher cost.

So what's a broadband service provider to do?

Combining tiered offerings with the ability to get greater data throughput when it's available is one approach that makes sense, but the telecom industry will have to do a serious sales job to convince regulators and consumers of the value of such an offering.

Tiered data plans are likely to gain traction first in the wireless world, where AT&T Inc. (NYSE: T) has already boldly ventured forth with data tiers, and others are likely to follow suit. Will they ever fly on a mass-market basis for wireline services?

I think that's still "up to" debate.

— Carol Wilson, Chief Editor, Events, Light Reading

Jeff Baumgartner 12/5/2012 | 4:28:41 PM
re: 'Up To' No Good

This post reminded me that I haven't checked my broadband usage in a while, so I was curious to get an update now that we're doing a lot more Netflix streaming in this household.  Comcast uses a bit counter to verify usage, and I'm still well under the 250-gig ceiling they've set on "excessive use" -- I'm at 32 GB so far in July, so I've used only about 13% of my allotment before they start to crack down.  That's up from 24 GB in May and 27 GB in April, so the cap is pretty generous based on my current needs and usage levels.


But the bit counter doesn't factor in the important questions raised here, such as average speeds and how they relate to the advertised maximum speeds. Broadband ISPs will have to be careful applying metering because, theoretically, the faster the tier you sign up for, the faster the meter will run. JB
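Jeff's closing point is easy to quantify with back-of-the-envelope arithmetic. In the sketch below, only the 250-GB cap comes from his comment; the tier speeds are illustrative.

```python
# How quickly sustained full-rate use would exhaust a 250-GB monthly cap.
# The 250-GB figure comes from the comment above; the tier speeds are examples.
CAP_GB = 250

def hours_to_cap(rate_mbps, cap_gb=CAP_GB):
    cap_megabits = cap_gb * 1000 * 8          # treating 1 GB as 1000 MB for simplicity
    return cap_megabits / rate_mbps / 3600    # seconds -> hours

for mbps in (6.5, 24, 100):
    print(f"{mbps:>5} Mbit/s: cap reached after ~{hours_to_cap(mbps):.0f} hours of sustained use")
```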

cnwedit 12/5/2012 | 4:28:41 PM
re: 'Up To' No Good

Interesting perspective - I hadn't thought things through quite like that.


Part of the problem with our theoretical discussions is that very few people know how much data they actually consume, so pay-per-use after the fact might damage broadband ISP revenues and could also create bill shock for heavy users.


I do think the industry needs some kind of level-set for usage. In addition, service providers need to be preparing for a world in which they can offer service packages based on actual consumption that factor in time-of-day and peak-load pricing.


Then they need to give consumers options for occasionally exceeding their limits for a fee.


In other words, we need pricing plans that are the exact opposite of the all-you-can-eat prepaid broadband plans of today.
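A toy model of the kind of plan being described here -- consumption-based, with a time-of-day discount and a paid overage option -- might look like the sketch below. Every number in it (the included volume, fees, off-peak window and weighting) is invented purely for illustration.

```python
# Toy consumption-based plan: off-peak bytes count at half weight, and usage
# beyond the included volume is billed per GB. All figures are invented.
INCLUDED_GB     = 100.0
MONTHLY_FEE     = 30.00
OVERAGE_PER_GB  = 1.50
OFF_PEAK_HOURS  = set(range(0, 6))   # midnight to 6 a.m.
OFF_PEAK_WEIGHT = 0.5

def monthly_bill(usage_by_hour_gb):
    """usage_by_hour_gb maps hour of day (0-23) to GB consumed in that hour over the month."""
    weighted = sum(gb * (OFF_PEAK_WEIGHT if hour in OFF_PEAK_HOURS else 1.0)
                   for hour, gb in usage_by_hour_gb.items())
    overage = max(0.0, weighted - INCLUDED_GB)
    return MONTHLY_FEE + overage * OVERAGE_PER_GB

# A subscriber who shifts 40 GB to the small hours pays less than one who doesn't.
print(monthly_bill({2: 40, 20: 90}))   # 45.0
print(monthly_bill({20: 130}))         # 75.0
```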

karsal 12/5/2012 | 4:28:41 PM
re: 'Up To' No Good

Hi Carol,


Very nice article, thanks for it. I have been thinking about this pricing issue for some time, so I wanted to give my two cents on it. Sorry if it runs long.


I agree with you that guaranteed speeds do not pose a severe pricing problem, and yes: tiering can solve it. After all, the duration is known, the speed is known, and the load to be carried can be known, so pricing is easy. What makes unguaranteed speeds different from guaranteed speeds is how uncertainty should be priced, because uncertainty is the main property of internet traffic. This, I believe, is a problem of postpay vs. prepay as much as it is tiering vs. non-tiering.


As far as my limited knowledge of economic theory goes, uncertainty is best managed with short-term contracts and ex-post payments that depend on the outcome. In the case of uncertain traffic (and thus speed), the unit of analysis for pricing should be "load carried" (e.g. MBs), paid ex-post at a per-MB price contracted for as short a duration as possible. That is how you get closest to paying for what you actually used. But this is in contrast with pricing based on "speed promised" (e.g. Mbps), paid ex-ante, which is usually the case today.
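A rough numeric illustration of the contrast karsal is drawing, with both prices invented, might be:

```python
# Ex-ante, speed-based billing vs. ex-post, load-based billing.
# Both the flat fee and the per-GB price are invented; the point is only that
# under per-GB billing what you pay tracks what the network actually carried.
FLAT_FEE_FOR_UP_TO_24_MBPS = 35.00   # paid in advance for an "up to" speed
PRICE_PER_GB_CARRIED       = 0.40    # paid afterwards for the load carried

for user, gb_carried in (("light user", 15), ("heavy user", 300)):
    flat    = FLAT_FEE_FOR_UP_TO_24_MBPS
    metered = gb_carried * PRICE_PER_GB_CARRIED
    print(f"{user}: flat ${flat:.2f} vs. metered ${metered:.2f}")
```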


Surely, operators won't like this, because they're used to selling Ferrari speedometers for the traffic of Mumbai, but end users are waking up. They know they're not getting what they were promised. If operators were paid based on the loads they carry, and only after they actually carried them, instead of on promised speed, they would have to constantly improve their network throughput. That would also create natural competition, so that they would improve their networks well before congestion became unbearable.

Jeff Baumgartner 12/5/2012 | 4:28:40 PM
re: 'Up To' No Good

I think the ISPs are going to have to take a closer look at monetizing consumption at high-traffic times, though weaning people off the all-you-can-eat-at-any-time model will be tough, because they'll have to adjust away from what they're used to now.


And offloading some of that to times of day when there's lighter usage makes sense, if consumers can be convinced to plan ahead.  Or perhaps there's a turbo-button market that the broadband ISPs can pursue. But unless forced to do otherwise, I find it hard to believe that they'll migrate to any models that end up reducing overall margins on their broadband products.  It's certainly a business model that needs to be studied, though, because consumers should know what they're paying for, and pay for what they actually get.  JB

cnwedit 12/5/2012 | 4:28:40 PM
re: 'Up To' No Good

Good point, Jeff. But it raises another question. What's more important to monetize, the overall consumption in a given month, or the amount of bandwidth being consumed during peak times, when the network is subject to congestion?


I've always thought it made more sense for a broadband ISP to offer free bandwidth for video downloads that occur in the wee hours of the morning, and restrict bandwidth hogs during prime time. You can use all the Netflix you want as long as you do it when the network isn't congested.


 

paolo.franzoi 12/5/2012 | 4:28:36 PM
re: 'Up To' No Good

 


So, when FiOS started rolling out, this was a HUGE problem.  There were many challenges with getting a speed test to validate a 15 Mbit/s service even when everything was working properly on the carrier side.  These included things like:


- Viruses


- Spyware


- Improper IP stack configuration


- Internet "Distance" from the Speed Test site


- Poorly built Ethernet cables forcing connections into half duplex


So, I recognize the frustration here for consumers.  I also recognize the problem for service providers.  I can tell you that the issues I listed often caused people to come nowhere near the speed offered by the network.
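For readers wondering what such a speed test amounts to, the sketch below is a deliberately naive single-connection check; the test URL is a placeholder, not a real test server. Every endpoint issue in Seven's list (spyware, a misconfigured IP stack, a half-duplex link, distance to the test server) drags this number down no matter what the access network can deliver.

```python
# Naive single-connection throughput check. The URL is a placeholder.
# Endpoint problems lower the result regardless of the speed the access
# network is actually capable of delivering.
import time
import urllib.request

TEST_URL = "http://example.com/100MB.bin"   # placeholder test file

def measure_mbps(url):
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        total_bytes = len(response.read())
    elapsed = time.monotonic() - start
    return (total_bytes * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"Measured roughly {measure_mbps(TEST_URL):.1f} Mbit/s on this path")
```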


Who is going to take responsibility for that part of things? 


seven

Duh! 12/5/2012 | 4:28:33 PM
re: 'Up To' No Good

Seven,


This gets back, in part, to last week's discussion.


You very correctly point out that it is extremely difficult to measure performance from the Network Interface to the peering point (or CDN server) in isolation from impairments in the endpoints, backbone and home network.  In large part, this is an artifact of TCP.  Even if IP headers contained sufficiently precise and accurate timestamps to give reasonably good delay measurements, rate measurement would be, at best, an estimate: pretend that the access network were the only contributor to RTT, and calculate a "best case" rate based on that.
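One concrete reason the estimate is so rough: a single TCP connection can never move more than one window of data per round trip, so its ceiling is simply window size divided by RTT. The numbers below are arbitrary examples, not measurements.

```python
# Best-case throughput of one TCP flow is bounded by window / RTT, which is
# why "distance" to the measurement point matters so much. Example numbers only.
def tcp_ceiling_mbps(window_bytes, rtt_ms):
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

for rtt_ms in (5, 20, 100):
    ceiling = tcp_ceiling_mbps(64 * 1024, rtt_ms)   # classic 64-KB receive window
    print(f"64 KB window, {rtt_ms:>3} ms RTT -> at most {ceiling:.1f} Mbit/s")
```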


The meta-problem is that best-effort service (and its variants, including capped best effort) describes not the absolute rate, delay and delay jitter offered to the individual data flow, but rather the rate offered to the individual data flow relative to other data flows at the bottleneck along the path, where the relative rate is something roughly congruent to a fair share (noting the ongoing controversy over what "fair share" means).  It is not even possible, without the benefit of admission control, to calculate a lower bound on the fair-share rate at any bottleneck in a network.  That's a good part of the reason why BitTorrent created such a mess.  But we also learned that most applications (other, obviously, than video and audio) tend to be elastic, and thus better matched to best-effort services than reserved ones.


So, as much as I've thought about these kinds of things over many years, I don't have a good answer.   Clearly, a GPON shared amongst 32 subscribers is going to yield better busy hour performance than DOCSIS 3.0 in most currently deployed configurations, but it's extremely difficult to quantify. 

paolo.franzoi 12/5/2012 | 4:28:29 PM
re: 'Up To' No Good

 


Duh!,


I understand your concerns here, but I think there are definitely three separate networks:


1 - The Internet


2 - The ISP


3 - The Premises


Theoretically, there are four, with a separation between the broadband access supplier and the ISP, but in practice these are one and the same in the US.


The ISP consists of all the items between the peering point and the premises, and it is a single business entity in terms of responsibility.  Even if it is an Earthlink, it has (in my mind) taken responsibility for the quality of a wholesale DSL connection.


From that standpoint, I believe it is certainly possible to quantify issues and metrics that lie in the ISP domain.  It IS possible to generate a lower bound on bandwidth, depending on the implementation of a best-effort network.  For example, there are scheduling algorithms that guarantee a minimum amount of lower-priority traffic gets scheduled.  There are ways of putting hard limits on the high-priority traffic.  These can occur on a session basis (akin to run-time admission control) or a configuration basis (limiting the amount of bandwidth allowed to high-priority traffic).
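As a purely illustrative sketch of that idea (not any particular vendor's scheduler), a per-round cap on high-priority traffic leaves a guaranteed residue for best-effort traffic:

```python
# Toy scheduler round: high-priority traffic is served first but hard-capped,
# so low-priority (best-effort) traffic keeps a guaranteed minimum share of
# every round. The round size and 70% cap are invented for illustration.
from collections import deque

ROUND_BYTES   = 10_000   # bytes served per scheduling round
HIGH_PRIO_CAP = 0.7      # high priority may take at most 70% of a round

def serve_round(high_q, low_q):
    high_budget = int(ROUND_BYTES * HIGH_PRIO_CAP)
    sent_high = sent_low = 0
    while high_q and sent_high + high_q[0] <= high_budget:
        sent_high += high_q.popleft()
    while low_q and sent_high + sent_low + low_q[0] <= ROUND_BYTES:
        sent_low += low_q.popleft()
    return sent_high, sent_low

high = deque([1500] * 20)       # high-priority queue is saturated
low  = deque([1500] * 20)       # best-effort queue is saturated too
print(serve_round(high, low))   # -> (6000, 3000): best effort still gets served
```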


What that would mean is that, beyond the access, the entire network has to be engineered to deliver an amount of bandwidth per sub that is very different from what it delivers today.


It is for this reason that people hate BitTorrent.  They engineered the networks (and thus priced the bandwidth) on a usage model that looks like web surfing or e-mail.  Now that there is a lot of video and P2P traffic, the models are broken.  The reaction to date is to start putting in caps to limit the machine-to-machine traffic.  It is a simpler alternative to redesigning the network to accommodate this.


But now here we are, running higher and higher bit rates and getting more and more limited in our ability to use that bandwidth.  Simply a broken model.


 


seven

Duh! 12/5/2012 | 4:28:24 PM
re: 'Up To' No Good

Seven,


Unless the network is flow-aware (cf. the nostalgic talk about ATM last week), with some kind of admission control, the network can't guarantee a minimum rate per flow.  The scheduling algorithms of which you speak operate on a per-flow basis. More to the point, a user who wants more than their "fair share" need only open parallel flows (BitTorrent is the reductio ad absurdum of this, as it used positive feedback to open more connections during periods of congestion).  The IP/Ethernet-based network we have now depends on purely statistical mechanisms like RED to provide a modicum of fairness amongst flows, the number of which is unbounded.  That is just one example of the unfairness inherent in the existing network, although also the most blatant.
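The parallel-flow problem is easy to see with numbers (purely illustrative):

```python
# Per-flow fairness at a shared bottleneck: a user who opens nine flows
# against a neighbour's single flow takes nine tenths of the link.
# The link rate and flow counts are illustrative.
LINK_MBPS = 100
flows_per_user = {"user_a": 1, "user_b": 9}
total_flows = sum(flows_per_user.values())

for user, n_flows in flows_per_user.items():
    share_mbps = LINK_MBPS * n_flows / total_flows
    print(f"{user}: {n_flows} flow(s) -> {share_mbps:.0f} Mbit/s")
```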


Researchers, especially Bob Briscoe and Matt Mathis, have been doing interesting work on capacity-sharing architectures that explicitly accept that "TCP friendly" is not fair.  See http://trac.tools.ietf.org/gro...

Duh! 12/5/2012 | 4:28:23 PM
re: 'Up To' No Good

Seven,


That only works if what you're trying to measure is the performance of the segment (LIS, in IP terms) between the RG and the edge router, which does not include the metro and regional IP networks.  It also isn't a measurement that would have much utility to end users.
