Pervasive Video Could Change Internet Fundamentals
With video content going everywhere, is the Internet's distance insensitivity being forever changed?

Carol Wilson
7/10/2014

As expected, consultant engineer Peter Sevcik's report placing the blame for the great Netflix slowdown of 2014 on Netflix itself has proven a bit controversial with Light Reading readers, who have enjoyed taking their potshots at him (and me) since the story came out on Tuesday. (See Netflix's Problem Is Its Transit Network Report.)

At particular issue is Sevcik's claim that Netflix Inc. (Nasdaq: NFLX) traffic needs only 2 Mbit/s of last-mile bandwidth, on average, to stream video into the home. He derives that figure from measuring the actual performance of Netflix movies over five representative ISPs: a DSL service provider, two cable companies and two fiber providers, one of them Google Fiber Inc. Sevcik, who as president of NetForecast Inc. does engineering consulting for ISPs, also used a variety of movie players in his research, ranging from a slow PC to very high-end connected TVs.

What he learned was that while the initial phase of the download -- filling the video player's buffer -- could be affected by slow access speeds, once the movie started playing, bandwidth consumption dropped to the levels Sevcik cites: 2 Mbit/s and below.
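That two-phase behavior is easy to picture with a toy model (mine, not Sevcik's): the player pulls data as fast as the link allows until its buffer is full, then settles to the video's encoded bitrate. The link speed and buffer depth below are illustrative assumptions; only the roughly 2 Mbit/s steady state comes from his measurements.

    # Toy model of streaming-video bandwidth use: a fast buffer-fill
    # phase, then a steady state at the encoded bitrate. All parameters
    # are illustrative assumptions except the ~2 Mbit/s steady state.
    LINK_MBPS = 25.0       # hypothetical last-mile link speed
    BITRATE_MBPS = 2.0     # steady-state video bitrate (Sevcik's figure)
    BUFFER_SECONDS = 30.0  # hypothetical playout buffer depth

    buffered = 0.0  # seconds of video waiting in the player's buffer
    for t in range(10):
        # Pull at full link speed until the buffer fills, then throttle
        # back to the bitrate so the buffer neither grows nor drains.
        rate = LINK_MBPS if buffered < BUFFER_SECONDS else BITRATE_MBPS
        buffered = min(BUFFER_SECONDS, buffered + rate / BITRATE_MBPS - 1.0)
        print(f"t={t}s  pulling {rate:4.1f} Mbit/s  buffer={buffered:4.1f}s")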

Sevcik raises a bigger issue, however, in viewing video distribution in the broader context of the Internet's evolution as a delivery mechanism for a growing variety of complex material. Given his involvement in the early days of Internet development, I think his thoughts are worth sharing.

So, setting aside for the moment the debate over who is slowing down Netflix video traffic, consider instead the bigger lessons from a discussion of how video is delivered over the Internet.

As Sevcik notes, one of the attractions of the Internet at the outset -- or, for that matter, of packet switching in general, going back to the X.25 days -- is that it is such an extremely efficient way to move data around that networking became distance-insensitive.

"Voice-over-IP hit the phone companies hard because suddenly they couldn't charge for the long-distance part of the call, especially for international calls," Sevcik comments.

That began to change a bit as the Internet took off and people downloaded the same content in very high volume, making it inefficient for that content to be stored at a great distance. That shift gave birth to Akamai Technologies Inc. (Nasdaq: AKAM), Limelight Networks Inc. (Nasdaq: LLNW) and others, who developed content delivery networks that could identify and cache popular content much closer to the end user than before. Netflix was an early user of CDNs, namely Akamai's. Telcos even developed their own CDNs for caching over-the-top video. (See Telco CDNs Make OTT Tolerable.)
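The mechanism underneath every CDN is the same: keep a copy of popular objects near the requester and go back to the distant origin only on a miss. Here is a minimal sketch of the idea; the latency figures and content name are hypothetical, chosen only to show the order of magnitude that distance adds.

    # Minimal sketch of edge caching, the idea behind Akamai-style CDNs.
    # RTT figures are hypothetical, chosen only to show relative scale.
    ORIGIN_RTT_MS = 80.0  # long haul to a distant origin server
    EDGE_RTT_MS = 5.0     # short hop to a cache near the user

    edge_cache = {}  # content_id -> object held at the edge

    def fetch(content_id):
        if content_id in edge_cache:  # hit: served from nearby
            return edge_cache[content_id], EDGE_RTT_MS
        obj = f"<bytes of {content_id}>"  # miss: fetch from origin...
        edge_cache[content_id] = obj      # ...and keep a local copy
        return obj, ORIGIN_RTT_MS + EDGE_RTT_MS

    _, first_viewer = fetch("popular_movie")   # 85.0 ms, paid once
    _, second_viewer = fetch("popular_movie")  # 5.0 ms for everyone after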

But as Netflix traffic has grown to represent about one-third of bandwidth in use at a given time -- note that doesn't mean one-third of all Internet capacity, just the bandwidth in use -- as measured by Sandvine Inc. (London: SAND; Toronto: SVC)'s regular reports, even better solutions are needed.

"Once you go to video and a lot of it, once you are at one-third of bandwidth being consumed in the evening, then you need a different strategy," he says.

Netflix has, in fact, pursued such a strategy, moving first to its own Open Connect CDN, then offering to embed Netflix CDN servers directly in ISPs' data centers, and then directly connecting its Open Connect CDN service to a broadband ISP, namely Comcast. (See Comcast-Netflix Peering Deal: A Game-Changer? and Netflix Touts New Content Delivery Network.)

"Two of the three strategies are used to circumvent the business problem, particularly Open Connect, so the distance to the users is so much shorter," Sevcik states. "This is a wake-up call to everyone. I have not given much thought to whether distance will make a difference. It does make a difference in response time and responsiveness, if you want to see web pages paint quickly and render quickly."

And that difference could become increasingly important, as content continues to get richer and as more of it moves into the cloud, where we will all expect to access apps and data on demand from mobile devices as well as those connected to fixed broadband networks.

"We are seeing a shift in the economics here, the economic question of the distance is now coming to the foreground which had not been here before," Sevcik notes. "This will impact any applications that want to take true advantage of extremely high speed circuits."

If there is a perceived problem involving flows that are essentially 2 Mbit/s, as reflected by Sevcik's performance analysis, then there are major issues lying ahead.

"What happens if you really do start wearing phones at 1 Gig a second and start building apps that want to use 1 Gig and you can't fill a 1-Gig pipe at any reasonable distance away from the destination?" he asks. "All services will have to be localized, which is kind of backwards from where the Internet started."

And if everything does move to the cloud, that cloud will be living right next door -- or, in cloud terms, hovering everywhere.

That's a very different model from how things are currently seen.

— Carol Wilson, Editor-at-Large, Light Reading

Infostack,
User Rank: Light Beer
7/14/2014 | 11:29:08 AM
The more things change....
... the more they stay the same.

Network principles do not change.  They've been around for hundreds, even thousands, of years; we just think this is all new.  It's not.

First, there are tradeoffs between layers 1-3 (physical capacity, data rate, switching) that involve distance, number of users, capacity required, time, etc.  These tradeoffs are governed by layer 4 protocols, but are also impacted by what occurs in layers 5-7.  Basically this debate is occurring because the narrowband store-and-forward folks (or those who deny that the data pimple scaled on the voice elephant's butt in the 1980s-90s) don't have answers to broadband, 2-way, and increasingly real-time (or on-demand) challenges.

What goes on in layers 5-7 defines and governs marginal demand.  Nothing is average.  And (marginal) supply (capex and opex) achieves ROI based on pricing of expected marginal demand ex ante.  This is the $64,000 issue which escapes most and isn't being discussed.

Second, there is the fact that with Metcalfe's law, value accretes at the core geometrically, while costs mostly collect at the edge in a linear fashion.
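[To make the commenter's geometric-versus-linear asymmetry concrete, here is a worked toy example; both constants are arbitrary, chosen only to show the shapes of the two curves.]

    # Toy illustration of the claim above: Metcalfe-style value grows
    # roughly with the square of users, edge cost grows linearly.
    # Both constants are arbitrary, chosen only to show the curves.
    VALUE_PER_PAIR = 0.001  # value of one potential user-to-user link
    COST_PER_USER = 1.0     # cost of one edge connection

    for n in (100, 1_000, 10_000, 100_000):
        core_value = VALUE_PER_PAIR * n * (n - 1) / 2  # ~n^2 at the core
        edge_cost = COST_PER_USER * n                  # ~n at the edge
        print(f"n={n:6d}  core value={core_value:12,.0f}  edge cost={edge_cost:8,.0f}")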

What escapes most as well is that the principle of Metcalfe's law works, and should hold, across and between networks or agents (east-west vertical boundaries) and between apps/content and infrastructure (north-south between layers) when it comes to policy and business models that can foster universal service.  This necessitates balanced settlements which provide price signals and incentives.

Which means having these discussions leads us to a paradigm of value-based pricing, which the core providers (OTT) are applying to their business models today since they have a holistic, or complete, view of demand.  Conversely, the vertically integrated edge access providers (ISPs) remain rooted in inefficient, cost-based pricing models.  They have only a partial view of demand ex ante.  Netflix knows and can anticipate all the demand in and around Springfield, MA, for instance; something the edge providers can't do.

Third, there is immutable technological change (aka Moore's Law) occurring at every layer and boundary point of the converging informational stack, even as demand is diverging.  So the performance/price spread between the two models is growing 20-30% annually.  Netflix was able to start taking advantage of this about seven years ago, so its scale has improved by 2-3 decimal places since it was a DVD-only company.

The decimal on the cost of transporting and switching a voice-equivalent minute everywhere shifts naturally one place to the left every 2-3 years.  In the WAN (core), after 30 years of competition, this number stands at $0.0000004, while in the non-competitive MAN (edge access) it stands at $0.001.  This spread is not sustainable.  Google's fiber model has proven the number can easily move two places left to $0.00001, even if Google's model is not fully scaled the way I believe it can be.  Ultimately there should be just a 1-2 decimal place difference between WAN and MAN.

For comparison purposes, 1-way video is two places to the right, while both 2-way HD video collaboration and 1-way 4K video are three places to the right.  Again, every three years the decimal shifts back one place to the left.  That's the real race the ISPs are bowing out of.  So a new business model and/or paradigm is necessary for the edge.  Not the core!
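[Taking the commenter's figures at face value, his "decimal shift" is just exponential decay in unit cost: divide by ten every three years or so. A sketch using his numbers; the starting costs are his claims, not verified data.]

    # The "decimal shift" claim as arithmetic: unit cost divides by 10
    # every ~3 years, i.e. cost(t) = cost_0 * 10**(-t/3).
    # Starting figures are the commenter's claims, not verified data.
    WAN_COST = 0.0000004  # $/voice-equivalent minute, core (his figure)
    MAN_COST = 0.001      # $/voice-equivalent minute, edge (his figure)

    for years in (0, 3, 6, 9):
        f = 10 ** (-years / 3)
        print(f"+{years}yr  WAN=${WAN_COST * f:.1e}  MAN=${MAN_COST * f:.1e}")
    # Both curves fall, but the 2,500x WAN/MAN spread never closes on
    # its own -- which is the commenter's argument for a new edge model.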

So, everything being discussed here is a confusion of the above principles and datapoints.

I am puzzled that a renowned networking guru suddenly wakes up and says "distance matters" and that "this is a wake-up call" to everyone.  Hello?  CDNs developed 17 years ago to fill a gap in a 1.5-way public business model (aka the www or the Internet) that had no settlements, yet was consuming capacity voraciously as demand exploded.  They are a recognition that tradeoffs between storage, processing, transport and switching have always existed, but only become apparent AFTER volumes make them appear so (see my last line below).

Distance has always mattered and always will, whether for 1-way road or water networks 2,000 years ago, information networks 600 years ago, or the first digital/electronic networks 170 years ago.  It certainly mattered 100 years ago with the first synchronous 2-way networks, when AT&T put in a 50-mile demarc in the Kingsbury Commitment.  Now why do you think they did that?!?

So, as you quite rightly say, Carol, it is incumbent on us to have the discussion of where and how these tradeoffs should occur as the rapidly approaching demand freight trains of 4K VoD, 2-way HD video collaboration, mobile-screen-first, and IoT plague the current networks and business models with capacity (particularly upstream), latency, QoS, security, and redundancy issues.  All of these can be modeled and arrived at with the same WAN/MAN constructs/formulations I point to above.  Importantly, with mobility and 2-way, real-time video, they are all interdependent and not mutually exclusive, as the FCC continues to maintain.  Time for a holistic and objective framework to quantify and analyze all these moving pieces.

In conclusion, what Comcast and the other large edge access providers are trying to do is simply shift (or hold) the WAN/MAN demarc at or near the core.  Netflix scaled off everyone else's competition-driven growth and tradeoffs in layers 2-4 (transport and CDN; which the author does a relatively poor job of explaining) between 2009 and 2013.  Now everyone else should be benefiting from pushing Netflix's intelligence to the edge to assure the above four trends scale rapidly, cost-effectively and ubiquitously.

But that's not happening for at least 70% of the market, because it turns out the real reason for this issue is that Netflix's model is disruptive to their linear-TV monopoly.  Isn't it odd that the expert's charts only start in September of last year, when the net neutrality arguments were heard and it was apparent that the FCC would lose?  In general I found the analysis to be extremely poor and its conclusions inconsistent and out of touch with what is really going on with all the consolidation and convergence/divergence that is occurring, which has only become apparent to a few folks over the past 12 months.

Michael Elling
Carol Wilson,
User Rank: Blogger
7/10/2014 | 9:25:07 PM
Re: video over ip performance
Okay, I understand what you are saying, brookseven, and it makes sense. That's why the word "cloud" no longer has any real meaning. 
brookseven,
User Rank: Light Sabre
7/10/2014 | 6:03:08 PM
Re: video over ip performance
Carol,

I don't like your cloud wording in your post here.  Mind if I rephrase?

"The fact that residential customers are consuming large amounts of streaming video content is creating problems.  These problems extend to Enterprise Customers that are using Public Internet Connected Applications and Storage (aka "The Cloud").  There are long term potential challenges for Enterprises that want to use this public connectivity to provide ubiquitous availability of information."

The reason I am providing this is that, from a residential standpoint, all the web is in "The Cloud" and always has been.

seven

 
VictorRBlake,
User Rank: Lightning
7/10/2014 | 6:00:57 PM
Re: Distance
Carol -- thanks for elaborating. I do disagree that someone distributing copies of content is different in any way from the original goals of the Internet, both for its original military purposes and for its commercial adaptations into what we call the Internet today. The architecture of the Internet does not require or suggest one copy broadcast to all. In fact the copy, store, forward concept (aka CDNs and cached content) -- what you call "close" -- was always part of the architecture. Look at one of the oldest and earliest IP applications -- email (SMTP). This is how it was designed from the beginning.

I remember hearing this same argument about VoIP -- how we were going to have to "change the architecture of the Internet" (I'm not quoting you here -- just the classic comments I heard). Nonsense: VoIP turns out to be less traffic on IP networks than SNMP. It's piddly. Sure, video is more than voice, but in the scope of things it doesn't fundamentally require any changes in how IP networks operate. In fact, the facts show the opposite: it works quite fine over IP -- thus the surprising (to people who didn't think it would work or work well) adoption rate.
Carol Wilson,
User Rank: Blogger
7/10/2014 | 5:44:25 PM
Re: video over ip performance
To be clear, "fundamentals" was a headline word - not Peter Sevcik's. And to brookseven's point, he has been clear in saying the local loop is not the problem. 

But what he is saying here is that the original ability of the Internet to connect us efficiently to any content, wherever it is, is being changed by the pervasiveness of video and by the fact that a lot of content is moving into the "cloud" -- which for this purpose means it's moving off enterprise servers and is expected to be accessible by remote devices of many different kinds. 

If we are all expecting to access content that goes well beyond 2 Mbit/s streams, then a lot of that content is going to be localized, to use his words. And that is a fundamentally different picture from the Internet of the past. 
VictorRBlake,
User Rank: Lightning
7/10/2014 | 5:38:12 PM
video over ip performance
Video-over-IP performance is very dependent on the protocols used. When it is TCP-based, there can be very serious performance impacts when window sizes are decreased after packet loss. UDP (which the bulk of the OTT apps use) performs far better because -- as you describe -- a filled buffer can do as it should and smooth out the bumps in burstiness, if you will. Packet loss is highest in the areas of the network with the most congestion. In my experience that can be anywhere in the network. But when you have performance problems that are time-of-day related, it isn't atypical for those to be at the source. When Netflix gets hit with peak demand all at the same time, they have to (as I am sure they do) design their network for "peak." The problem with that, of course, is that it is EXPENSIVE. So providers like Netflix will try to do things like spread out the load by increasing buffer sizes in the players (clients). This is very helpful and efficient.
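[The TCP loss sensitivity the commenter describes is commonly approximated by the Mathis formula, throughput ~ (MSS/RTT) x (C/sqrt(p)), where p is the packet loss rate and C is roughly 1.22. A sketch with typical assumed values for MSS and RTT, not measurements:]

    # Mathis et al.'s loss-limited TCP throughput approximation:
    #   throughput ~= (MSS / RTT) * (C / sqrt(p)),  C ~= 1.22
    # MSS and RTT below are typical assumed values, not measurements.
    import math

    MSS_BYTES = 1460  # typical Ethernet TCP segment
    C = 1.22

    def tcp_mbps(rtt_ms, loss_rate):
        bits_per_sec = (MSS_BYTES * 8 * C) / ((rtt_ms / 1000) * math.sqrt(loss_rate))
        return bits_per_sec / 1e6

    for p in (0.0001, 0.001, 0.01):
        print(f"loss={p:.2%}  at 50 ms RTT -> ~{tcp_mbps(50, p):5.1f} Mbit/s")
    # A tenfold rise in loss cuts TCP throughput about 3x; a UDP player
    # just keeps streaming and lets its buffer ride out the gaps.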


But I don't see how this changes any "fundamentals" of the Internet. A 2Mbit/s stream is basically peanuts. Obviously, if you have 50 or 100 million 2Mbit/s streams, it adds up. As you know, all of these OTT solutions use unicast transport. What's obviously more efficient (which cable operators use for broadcast -- not for VoD) is IP multicast. There remain huge problems with multicast -- primarily managing it across ASNs.  But even if we do see some magical transition to multicast, that does not "change Internet fundamentals," because multicast has been around for a seriously long time. Most modern approaches use a "CDN," which is tech speak for "some other system for distributing the content in advance" and then serving it from closer to the edge. There are a lot of ways to do this, but that's a discussion in itself -- and it still doesn't change any Internet fundamentals, because we've been doing CDNs and distributed caching of various kinds for some 15 years now (we were doing streaming video on the Internet before it was called the Internet, in the late 1980s).
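[The unicast-versus-multicast arithmetic is worth spelling out. The 2 Mbit/s per stream echoes Sevcik's figure; the viewer and channel counts are hypothetical round numbers.]

    # Why unicast VoD "adds up" while broadcast/multicast does not.
    # Viewer and channel counts are hypothetical round numbers.
    VIEWERS = 50_000_000  # simultaneous OTT viewers, one stream each
    CHANNELS = 500        # multicast channel lineup, linear TV
    STREAM_MBPS = 2.0     # per-stream rate (Sevcik's figure)

    unicast_tbps = VIEWERS * STREAM_MBPS / 1e6     # scales with viewers
    multicast_tbps = CHANNELS * STREAM_MBPS / 1e6  # scales with channels
    print(f"unicast: {unicast_tbps:.0f} Tbit/s  multicast: {multicast_tbps:.3f} Tbit/s")
    # Unicast load here is 100 Tbit/s; multicast is 0.001 Tbit/s, but
    # multicast only fits scheduled linear TV, not on-demand viewing.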
brookseven,
User Rank: Light Sabre
7/10/2014 | 5:28:57 PM
Distance
Carol,

Distance is probably not a great word to use here.  

Packet switching worked because it was designed for transactional-style interactions (web, email).  Packet switching existed well before X.25; it came out of the world of mainframe processors.  The vast bulk of traffic in those days came from airlines, banks, financial services and insurance companies.  Airline reservations was a classic application of these architectures.

The bandwidth that has been built into the Internet is based on that model.  Ask a Tier 1 ISP this question:  "How many bits per second do you have allocated per user?"  Then take that 2Mbit/s number and compare it to your answer.  The local loop side is not the problem...it's across the peering points and in the core.

Video is not a transactional model.  It is streaming -- like voice.  But voice is so low-bandwidth that nobody cares (just like IoT -- hey, I want 1 kbit/s forever, whoopdee do).  Now imagine that every Comcast sub right now wants 10Mbit/s.  That is like 600 x 10 to the 12th bits per second.  Even if they each want only 100 kbit/s, it's a big number.  See the problem?

Now with video, the problem has been unicast video (IPTV generally uses IGMP as a "fix").  There is a huge gain in locating popular video close to the user.  But now let's assume that EVERY Comcast customer wants a 1Mbit/s unique video stream (I am choosing 1 and 10 for easy math).  Here in Santa Rosa, we have about 50K homes; let's give Comcast 50% market share (I like easy numbers).  So 25K video streams can't come out of one server...now it's a server farm.  Broadcast video is looking sweet at this point, right?  Another way around it is to trickle content down to subs and have them store it locally (imagine an STB with 50TB of storage). 
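[The commenter's "easy math," worked through; all figures are his own round numbers.]

    # The commenter's "easy math" with his own round numbers.
    HOMES = 50_000     # Santa Rosa homes (his figure)
    SHARE = 0.5        # assumed Comcast market share (his figure)
    STREAM_MBPS = 1.0  # unique per-home stream (his easy number)

    streams = int(HOMES * SHARE)
    gbps = streams * STREAM_MBPS / 1000
    print(f"{streams:,} unique streams -> {gbps:.0f} Gbit/s from one site")
    # 25 Gbit/s of unique unicast video out of a single site is server-
    # farm territory -- hence "broadcast video is looking sweet."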

That is why I argue the local loop is NEVER EVER the problem.  Filling ALL the local loops ALL the time is the problem.  The whole structure was never built for it.  Why do you think we can get DS3 bandwidth at way less, per bit, than the cost of a T-1?

seven

 