Pervasive Video Could Change Internet Fundamentals

Carol Wilson
7/10/2014

As expected, consultant engineer Peter Sevcik's report placing the blame for the great Netflix slowdown of 2014 on Netflix itself has proven a bit controversial with Light Reading readers, who have enjoyed taking their potshots at him (and me) since the story came out on Tuesday. (See Netflix's Problem Is Its Transit Network – Report.)

At particular issue is Sevcik's claim that Netflix Inc. (Nasdaq: NFLX) traffic needs only 2 Mbit/s of last-mile bandwidth, on average, to stream video into the home. He derives that figure from measuring the actual performance of Netflix movies over five representative ISPs: a DSL service provider, two cable companies and two fiber providers, one being Google Fiber Inc. Sevcik, who as president of NetForecast Inc. does engineering consulting for ISPs, also used a variety of movie players at the client end in his research, including a slow PC and very high-end connected TVs.

What he learned was that while the initial phase of the download -- filling the video player's buffer -- could be affected by slow access speeds, once the movie started playing, bandwidth consumption dropped to the levels Sevcik cites -- 2 Mbit/s and below.
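
To make the two phases concrete, here is a minimal back-of-envelope sketch. The buffer depth, encode rate and access speeds below are assumed round numbers, not figures from Sevcik's report:

```python
# Sketch of the two phases: an initial burst to fill the player's buffer,
# then a steady state pinned to the video's encode rate.
# All inputs are illustrative assumptions, not NetForecast measurements.

BUFFER_SECONDS = 120        # assumed player buffer depth (seconds of video)
VIDEO_BITRATE_MBPS = 1.8    # assumed steady-state encode rate, ~2 Mbit/s
ACCESS_SPEEDS_MBPS = [3, 10, 25, 100]   # DSL-, cable- and fiber-class links

buffer_megabits = BUFFER_SECONDS * VIDEO_BITRATE_MBPS

for access in ACCESS_SPEEDS_MBPS:
    fill_time = buffer_megabits / access    # seconds to fill the buffer
    print(f"{access:>4} Mbit/s access: buffer fills in {fill_time:5.1f} s, "
          f"then the stream draws ~{VIDEO_BITRATE_MBPS} Mbit/s")
```

Whatever the access speed, the steady-state draw is set by the encode rate -- which is the heart of Sevcik's 2 Mbit/s figure.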

Sevcik raises a bigger issue, however, in viewing video distribution in the broader context of the Internet's evolution as a delivery mechanism for a growing variety of complex material. Given his involvement in the early days of the Internet's development, I think his thoughts are worth sharing.

So, setting aside for the moment the debate over who is slowing down Netflix video traffic, consider instead the bigger lessons of a discussion of how video is delivered over the Internet.

As Sevcik notes, one of the attractions of the Internet at the outset -- or, for that matter, of packet switching in general going back to the X.25 days -- is that it moves data around so efficiently that networking became distance-insensitive.

"Voice-over-IP hit the phone companies hard because suddenly they couldn't charge for the long-distance part of the call, especially for international calls," Sevcik comments.

That began to change a bit as the Internet took off and people downloaded the same content in very high volumes, making it inefficient for that content to be stored at a great distance and giving birth to Akamai Technologies Inc. (Nasdaq: AKAM), Limelight Networks Inc. (Nasdaq: LLNW) and others, which developed content delivery networks that could identify and cache popular content much closer to the end user than before. Netflix was an early user of CDNs, namely Akamai's. Telcos even developed their own CDNs for caching over-the-top video. (See Telco CDNs Make OTT Tolerable.)

But as Netflix traffic has grown to represent about one-third of bandwidth in use at a given time -- note that doesn't mean one-third of all Internet capacity, just the bandwidth in use -- as measured by Sandvine Inc. (London: SAND; Toronto: SVC)'s regular reports, even better solutions are needed.

"Once you go to video and a lot of it, once you are at one-third of bandwidth being consumed in the evening, then you need a different strategy," he says.

Netflix has, in fact, engaged in such a strategy, moving first to its own Open Connect CDN, then offering to embed Netflix CDN servers directly in the data centers of the ISPs, and then directly connecting its Open Connect CDN service to a broadband ISP, namely Comcast. (See Comcast-Netflix Peering Deal: A Game-Changer? and Netflix Touts New Content Delivery Network.)

"Two of the three strategies are used to circumvent the business problem, particularly Open Connect, so the distance to the users is so much shorter," Sevcik states. "This is a wake-up call to everyone. I have not given much thought to whether distance will make a difference. It does make a difference in response time and responsiveness, if you want to see web pages paint quickly and render quickly."

And that difference could become increasingly important, as content continues to get richer and as more of it moves into the cloud, where we will all expect to access apps and data on demand from mobile devices as well as those connected to fixed broadband networks.

"We are seeing a shift in the economics here, the economic question of the distance is now coming to the foreground which had not been here before," Sevcik notes. "This will impact any applications that want to take true advantage of extremely high speed circuits."

If there is a perceived problem involving flows that are essentially 2 Mbit/s, as reflected by Sevcik's performance analysis, then there are major issues lying ahead.
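
One standard way to see those issues is TCP's bandwidth-delay product: a sender can keep at most one window of data in flight per round trip, so the faster the pipe and the longer the path, the harder it is to keep the pipe full. A rough sketch, with assumed round-trip times:

```python
# Sketch of why distance caps single-flow throughput: sustaining a rate
# over TCP requires keeping rate * RTT bytes in flight (the
# bandwidth-delay product). RTT figures are illustrative assumptions.

RATE_BPS = 1_000_000_000  # 1 Gbit/s target

paths = {
    "same metro (~2 ms RTT)": 0.002,
    "cross-country (~70 ms RTT)": 0.070,
    "intercontinental (~150 ms RTT)": 0.150,
}

for name, rtt_s in paths.items():
    window_mb = RATE_BPS * rtt_s / 8 / 1e6  # megabytes in flight
    print(f"{name}: ~{window_mb:.2f} MB must be in flight to hold 1 Gbit/s")
```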

"What happens if you really do start wearing phones at 1 Gig a second and start building apps that want to use 1 Gig and you can't fill a 1-Gig pipe at any reasonable distance away from the destination?" he asks. "All services will have to be localized, which is kind of backwards from where the Internet started."

And if everything does move to the cloud, that cloud will be living right next door -- or, in cloud terms, hovering everywhere.

That's a very different model from the way the Internet is seen today.

— Carol Wilson, Editor-at-Large, Light Reading

Infostack, User Rank: Moderator
7/14/2014 | 11:29:08 AM
The more things change....
... the more they stay the same.

Network principles do not change.  They've been around for hundreds and thousands of years; we just think this is all new.  It's not.

First, there are tradeoffs between layers 1-3 (physical capacity, data rate, switching) that involve distance, number of users, capacity required, time, etc.  These tradeoffs are governed by layer 4 protocols, but also impacted by what occurs in layers 5-7.  Basically this debate is occurring because the narrowband store-and-forward folks (or those who deny that the data pimple scaled on the voice elephant's butt in the 1980s-90s) don't have answers to broadband, 2-way, and increasingly real-time (or on-demand) challenges.

What goes on in layers 5-7 defines and governs marginal demand.  Nothing is average.  And (marginal) supply (capex and opex) achieves ROI based on pricing of expected marginal demand ex ante.  This is the $64,000 issue which escapes most and isn't being discussed.

Second is the fact that, per Metcalfe's law, value accretes at the core geometrically, while costs mostly collect at the edge in a linear fashion.

What also escapes most is that the principle of Metcalfe's law works, and should hold, across and between networks or agents (east-west vertical boundaries) and between apps/content and infrastructure (north-south between layers) when it comes to policy and business models that can foster universal service.  This necessitates balanced settlements, which provide price signals and incentives.

Which means having these discussions leads us to a paradigm of value-based pricing, which the core providers (OTT) are applying to their business models today since they have a holistic or complete view of demand.  Conversely, the vertically integrated edge access providers (ISPs) remain rooted in inefficient, cost-based pricing models.  They have only a partial view of demand ex ante.  Netflix knows and can anticipate all the demand in and around Springfield, MA, for instance; something the edge providers can't do.

Third, there is immutable technological change (aka Moore's law) occurring at every layer and boundary point of the converging informational stack, even as demand is diverging.  So the performance/price spread between the two models is growing 20-30% annually.  Netflix was able to start taking advantage of this about 7 years ago, so its scale has improved by 2-3 decimal places since it was a DVD-only company.

The cost to transport and switch a voice-equivalent minute everywhere naturally shifts one decimal place to the left every 2-3 years.  In the WAN (core), after 30 years of competition, this number stands at $0.0000004, while in the non-competitive MAN (edge access) it stands at $0.001.  This spread is not sustainable.  Google's fiber model has proven the number can easily move 2 places left to $0.00001, even if Google's model is not fully scaled the way I believe it can be.  Ultimately there should be just a 1-2 decimal place difference between WAN and MAN.
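
The decimal arithmetic above is easy to check. This sketch simply transcribes the per-minute figures asserted in the comment (they are the commenter's claims, not independent data):

```python
# Spread between the claimed WAN (core) and MAN (edge) costs to move a
# voice-equivalent minute. All figures are the commenter's assertions.
from math import log10

WAN_COST = 0.0000004    # claimed competitive core cost per minute
MAN_COST = 0.001        # claimed non-competitive edge cost per minute
FIBER_COST = 0.00001    # claimed achievable edge cost (Google Fiber model)

print(f"edge/core spread today: {MAN_COST / WAN_COST:,.0f}x "
      f"(~{log10(MAN_COST / WAN_COST):.1f} decimal places)")
print(f"with a Google Fiber-style edge: {FIBER_COST / WAN_COST:,.0f}x "
      f"(~{log10(FIBER_COST / WAN_COST):.1f} decimal places)")
```

On those numbers the spread falls from roughly 3.4 decimal places to about 1.4, which is what puts it inside the 1-2 place range argued for here.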

For comparison purposes, 1-way video is 2 places to the right, while both 2-way HD video collaboration and 1-way 4K video are 3 places to the right.  Again, every 3 years the decimal shifts back one place to the left.  That's the real race the ISPs are bowing out of.  So a new business model and/or paradigm is necessary for the edge.  Not the core!

So, everything being discussed here is a confusion of the above principles and datapoints.

I am puzzled that a renowned networking guru suddenly wakes up and says "distance matters" and that "this is a wake-up call" to everyone.  Hello?  CDNs developed 17 years ago to fill a gap in a 1.5-way public business model (aka the www or the Internet) that had no settlements, yet was consuming capacity voraciously as demand exploded.  It's a recognition that tradeoffs between storage, processing, transport and switching have always existed, but only become apparent AFTER volumes make them appear so (see my last line below).

Distance has always mattered and always will, whether for 1-way road or water networks 2,000 years ago, information networks 600 years ago, or the first digital/electronic networks 170 years ago.  It certainly mattered 100 years ago with the first synchronous 2-way networks, when AT&T put in a 50-mile demarc in the Kingsbury Commitment.  Now why do you think they did that?!?

So, as you quite rightly say, Carol, it is incumbent on us to have the discussion of where and how these tradeoffs should occur as the demand freight trains of 4K VoD, 2-way HD video collaboration, mobile screen first, and IoT rapidly approach and plague the current networks and business models with capacity (particularly upstream), latency, QoS, security, and redundancy issues.  All of these can be modeled and arrived at with the same WAN/MAN constructs/formulations I point to above.  Importantly, with mobility and 2-way, real-time video, they are all interdependent and not mutually exclusive, as the FCC continues to maintain.  Time for a holistic and objective framework to quantify and analyze all these moving pieces.

In conclusion, what Comcast and the other large edge access providers are trying to do is simply shift (or hold) the WAN/MAN demarc at or near the core.  Netflix scaled off of everyone else's competition-driven growth and tradeoffs in layers 2-4 (transport or CDN; which the author does a relatively poor job of explaining) between 2009 and 2013.  Now everyone else should be benefiting from pushing Netflix's intelligence to the edge to assure the above 4 trends scale rapidly, cost-effectively and ubiquitously.

But that's not happening for at least 70% of the market, because it turns out the real reason for this issue is that Netflix's model is disruptive to their linear TV monopoly.  Isn't it odd that the expert's charts only start in September of last year, when the net neutrality arguments were heard and it was apparent that the FCC would lose?  In general I found the analysis to be extremely poor and the conclusions inconsistent and out of touch with what is really going on with all the consolidation and convergence/divergence that is occurring, which has only become apparent to a few folks over the past 12 months.

Michael Elling
Carol Wilson, User Rank: Blogger
7/10/2014 | 9:25:07 PM
Re: video over ip performance
Okay, I understand what you are saying, brookseven, and it makes sense. That's why the word "cloud" no longer has any real meaning. 
brookseven, User Rank: Light Sabre
7/10/2014 | 6:03:08 PM
Re: video over ip performance
Carol,

I don't like your cloud wording in your post here.  Mind if I rephrase?

"The fact that residential customers are consuming large amounts of streaming video content is creating problems.  These problems extend to Enterprise Customers that are using Public Internet Connected Applications and Storage (aka "The Cloud").  There are long term potential challenges for Enterprises that want to use this public connectivity to provide ubiquitous availability of information."

The reason I am providing this is that, from a residential standpoint, all the web is in "The Cloud" and always has been.

seven

VictorRBlake, User Rank: Moderator
7/10/2014 | 6:00:57 PM
Re: Distance
Carol -- thanks for elaborating. I do disagree that someone distributing copies of content is different in any way from the original goals of the Internet, both for its original military purposes and in its commercial adaptations into what we call the Internet today. The architecture of the Internet does not require or suggest one copy broadcast to all. In fact the copy, store, forward concept (aka CDNs and cached content) -- what you call "close" -- was always part of the architecture. Look at one of the oldest and earliest IP applications -- email (SMTP). This is how it was designed from the beginning.

I remember hearing this same argument about VoIP -- how we were going to have to "change the architecture of the Internet" (I'm not quoting you here -- just the classic comments I heard). Nonsense: VoIP turns out to be less traffic on IP networks than SNMP. It's piddly. Sure, video is more than voice, but in the scope of things it doesn't fundamentally require any changes in how IP networks work. In fact, the facts show the opposite. It works quite fine over IP -- thus the surprising (to people who didn't think it would work, or work well) adoption rate.
Carol Wilson, User Rank: Blogger
7/10/2014 | 5:44:25 PM
Re: video over ip performance
To be clear, "fundamentals" was a headline word -- not Peter Sevcik's. And to brookseven's point, he has been clear in saying the local loop is not the problem.

But what he is saying here is that the Internet's original ability to connect us efficiently to any content, wherever it is, is being changed by the pervasiveness of video and by the fact that a lot of content is moving into the "cloud" -- which for this purpose means it's moving off enterprise servers and is expected to be accessible by remote devices of many different kinds.

If we are all expecting to access content that goes well beyond 2 Mb/s streams, then a lot of that content is going to be localized, to use his words. And that is a fundamentally different picture from the Internet of the past. 
VictorRBlake, User Rank: Moderator
7/10/2014 | 5:38:12 PM
video over ip performance
Video-over-IP performance is very dependent on the protocols used. When it is TCP-based, there can be very serious performance impacts when window sizes are decreased after packet loss. UDP (which the bulk of the OTT apps use) performs far better because -- as you describe -- a filled buffer can do as it should: smooth out the bumps in burstiness, if you will. Packet loss is highest in areas of the network where there is the most congestion. In my experience that can really be anywhere in the network. But when you have performance problems that are time-of-day related, it isn't atypical for those to be at the source. When Netflix gets hit with peak demand all at the same time, they have to (as I am sure they do) design their network for "peak." The problem with that, of course, is that it is EXPENSIVE. So providers like Netflix will try to do things like spread out the load by increasing buffer sizes in the players (clients). This is very helpful and efficient.
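
To put a rough number on the TCP effect described above, here is a sketch using the well-known Mathis et al. approximation, throughput ~ MSS / (RTT * sqrt(loss)); the segment size, RTT and loss rates are illustrative assumptions:

```python
# How packet loss caps a single TCP flow, per the Mathis et al.
# approximation: throughput ~= MSS / (RTT * sqrt(p)).
# All inputs are illustrative assumptions.
from math import sqrt

MSS_BYTES = 1460    # typical Ethernet-path maximum segment size
RTT_S = 0.050       # assumed 50 ms round trip

for loss in (0.0001, 0.001, 0.01):
    throughput_mbps = (MSS_BYTES * 8) / (RTT_S * sqrt(loss)) / 1e6
    print(f"loss {loss:.2%}: ~{throughput_mbps:.1f} Mbit/s per flow")
```

At 50 ms and 1% loss a single flow lands near 2 Mbit/s, which is why congested paths hurt TCP-delivered video so visibly.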


But I don't see how this changes any "fundamentals" of the Internet. A 2Mbps stream is basically peanuts. Obviously if you have 50M or 100M 2Mbps streams it adds up. As you know, all of these OTT solutions use unicast transport. What's obviously more efficient (which cable operators use for broadcast -- not for VOD) is IP multicast. There remain huge problems with multicast -- primarily managing it across ASNs. But even if we do see some magical transition to multicast, that does not "change internet fundamentals," because multicast has been around for a seriously long time. Most modern approaches use a "CDN," which is tech speak for "some other system for distributing the content in advance" and then serving it from closer to the edge. There are a lot of ways to do this. But that's a discussion in itself -- and it still doesn't change any Internet fundamentals, because we've been doing CDNs and distributed caching of various kinds for 15 years now (and we were doing streaming video on the Internet before it was called the Internet, in the late 1980s).
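
To make the unicast-versus-multicast arithmetic concrete, here is a quick sketch; the audience sizes are illustrative assumptions:

```python
# Unicast sends one copy per viewer; multicast sends one copy per channel
# regardless of audience size. Viewer counts are illustrative assumptions.

STREAM_MBPS = 2  # per-viewer stream rate

for viewers in (1_000_000, 50_000_000, 100_000_000):
    unicast_tbps = viewers * STREAM_MBPS / 1e6
    print(f"{viewers:>11,} viewers: ~{unicast_tbps:,.0f} Tbit/s of unicast, "
          f"vs. {STREAM_MBPS} Mbit/s per multicast channel")
```
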
brookseven, User Rank: Light Sabre
7/10/2014 | 5:28:57 PM
Distance
Carol,

Distance is probably not a great word to use here.

Packet switching worked because it was designed for transactional-style interactions (web, email).  Packet switching existed well before X.25; it came out of the world of mainframe processors.  The vast bulk of traffic in those days came from airlines, banks, financial services and insurance companies.  Airline reservations was a classic application of these architectures.

The bandwidth that has been built into the Internet is based around that model.  Ask a Tier 1 ISP this question: "How many bits per second do you have allocated per user?"  Then take that 2Mb/s number and compare it to your answer.  The local loop side is not the problem... it's across the peering points and in the core.

Video is not a transactional model.  It is streaming -- like voice.  But voice is so low-bandwidth that nobody cares (just like IoT -- hey, I want 1 kb/s forever, whoopdee do).  Now imagine that every Comcast sub right now wants 10Mb/s.  That is like 600 x 10^12 bits per second.  Even if they each want only 100Kb/s, it's a big number.  See the problem?

Now with video, the problem has been unicast video (IPTV generally uses IGMP as a "fix").  There is a huge gain from locating popular video close to the user.  But now let's assume that EVERY Comcast customer wants a 1Mb/s unique video stream (I am choosing 1 and 10 for easy math).  Here in Santa Rosa, we have about 50K homes, and let's give Comcast 50% market share (I like easy numbers).  So 25K video streams can't come out of one server... now it's a server farm.  Broadcast video is looking sweet at this point, right?  Another way around it is to trickle content down to subs and have them store it locally (imagine an STB with 50TB of storage).
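
The easy-math scenario works out as follows; this sketch simply transcribes the numbers given above:

```python
# The Santa Rosa back-of-envelope: unique unicast streams to every sub.
# Inputs are the round numbers chosen in the comment.

HOMES = 50_000
MARKET_SHARE = 0.5
STREAM_MBPS = 1          # per-sub unique video stream

subs = int(HOMES * MARKET_SHARE)             # 25,000 subscribers
aggregate_gbps = subs * STREAM_MBPS / 1000   # 25 Gbit/s of unique streams

print(f"{subs:,} simultaneous {STREAM_MBPS} Mbit/s unicast streams")
print(f"= {aggregate_gbps:.0f} Gbit/s that must be served locally")
```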

That is why I argue the local loop is NEVER EVER the problem.  Filling ALL the local loops ALL the time is the problem.  The whole structure was never built for it.  Why do you think we can get DS3 bandwidth at way less, per bit, than the cost of a T-1?

seven
