As expected, consultant engineer Peter Sevcik's report placing the blame for the great Netflix slowdown of 2014 on Netflix itself has proven a bit controversial with Light Reading readers, who have enjoyed taking their potshots at him (and me) since the story came out on Tuesday. (See Netflix's Problem Is Its Transit Network – Report.)
At particular issue is Sevcik's claim that Netflix Inc. (Nasdaq: NFLX) traffic needs only 2 Mbit/s of last-mile bandwidth, on average, to stream video into the home. He derives that figure from measuring the actual performance of Netflix movies over five representative ISPs: a DSL service provider, two cable companies and two fiber providers, one of them Google Fiber Inc. Sevcik, who as president of NetForecast Inc. does engineering consulting for ISPs, also used a variety of movie players in his research, including a slow PC and very high-end connected TVs.
What he learned was that while the initial phase of the download -- filling the video player's buffer -- could be affected by slow access speeds, once the movie started playing, bandwidth consumption dropped to the levels Sevcik cites -- 2 Mbit/s and below.
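Some back-of-the-envelope arithmetic illustrates why the two phases differ so much. The numbers below are illustrative assumptions, not figures from Sevcik's report: during the buffer fill, the player pulls data as fast as the link allows, but in steady state it only needs to replace what it plays, i.e., the video's encoded bitrate.

```python
# Illustrative sketch: why steady-state streaming needs far less bandwidth
# than the initial buffer fill. All numbers are assumptions for illustration.

VIDEO_BITRATE_MBPS = 2.0   # steady-state rate in line with the report's figure
BUFFER_SECONDS = 30        # hypothetical player buffer target
LINK_MBPS = 25.0           # hypothetical last-mile link speed

# Buffer fill: the player grabs BUFFER_SECONDS of video as fast as it can.
buffer_megabits = VIDEO_BITRATE_MBPS * BUFFER_SECONDS
fill_time_s = buffer_megabits / LINK_MBPS

# Steady state: the player only needs the video's encoded bitrate.
steady_state_mbps = VIDEO_BITRATE_MBPS

print(f"Buffer fill: {buffer_megabits:.0f} Mbit, bursting at {LINK_MBPS} "
      f"Mbit/s, done in {fill_time_s:.1f} s")
print(f"Steady state: {steady_state_mbps} Mbit/s, regardless of link speed")
```

The burst phase is where a slow access link shows up as a delayed start; after that, any link faster than the encode rate looks the same to the viewer.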
Sevcik raises a bigger issue, however, in viewing video distribution in the broader context of the Internet's evolution as a delivery mechanism for a growing variety of complex material. Given his involvement in the early days of Internet development, I think these thoughts are worth sharing.
So, setting aside for the moment the debate over who is slowing down Netflix video traffic, consider instead the bigger lessons from a discussion of how video is delivered over the Internet.
As Sevcik notes, one of the attractions of the Internet at the outset – or, for that matter, of packet switching in general, going back to the X.25 days – was that it was such an efficient way to move data around that networking became distance-insensitive.
"Voice-over-IP hit the phone companies hard because suddenly they couldn't charge for the long-distance part of the call, especially for international calls," Sevcik comments.
That began to change a bit as the Internet took off and people downloaded the same content in very high volumes, making it inefficient for that content to be stored at a great distance. That gave birth to Akamai Technologies Inc. (Nasdaq: AKAM), Limelight Networks Inc. (Nasdaq: LLNW) and others, which developed content delivery networks (CDNs) that could identify and cache popular content much closer to the end user than before. Netflix was an early user of CDNs, namely Akamai's. Telcos even developed their own CDNs for caching over-the-top video. (See Telco CDNs Make OTT Tolerable.)
But as Netflix traffic has grown to represent about one-third of bandwidth in use at a given time -- note that doesn't mean one-third of all Internet capacity, just the bandwidth in use -- as measured by Sandvine Inc.'s regular reports, even better solutions are needed.
"Once you go to video and a lot of it, once you are at one-third of bandwidth being consumed in the evening, then you need a different strategy," he says.
Netflix has, in fact, pursued such a strategy, moving first to its own Open Connect CDN, then offering to embed Netflix CDN servers directly in ISPs' data centers, and then directly connecting its Open Connect CDN service to a broadband ISP, namely Comcast. (See Comcast-Netflix Peering Deal: A Game-Changer? and Netflix Touts New Content Delivery Network.)
"Two of the three strategies are used to circumvent the business problem, particularly Open Connect, so the distance to the users is so much shorter," Sevcik states. "This is a wake-up call to everyone. I have not given much thought to whether distance will make a difference. It does make a difference in response time and responsiveness, if you want to see web pages paint quickly and render quickly."
And that difference could become increasingly important, as content continues to get richer and as more of it moves into the cloud, where we will all expect to access apps and data on demand from mobile devices as well as those connected to fixed broadband networks.
"We are seeing a shift in the economics here, the economic question of the distance is now coming to the foreground which had not been here before," Sevcik notes. "This will impact any applications that want to take true advantage of extremely high speed circuits."
If there is a perceived problem involving flows that are essentially 2 Mbit/s, as reflected in Sevcik's performance analysis, then major issues lie ahead.
"What happens if you really do start wearing phones at 1 Gig a second and start building apps that want to use 1 Gig and you can't fill a 1-Gig pipe at any reasonable distance away from the destination?" he asks. "All services will have to be localized, which is kind of backwards from where the Internet started."
And if everything does move to the cloud, that cloud will be living right next door -- or, in cloud terms, hovering everywhere.
That's a very different model from how things are currently seen.
— Carol Wilson, Editor-at-Large, Light Reading