You Can't Fix OTT Streaming Problems If You Can't See Them

For today's Internet subscribers, "slow" is the new "down" when it comes to the streaming speed and quality of over-the-top (OTT) content. This isn't limited to fixed-line broadband; the same applies to mobile applications. Subscribers demand flawless video streaming and online gaming sessions, or they'll leave for the competition.

At first glance, the problem seems simple enough: network congestion degrades quality, so operators should add bandwidth and caches. Doing that without empirical data, however, is expensive and ineffective. In reality, service providers lack the data they need to pinpoint underperforming areas of the network. They know there is a problem because their subscribers have complained, but adding capacity is more complicated than it used to be. The network is no longer a single funnel that an operator can monitor and widen to let more traffic through. In practice, it contains a multitude of such channels, all of which must be expanded and provisioned efficiently to accommodate the growing appetite for data and keep customers satisfied.

Most Internet service providers (ISPs) have experimented with deep packet inspection (DPI) in the past and tried to repurpose the technology to infer health diagnostics across their entire network. The problem is that DPI hardware cannot keep up with accelerating speeds and feeds, and is therefore too expensive to deploy network-wide, resulting in very spotty visibility. The classic DPI approach dissects every packet crossing a port to see what's inside. But the post-Snowden shift toward encrypted traffic has made packet inspection not only expensive but also far less revealing than it once was.

So if the DPI approach of opening up every packet everywhere doesn't scale, how does the industry solve this key problem on the journey to customer satisfaction? How can service providers instantly recognize cloud applications and services -- to see exactly how they flow to and through networks -- so they can understand how those applications are performing at any given place in real time?


The answer is to use a more holistic approach to solving the problem by taking advantage of the information that is already available in an operator's Internet infrastructure. Simply having all of this data in one place, instead of it being collected across multiple different silos, is step one. But the real key is adding a data source that identifies every traffic flow on the global Internet.

Think of having a "caller ID" that identifies every application and service in the cloud without digging into the packet. With this multi-dimensional insight, operators can see when an IP flow reaches their network without needing DPI to tell them what application or service it is, how it landed on their peering router or how it traverses their network. This visibility is especially helpful as certain areas of the network become more strained and operators need data to make educated decisions about where to build out next.
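As a rough illustration of the "caller ID" idea, a flow record can be labeled by matching its destination address against a table of known CDN and cloud prefixes, with no payload inspection at all. The prefix table and service names below are invented for the sketch; a production system would rely on a continuously updated global feed.

```python
import ipaddress

# Hypothetical prefix-to-service table; real deployments use a curated,
# continuously refreshed mapping of CDN and cloud prefixes.
SERVICE_PREFIXES = {
    "198.51.100.0/24": "Hulu",
    "203.0.113.0/24": "Netflix",
    "192.0.2.0/24": "YouTube",
}

_NETS = [(ipaddress.ip_network(p), svc) for p, svc in SERVICE_PREFIXES.items()]

def classify_flow(dst_ip: str) -> str:
    """Label a flow by its destination prefix alone -- no packet payloads."""
    addr = ipaddress.ip_address(dst_ip)
    for net, svc in _NETS:
        if addr in net:
            return svc
    return "unknown"

print(classify_flow("203.0.113.7"))  # Netflix
```

Because the lookup touches only flow metadata (addresses already present in NetFlow/IPFIX records), it works just as well on fully encrypted traffic.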

Multi-dimensional data can point out, for instance, that user demand for Hulu is growing at a certain CMTS in Fort Lauderdale while all other application demand has remained flat. Knowing that the traffic is Hulu, and understanding the CDNs delivering it, allows operators to add capacity exactly where it's needed, cost-effectively delivering higher streaming performance and satisfying customer demand.
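A minimal sketch of that kind of trend detection: given per-application demand samples at one site, compare recent demand to a baseline and flag only the applications that are actually growing. The numbers and threshold here are invented for illustration.

```python
def growth_ratio(samples):
    """Ratio of recent average demand to baseline average for one series."""
    half = len(samples) // 2
    baseline = sum(samples[:half]) / half
    recent = sum(samples[half:]) / (len(samples) - half)
    return recent / baseline if baseline else float("inf")

# Hypothetical weekly byte counts (GB) per application at one CMTS.
demand = {
    "Hulu":    [100, 105, 140, 160],
    "Netflix": [200, 198, 202, 201],
    "Twitch":  [50, 52, 49, 51],
}

# Flag applications whose recent demand exceeds baseline by more than 20%.
growing = {app: round(growth_ratio(s), 2)
           for app, s in demand.items() if growth_ratio(s) > 1.2}
print(growing)  # {'Hulu': 1.46}
```

Only Hulu crosses the threshold, which is exactly the signal that tells an operator where targeted capacity, rather than blanket upgrades, will pay off.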

Armed with this data, network operators have a single pane of glass through which they can immediately see all traffic across the network, where it is and its impact, even when it is encrypted, a level of visibility that's imperative for accurate performance management and quick identification of issues. Operators can use this visibility to spot capacity constraints and pinpoint exactly which traffic is contributing to congestion at any point in the network before customers even have a chance to complain. For instance, an operator can investigate an over-utilized or growing port to uncover exactly which cloud applications and services are driving the growth. So, if one interface is handling traffic from Netflix, YouTube and Twitch while the next interface sits under-utilized, streaming issues can be resolved immediately for all three cloud services by balancing the load, for example by shifting YouTube traffic to the alternate path.
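The rebalancing decision described above can be sketched as a simple search: given per-service loads on a congested link, pick the smallest service whose removal brings that link under its comfort ceiling and still fits on the alternate link. The function name, loads and capacities are all hypothetical.

```python
def pick_shift(services, capacity, alt_load, alt_capacity):
    """Choose one service to move off a congested link.

    services: {name: load in Gbps on the congested link}. Returns the
    smallest service whose removal brings the link under `capacity`
    while still fitting on the alternate link, or None if none fits.
    """
    total = sum(services.values())
    for name, load in sorted(services.items(), key=lambda kv: kv[1]):
        if total - load <= capacity and alt_load + load <= alt_capacity:
            return name
    return None

# Hypothetical loads (Gbps): a hot link carrying 95G against a 70G
# comfort ceiling, with a lightly loaded alternate path.
congested = {"Netflix": 45, "YouTube": 30, "Twitch": 20}
print(pick_shift(congested, capacity=70, alt_load=20, alt_capacity=90))  # YouTube
```

With these numbers, moving Twitch alone would not relieve the link, so YouTube is selected, mirroring the article's example of shifting YouTube traffic to an alternate path.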

Modern technology that lets network operators correlate these disparate types of information is the key to gaining real insight into the management and growth of their networks, with performance as the focal point.

— Mike Hollyman, Head of Consulting Engineering, Nokia Deepfield

Phil_Britt 12/26/2017 | 4:58:41 PM
Re: Question But there's significantly more traffic now than before 2015. So the results could be different.
brooks7 12/26/2017 | 4:55:17 PM
Re: Question Well, they weren't a problem before 2015 and since we essentially returned to old rules...should be no problem.


Phil_Britt 12/26/2017 | 4:48:45 PM
Re: Question It will be interesting to see if the new ruling on Net neutrality will result in even more noticeable slowdowns with many services, or if those fears are unfounded.
danielcawrey 12/9/2017 | 11:00:48 PM
Re: Question It's true: Slow is the new down. 

I think Google and Facebook were probably the first to realize this. Now, everyone's got to get to their speed! 
brooks7 12/8/2017 | 1:38:36 PM

So, just so I am clear...this is really talking about trends on the network.  Not about specific stream problems.  So a source or destination type of capacity management?

