How cable views the edge

In this second segment of a four-part series, we look at more key results from a new Heavy Reading study about the cable industry's edge computing views and approach.

Alan Breznick, Cable/Video Practice Leader, Light Reading

November 19, 2021


As noted in the first part of this series earlier this week, few things are hotter in today's tech world than edge computing. Companies of all types are seeking to bring powerful computing functionality as close as possible to users at the edge of their service delivery networks. That's not surprising because edge computing offers huge potential to transform the entire underlying structure of the Internet—from massive, centralized data centers to a much more distributed storage and computing ecosystem.

Edge computing aims to achieve all this by placing the huge processing power of computers and the Internet right where the decisions need to be made in real time or near-real time. The technology aims to bring intelligence all the way to the devices at the network edge instantly, rather than spending precious milliseconds on round trips to the cloud or a data center.

As a result, cable operators, tech vendors and CableLabs are all exploring edge compute's potential as they seek to develop and deliver next-gen, low latency connectivity services such as augmented reality/virtual reality (AR/VR), cloud gaming, holographic video, light-field displays, smart homes, 5G URLLC applications, autonomous vehicles, healthcare sensors, surveillance, smart cities and facial recognition.

To learn more, Heavy Reading teamed up with four leading tech vendors to conduct a comprehensive study of cable operators' attitudes and efforts surrounding edge computing. In this series of sponsored blog posts, we present the results of that study, discuss the implications and draw conclusions about cable's edge computing efforts.

Ranking investments

One key survey question focused on how operators ranked their company's technology/resource investments in order of priority. The results indicated a close split between the two top priorities—programmable network infrastructure and network automation.

As shown below, network automation ranked as the highest priority among the largest group of survey participants, with more than one third (36%) rating it their top choice. But programmable network infrastructure captured the most combined first-place and second-place votes (65%), giving it a slightly higher overall score. Thus, both appear to be high priorities for network operators, easily outdistancing the other two choices.

These results imply that cable operators are seeking nimble operating systems they can adjust easily and quickly to support new functions. Providers want the flexibility to add, subtract or modify third-party software to bolster their edge compute efforts.

Figure 1: Ranking investments

Greatest investment potential

Given these priority rankings, we asked participants which area offers the greatest potential for enabling the move to edge compute. As depicted below, network infrastructure emerged as the leading choice here, earning nearly one half (46%) of the respondents' votes. More than one third (35%) selected automation of network tasks, making that choice a strong second. Neither of the other two choices came close to matching the top two.

These results point to the importance of robust system orchestration for edge compute deployments. Operators will need to orchestrate their network functions much more effectively and efficiently to make the most of the technology's potential.

What's also clear is that the move to edge compute depends upon a programmable infrastructure. A programmable network infrastructure is a departure from the way networks were built in the past, when the network's value lay in the path or route the data took, its resiliency and the like. In edge compute, the value of the network is in the data itself and the ease of applying software to understand and monetize it. This calls for building infrastructure that's more nimble, open and secure.

The implications go all the way to how network operating systems are built: Are they bulky, monolithic and closed, or are they lean, containerized and open? Can they attach VNFs leveraging data along the software stack? How do they report telemetry to a multi-vendor outside world? How do they behave under the guidance of orchestrators?

Figure 2: Greatest investment potential

Preferred hyperscaler

As operators carry out their multi-access edge computing (MEC) strategies, which of the major hyperscalers would they prefer to work with? That was another key question in the survey.

Not too surprisingly, as shown in the chart below, Amazon Web Services (AWS) emerged as the top choice, attracting votes from about one third (34%) of participants. Microsoft Azure came in a strong second, garnering support from 28% of respondents. Google Cloud captured third place, registering votes from 18% of respondents, while 16% said their company would prefer to leverage its own infrastructure.

These results jibe with market share figures published by various research houses. In those rankings, AWS generally stands out as the leading hyperscaler, followed by Microsoft Azure and Google Cloud.

These findings also indicate a strong desire among operators for pre-set, pre-baked functions from the hyperscalers to support edge compute services. That's a big part of why the major hyperscalers enjoy such a strong appeal in the first place.

Further, these findings confirm that operators and enterprises already have a relationship with two or more of these hyperscalers and other cloud services. This will undoubtedly continue to be the case as operators move some workloads to the edge.

Ultimately, the challenge will be to create elegant hand-offs into, out of and between workloads across multiple environments. For this, providers will need a robust multi-cloud solution that includes virtualization, management and an adaptive network to abstract the differences between cloud services and create the right cloud on- and off-ramps, keeping systems flowing as needed.

Figure 3: Preferred hyperscaler

We will present more key results from the study in the next two posts. For a free copy of the Heavy Reading white paper detailing all the study results, please click here to register.

This blog is sponsored by Ciena.

— Alan Breznick, Cable/Video Practice Leader, Light Reading

About the Author(s)

Alan Breznick

Cable/Video Practice Leader, Light Reading

Alan Breznick is a business editor and research analyst who has tracked the cable, broadband and video markets like an over-bred bloodhound for more than 20 years.

As a senior analyst at Light Reading's research arm, Heavy Reading, for six years, Alan authored numerous reports, columns, white papers and case studies, moderated dozens of webinars, and organized and hosted more than 15 -- count 'em -- regional conferences on cable, broadband and IPTV technology topics. And all this while maintaining a summer job as an ostrich wrangler.

Before that, he was the founding editor of Light Reading Cable, transforming a monthly newsletter into a daily website. Prior to joining Light Reading, Alan was a broadband analyst for Kinetic Strategies and a contributing analyst for One Touch Intelligence.

He is based in the Toronto area, though is New York born and bred. Just ask, and he will take you on a power-walking tour of Manhattan, pointing out the tourist hotspots and the places that make up his personal timeline: The bench where he smoked his first pipe; the alley where he won his first fist fight. That kind of thing.

