Five questions we need to answer about the edge

Latency is a major contributor to application performance issues, but several other factors could make or break a telco business case for edge computing.

Roz Roseboro, Consulting Analyst, Light Reading

August 18, 2020

What I enjoyed most about being an analyst was that it let me indulge my natural instinct to ask questions. The most important part of my job was to ask questions about everything, of everyone, myself included. While I (unknowingly) asked the standard 5W and H questions rookie journalists do, the one I was always most interested in was, "Why should anyone care?" When it comes to the edge, the answers are as multi-faceted as the issue itself.

I got to thinking about this after reading Mike Dano's article about how edge computing might help address performance issues with videoconferencing. A few things jumped out at me. The first was that many people assume latency is the main or sole reason for application performance issues. I won't claim to understand all of the underlying technical issues, but I suspect that Dean Bubley and others are right to say that it is more complicated than just latency. Software architecture, underlying hardware and more can impact the end-user experience. And if you're talking about anything non-local, there are many more places where latency can be introduced that will be unaffected by the location of the application itself. I was intrigued by the hierarchy of development needs from Cloudflare, which states quite clearly that speed is not the only criterion and certainly not the most important one – to developers, at least.
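
To make that point concrete, here's a minimal sketch of how an end-to-end response-time budget decomposes. Every figure is an invented assumption for illustration, not a measurement; the point is simply that network propagation, the one term edge placement can directly shrink, is a minority of the total.

    # Illustrative decomposition of end-to-end delay for a videoconferencing
    # frame. All values are hypothetical assumptions, not measurements.
    budget_ms = {
        "network_propagation": 20,   # the term edge placement can shrink
        "encode_decode": 35,         # codec work on the endpoints
        "server_processing": 25,     # mixing/transcoding in the service
        "jitter_buffer": 40,         # smoothing out packet arrival variance
        "render_and_display": 15,    # client-side rendering pipeline
    }

    total = sum(budget_ms.values())
    edge_addressable = budget_ms["network_propagation"]
    print(f"Total: {total} ms; edge placement directly attacks only "
          f"{edge_addressable} ms ({edge_addressable / total:.0%} of the budget)")

Under these assumed numbers, moving the application closer shaves roughly 15% off the delay a user actually perceives, which is why software architecture and hardware matter at least as much.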

I also noted the assertion that data sovereignty might be the "killer app" for the edge. This idea did come up a few times in my research, but it's not nearly as sexy as driverless cars, so it didn't get much play. Something not so surprising: Vendors are bullish on the edge. Of course they are. It (in theory) dramatically increases their addressable market. But a question: How close is close enough?

I suppose I shouldn't have been surprised that there are still so many unanswered or unsettled questions. After all, "edge" is a location (or should I say, multiple locations), an architecture, a business model, an attribute – with the definition dependent upon who you are and what you're trying to do. And like any emerging technology, there are fundamental questions about the edge that need to be answered:

What are the business drivers?
By this I mean, what will operators, enterprises, consumers be able to do that they couldn't do before? Perhaps more importantly, what will operators, enterprises, consumers be able to do better, more quickly, more cheaply than they could before? Only after these questions are answered will we see deployments of any scale. Right now it seems like a land grab to position for the future since very few services need real-time processing. The $64 million question: Who will pay for those services?

What are the operational implications?
I'm thinking here about management and the need for automation. Why? Locations with no humans present, for one. The sheer scale, for two. There is no way to manually provision and maintain thousands or millions of endpoints without opex going through the roof.
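
For a sense of what that automation has to look like, here is a minimal, vendor-neutral sketch. The apply_config function is a hypothetical stand-in for whatever management API a real edge platform exposes; the pattern – idempotent desired-state updates fanned out concurrently – is the part that matters.

    # Why edge operations force automation: with thousands of unmanned sites,
    # provisioning has to be idempotent and concurrent, never manual.
    from concurrent.futures import ThreadPoolExecutor

    DESIRED_STATE = {"os_version": "1.4.2", "telemetry": "enabled"}

    def apply_config(site_id: str, desired: dict) -> str:
        # Hypothetical stand-in for a real platform's management API.
        # Idempotent by design: re-applying the same desired state is a no-op,
        # so a retry after a network blip cannot corrupt a site.
        return f"{site_id}: converged to {desired}"

    def reconcile(sites: list) -> None:
        # Fan out across sites in parallel; doing this serially by hand for
        # 10,000 locations is exactly the opex problem described above.
        with ThreadPoolExecutor(max_workers=64) as pool:
            for result in pool.map(lambda s: apply_config(s, DESIRED_STATE), sites):
                print(result)

    # Three sites for the demo; the same loop handles ten thousand.
    reconcile([f"edge-site-{n:05d}" for n in range(3)])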

What are the technology knock-on effects?
Of course, moving resources outside of centralized data centers drives new requirements for infrastructure. Well, perhaps not "new": the "ilities" (reliability, availability, scalability, security, flexibility, manageability and probably a few others that are escaping me) are always relevant in a telco environment. It's just that the parameters and priorities change when talking about the edge. And I can't write about infrastructure without talking about "cloud-native," which offers the promise of a single platform for all services, workload portability and elastic scaling to respond to changing demand (again, intelligent automation will be critical).
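
To illustrate the elastic-scaling piece, here's a toy control loop in the spirit of what cloud-native platforms automate. It is not any particular product's autoscaler; the utilization target and replica math are illustrative assumptions.

    # Toy proportional autoscaler: size the replica pool so that average
    # utilization lands near a target. Thresholds are illustrative only.
    def desired_replicas(current: int, utilization: float,
                         target: float = 0.6) -> int:
        # Never scale below one replica, even when the service is idle.
        return max(1, round(current * utilization / target))

    # Four replicas running hot at 90% utilization -> scale out to 6.
    print(desired_replicas(4, 0.90))
    # Four replicas idling at 15% utilization -> scale in to 1.
    print(desired_replicas(4, 0.15))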

What is the business model?
Is connectivity to applications enough, or is it necessary to provide platforms as well? The answers will likely differ for telcos and hyperscale public cloud providers. Already we're seeing a few things from the telco side. Verizon and Vodafone, among others, are segmenting their offers – a combination of network and edge computing assets – to account for the different requirements of different applications, which makes all the sense in the world. Others are playing middleman between enterprises and edge computing resources. Even though Ericsson shut down Edge Gravity, it still sees an opportunity for service providers, but one now defined by "partnerships with hyperscalers and systems that can link everything together." Ericsson "would facilitate how the telco exposes its network assets and integrates with those hyperscaler partners." DT's startup MobiledgeX is focusing on providing a development environment so a business can "manage their digital operation seamlessly across distributed locations independent of underlying network ownership or systems." I'm sure other combinations, collaborations and partnerships will emerge as the evolution at the edge marches on.

Who will deploy edge infrastructure? And when?
I touched upon this in my last column. Both telcos and hyperscale cloud providers claim the edge. Newer companies like EdgeMicro, EdgeConneX, Cloudflare and Vapor IO do too. Can the industry support so many? Possibly. At this early stage, though, I think it's better to have too many than too few.

A final thought to consider: More traffic kept locally means lower transport costs. Again, it's not as sexy as uber-hyped VR services, but potentially more significant for a telco's bottom line.
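
A back-of-envelope sketch of that cost argument, where every number is an invented assumption rather than market data, looks like this:

    # Hypothetical transport-cost savings from terminating traffic locally.
    monthly_tb = 5_000            # assumed metro-area traffic volume
    backhaul_cost_per_tb = 4.0    # assumed $ per TB hauled to a core DC
    served_locally = 0.35         # assumed share terminable at the edge

    baseline = monthly_tb * backhaul_cost_per_tb
    savings = baseline * served_locally
    print(f"Baseline backhaul: ${baseline:,.0f}/month; keeping "
          f"{served_locally:.0%} local saves ${savings:,.0f}/month")

Even with modest assumptions, the savings recur every month and require no new revenue stream to materialize. And a final, intentionally provocative question: Could it be that the edge will be more important as a cost-savings mechanism than a revenue-generating one?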

Roz Roseboro, Consulting Analyst, Light Reading. Roz is a former Heavy Reading analyst who covered the telecom market for nearly 20 years. She's currently a Graduate Teaching Assistant at Northern Michigan University.

About the Author(s)

Roz Roseboro

Consulting Analyst, Light Reading

Roz Roseboro has more than 20 years' experience in market research, marketing and product management. Her research focuses on how innovation and change are impacting the compute, network and storage infrastructure domains within the data centers of telecom operators. She monitors trends such as how open source is impacting the development process for telecom, and how telco data centers are transforming to support SDN, NFV and cloud. Roz joined Heavy Reading following eight years at OSS Observer and Analysys Mason, where she most recently managed its Middle East and Africa regional program, and prior to that, its Infrastructure Solutions and Communications Service Provider programs. She spent five years at RHK, where she ran the Switching and Routing and Business Communication Services programs. Prior to becoming an analyst, she worked at Motorola on IT product development and radio and mobile phone product management.

Roz holds a BA in English from the University of Massachusetts, Amherst, and an MBA in marketing, management, and international business from the J.L. Kellogg Graduate School of Management at Northwestern University. She is based in Chicago. 
