The Edge

All Major Tower Companies Are Sniffing Around Edge Computing

The evolution of the edge
As Packet's Tarazi explained, edge computing has already caught the attention of the company's existing enterprise customers, which he said are looking at edge computing deployments in part as a way to avoid the need to purchase expensive networking switches and other equipment to transmit corporate data to a distant data center.

The enterprise "has to be connected to the big cloud, but it makes no sense for anyone to pay for all the massive internet connectivity to take each [bit of data] across the country," he said. "It's not possible, for the volume of data and the speeds they need. Performance and cost demands that you anchor this thing to some kind of local compute."

Tarazi said enterprises are also evaluating edge computing options in cases where their existing enterprise servers are reaching the end of their life.

But in wireless, edge computing interest is centering on the intersection of the cloud and the low latency provided by 5G, he said.

"5G is not some general service that people buy," Tarazi said, explaining that he believes many 5G deployments will be localized affairs coupled with an element of local computing. "We're starting to talk to a lot of people who want to do their own 5G." And that's one reason why edge computing is sure to be a major talking point at the upcoming MWC show in Barcelona.

One example of this kind of deployment is at Chicago-based Rush University Medical Center, which recently announced a partnership with AT&T Inc. (NYSE: T) to deploy a 5G network there. The network will be used in part to more quickly and easily connect the hospital's various devices, according to a hospital spokesperson. It will also include an edge computing element so that data requests from doctors and other workers in the hospital don't have to travel to a distant data center where the data is stored.

Tarazi said that's exactly the scenario Packet is hoping to foster with Sprint, albeit in a 4G service. Sprint announced its Curiosity IoT platform late last year. The platform essentially allows businesses to manage their own IoT deployments by installing Sprint's virtualized core and Curiosity IoT platform onto a nearby data center run by Packet. Sprint explained the setup can reduce the distance between the device generating the data and the IoT application processing that data from 1,000 miles to roughly 50 miles.
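To put Sprint's figures in perspective, the round-trip fiber propagation delay scales linearly with distance. The 1,000-mile and 50-mile distances come from Sprint's announcement; the ~4.9 microseconds-per-kilometer figure is a standard approximation for light in single-mode fiber (not from the article), and real-world latency would be higher once switching and routing are included. A rough sketch:

```python
# Rough comparison of fiber propagation delay for the distances Sprint
# quoted: ~1,000 miles to a distant data center vs. ~50 miles to a
# nearby Packet edge site. 4.9 us/km is the standard one-way
# propagation-delay approximation for single-mode fiber.

FIBER_DELAY_US_PER_KM = 4.9   # one-way propagation delay in fiber
MILES_TO_KM = 1.609344

def round_trip_ms(miles: float) -> float:
    """Round-trip fiber propagation delay in milliseconds."""
    km = miles * MILES_TO_KM
    return 2 * km * FIBER_DELAY_US_PER_KM / 1000.0

print(f"1,000 mi round trip: {round_trip_ms(1000):.2f} ms")
print(f"   50 mi round trip: {round_trip_ms(50):.2f} ms")
```

On these numbers alone, moving the IoT application from 1,000 miles away to roughly 50 miles away cuts the propagation component of round-trip latency from about 16 ms to under 1 ms.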

"Sprint's unique IoT platform design called for a distributed core to bring the network to the data, rather than the data to the network," Zachary Smith, CEO at Packet, explained in a release announcing the news.

"This is the biggest edge compute project out there," Packet's Tarazi claimed.

Tarazi said Packet is seeing demand for similar solutions from other enterprise venues, from malls to stadiums. He declined to disclose the names of the companies Packet is working with, but said there is interest in local computing services that could, for example, analyze streams from video monitors obtained through a local wireless network. "We’re in the installation phase now," he said of Packet's initial batch of edge computing deployments.

Complexity at the edge
So where exactly does Packet fit inside the edge computing ecosystem? Tarazi explained that other companies like Vapor IO are building the miniature data centers -- providing the necessary fiber, power and cooling -- that will shelter the computing functions. Packet, meanwhile, is providing the computing, storage and networking functions inside that micro data center that enterprises and others might want to purchase. Specifically, Tarazi said Packet can supply the application programming interfaces, provisioning and connections to cloud services like Amazon Web Services Inc. that enterprise customers might need.

However, Tarazi acknowledged that this is a space fraught with complexity. For example, just this week The Linux Foundation announced the launch of LF Edge, an umbrella organization that combines the Akraino Edge Stack, EdgeX Foundry, and Open Glossary of Edge Computing into one "open, interoperable framework for edge computing independent of hardware, silicon, cloud, or operating system." Packet is one of a wide variety of players participating in the effort, alongside the likes of AT&T, Ericsson AB (Nasdaq: ERIC), Qualcomm Inc. (Nasdaq: QCOM) and the Automotive Edge Computing Consortium.

The reason there's such a jumble of standards and platforms in the edge computing landscape is that the big data centers built by the likes of Google and Amazon aren't designed for the kinds of quick, simple installations that a wide-scale embrace of edge computing will require.

"You need a model where someone else can install servers for you, because operationally it's not the same model where you have people in a data center, who can handle the servers and the install. These things would be located in a mall or someplace where you have no tech people," Tarazi explained.

Packet, American Tower and Sprint are not alone on the edge computing frontier, of course. AT&T, for example, has made plain its edge computing ambitions, first through a test last year of edge computing for VR gaming and more recently via a plan to test edge computing in enterprise scenarios this year. And Verizon Communications Inc. (NYSE: VZ), Deutsche Telekom AG (NYSE: DT) and other operators have made similar investments into the edge computing opportunity. (See AT&T Expands Edge Computing Testing to Enterprise Use Cases.)

Now, though, it remains to be seen whether tower companies, operators, startups or others in the value chain will be able to extract value from edge computing demand, and whether there's enough demand for edge computing to warrant the investment in the first place.

Mike Dano, Editorial Director, 5G & Mobile Strategies, Light Reading | @mikeddano

mhammett 2/4/2019 | 2:50:38 PM
Farce "Edge computing" regarding mobile is a farce any closer than the LTE core.
bullschuck 2/5/2019 | 11:54:54 AM
Re: Farce I mean, some places it might not make sense to have compute at the cell site. But what about a Disney park? Host their corp email server. Host a ton of guest services interaction. AR/VR/Interactive stuff. Heck, you could charge rent to travel sites that wanted to be hosted on your system. So on one end is the cell site in my suburb, and on the other side is a Disney park. There's got to be a line there somewhere in between where edge computing makes sense.
mhammett 2/5/2019 | 7:47:41 PM
Re: Farce Various technologies such as CoMP send data from multiple towers. That negates servers at the towers.

Also, the latency of going across a metro area is negligible for anyone but HFT.
wayne_du 2/6/2019 | 2:21:42 AM
You must deeply understand the service side first. Even when the infrastructure department and the service department are in the same company, there are still misunderstandings and mismatched or delayed requirements. If they are in different companies, it is worse. With service technologies changing so fast, how can the tower people keep up with those requirements? That will be a big challenge!


bullschuck 2/6/2019 | 10:59:32 AM
Re: Farce I disagree with your point about latency being unimportant. AR/VR, A2X, all these are going to need < 35 ms latency, maybe < 10 ms. I don't see that being done going all the way back to the LTE core. But maybe you and I might be defining LTE Core differently. I'm thinking MTSO level. You might be thinking CRAN hub level.

And like CoMP, I don't see folks putting that at the LTE Core level either. You didn't say as much, but maybe you're seeing it go to the CRAN hub level as well, which is the same place I see most edge computing.
mhammett 2/6/2019 | 11:09:39 AM
Re: Farce Chicago to Columbus, OH is 10 ms. I'm talking within a given metro. 10 ms should be no problem.

Anything within a couple hundred miles is going to be under 5 ms and thus make little to no difference.
brooks7 2/6/2019 | 2:12:20 PM
Re: Farce mhammet,

That may not be true.  The speed of light for propagation is about 1 msec roundtrip per 100 miles (and that is light in a vacuum).  If there are transceivers (for example a router) in the way then the propagation delay will be considerably longer, as packet systems generally use store and forward.  That does not take into account any coding delays in any transceivers.

Now, I don't think that any of this is a problem as I think that applications will need to deal with delays longer than that in any case.


Duh! 2/6/2019 | 10:00:23 PM
Re: Farce Seven,

Can't help geeking out here. Speed of light in fiber is (C0 * Group Index of Fiber), which is 0.68. So prop delay = 1/Cfiber =  4.9 μs/km. (sorry to switch units on you). At 100+ Gbit/s in the metro and core, prop delay is the dominant factor in total latency. It's fair to rough estimate 100 km (not 100 mi) = 1 ms round-trip latency.
Can't help geeking out here. Speed of light in fiber is c0 divided by the group index of the fiber, which works out to about 0.68 c0. So prop delay = 1/Cfiber = 4.9 μs/km (sorry to switch units on you). At 100+ Gbit/s in the metro and core, prop delay is the dominant factor in total latency. It's fair to rough-estimate 100 km (not 100 mi) = 1 ms round-trip latency.

There are industrial applications that won't work with that much RTT. 5G transport and a couple of electrical grid applications come to mind.
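The propagation figures traded in the thread above can be sanity-checked with a few lines. The group index of ~1.47 for single-mode fiber is a typical textbook value, not something stated in the article or comments:

```python
# Sanity check of the fiber propagation-delay figures from the thread:
# speed of light in fiber = c / group_index, giving roughly 0.68 c,
# i.e. about 4.9 microseconds per kilometer one way.

C_VACUUM_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
GROUP_INDEX = 1.47                # typical group index of single-mode fiber

v_fiber = C_VACUUM_KM_PER_S / GROUP_INDEX        # ~0.68 c
delay_us_per_km = 1e6 / v_fiber                  # one-way delay per km
rtt_100km_ms = 2 * 100 * delay_us_per_km / 1000  # round trip over 100 km

print(f"fiber speed:       {v_fiber / C_VACUUM_KM_PER_S:.2f} c")
print(f"one-way delay:     {delay_us_per_km:.1f} us/km")
print(f"100 km round trip: {rtt_100km_ms:.2f} ms")
```

This reproduces both quoted rules of thumb: roughly 0.68 c in fiber, and roughly 1 ms of round-trip latency per 100 km.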