Network operators are "uniquely positioned" to convert their central offices and cell towers into distributed computing facilities that support a wide range of third-party applications, including self-driving cars, industrial robotics and augmented/virtual reality, a top AT&T executive says. And this could be the move that gets telecom operators back into the cloud computing game big time.
Andre Fuetsch, president of AT&T Labs and chief technology officer, isn't saying every one of AT&T Inc. (NYSE: T)'s 65,000 cell towers and roughly 5,000 central offices (COs) will become edge computing sites. But that footprint, unique to network operators, represents an advantage in creating distributed cloud computing for high-intensity applications like AR/VR and self-driving cars, which will require far more compute power than devices can deliver.
And as Fuetsch also notes, AT&T's transition to 5G includes a new radio access network architecture "where we continue to disaggregate and open up interfaces -- I think there is more and more opportunity to basically take advantage of multi-use resources to do many other things other than just provide a network service."
Deploying white box compute power in a distributed cloud and making that available to third parties for their applications could be a reset for telecom operators, whose walled-garden approach to mobile apps more than a decade ago left the door open for Apple and others to capture the commercial value in an app store approach. It would also be a move that web-scale cloud companies such as Amazon Web Services and Google can't easily duplicate.
"This is a common theme I have about making the network more relevant -- giving this optionality to developers so they have another place to build solutions," Fuetsch comments. "I think this is really powerful and I believe we are uniquely positioned to offer this."
As AT&T pointed out in a press release last week, the next generation of apps, including self-driving cars and AR/VR, will require massive compute power, exceeding what even today's devices can deliver. But those apps will also be latency sensitive, making connections to distant data centers unfeasible and setting up the need for distributed cloud computing sites. (See For AT&T Mobile 'Edge,' Clouds Connect the Car & More to 5G and AT&T Promises Edge Computing Push.)
"Developers really only have two options, they can develop on the device or in the cloud, which typically means in the data center, hundreds if not 1000-plus miles away," Fuetsch says. "So if they want to do anything from a real-time standpoint they are basically constrained to what they have locally. For these high-intensity applications like VR and AR, you are dealing with form factors -- you don't want to strap on a backpack to have a VR experience."
Having distributed compute power at a nearby CO or cell tower is a much better option, he notes. And providing that data center capability in those locations is already happening: AT&T and other mobile operators are building compute power into the next-generation wireless architecture to support 5G, and wireline operators are looking to initiatives such as CORD (Central Office Re-architected as a Datacenter) to convert their existing COs as well.
"Since we are going to need this anyway for 5G, and I think 5G is the vehicle to make this happen, it is only natural to say, could we make available some of these cycles for some of these interesting applications or even incrementally build out more capacity to serve some of this new demand," Fuetsch says. "I think it totally changes the game."
AT&T has already issued a technical survey letter to vendors for a Domain 2.0 multi-access node that will be access-media agnostic as part of its shift to software-defined access. If the same access point can support any wireline or wireless technology, there are additional opportunities to spread compute power cost-effectively to where it needs to be as part of that process, Fuetsch says.
"You can't build big honking data centers at the bottom of every cell tower, it is not going to be cost-efficient," he comments. Based on vendor feedback that AT&T is already getting, however, it plans to "create the requirements we need that ultimately we could build into a white box scenario," or use an existing vendor solution if one exists, he says.
All of this is not without its challenges. The business model, for one thing, is still a work in progress, Fuetsch admits. Technically, the two biggest challenges will be getting the architecture right and selecting the right locations and use cases to get started.
"We need to get the architecture right so it can be exposed in a usable, safe, secure way so that it doesn't compromise the base network functions that we need," he says. "I don't think it is a Herculean challenge, it is a big one because this is relatively new. We haven't really exposed this before."
Some of the use cases seem obvious, like self-driving cars, which are estimated to generate multiple terabytes of data per hour from cameras and sensors. But that may not mean that every road will require distributed computing power, Fuetsch notes. That might be needed first in dense metro areas and only later in wide-open spaces.
"Those two are probably the biggest challenges now as we work this out," Fuetsch says. "We are really excited about it. There are a lot of other interesting enterprise use cases as well. We are also taking a look at that to see where some good starting points are to really get this going and have it take off."
— Carol Wilson, Editor-at-Large, Light Reading