DENVER -- The Big 5G Event -- In planning the transition to the edge, service providers need to first identify which workloads are best suited to making that journey, said Troy Saulnier, who works in strategy, architecture and network transformation for Bell Canada.
"From an edge perspective, the first thing we need to start thinking about as an operator is why do we need to go there," Saulnier said at a session at the Big 5G Event here Monday.
Ultra-low-latency applications, such as virtual reality, augmented reality and other applications requiring sub–5-msec response times, are suited to the edge, Saulnier said. And PON and 5G have hard distance limits that mandate where they're deployed; for example, PON needs to be within 20 kilometers of the end user.
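The link between a latency budget and physical distance can be checked with back-of-envelope arithmetic. A minimal Python sketch, assuming light propagates through optical fiber at roughly 200,000 km/s (refractive index ~1.5) and ignoring processing and queuing delays, which is not a figure from the talk:

```python
# Why sub-5-msec applications force compute toward the edge:
# fiber propagation alone eats into the latency budget.
# Assumes ~200,000 km/s signal speed in fiber (refractive index ~1.5).

SPEED_IN_FIBER_KM_PER_MS = 200.0  # 200,000 km/s expressed as km per millisecond

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay over fiber, ignoring processing and queuing."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

def max_one_way_km(budget_ms: float) -> float:
    """Farthest a server can sit if the entire budget went to propagation."""
    return budget_ms * SPEED_IN_FIBER_KM_PER_MS / 2

# A server 500 km away burns 5 ms on propagation alone -- the whole budget.
print(round_trip_ms(500))   # -> 5.0
# A 20 km PON reach adds only 0.2 ms, leaving headroom for processing.
print(round_trip_ms(20))    # -> 0.2
print(max_one_way_km(5.0))  # -> 500.0
```

In practice, encoding, queuing and compute time consume most of the budget, which pushes the usable server distance far below the 500 km propagation ceiling.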
For operators also in the television business, caching is a great use case for the edge, Saulnier said. That kind of caching improves customer experience; for example, when a blockbuster show such as Game of Thrones airs, the operator can reconfigure the network using software.
Another consideration is optimizing the cost structure for the transport network. Wireless operators who don't own the transport layer are incentivized to get as close to the customer as possible. Bell Canada fortunately does own the transport, but proximity is still an issue, Saulnier said.
Additionally, operators will want to move workloads to the edge to optimize use of assets in the field, Saulnier said.
AT&T has gone through multiple generations of cloud deployments, each with different edge architectures, said Tom Anschutz, AT&T distinguished member of the technical staff. The first generation, the AT&T Integrated Cloud (AIC), was more centralized. "You couldn't afford both the time and capital to make data centers out of every central office on the planet," Anschutz said.
The current generation of AT&T's cloud is the 5G Cloud, which uses a different software architecture than AIC. Where AIC used containers in virtual machines managed by OpenStack, the 5G cloud brings up OpenStack in containers as needed, relying on Airship open source technology for software delivery and lifecycle management.
Bell Canada runs virtual machines on Kubernetes, with no OpenStack whatsoever, Saulnier said. But Kubernetes needs work: its orchestration is robust, but networking and storage need improvement, particularly as applications such as video analytics place heavy demands on storage.
Conditions at the edge are different from those in central data centers. Cell sites use steel, containerized enclosures that are nothing like data centers. Many lack HVAC entirely and have no raised floors; they offer extremely reliable DC power, but not enough of it to support data-center-scale compute. Cell sites are also small, Anschutz said.
Low-density computing makes computing at the edge practical, Anschutz said. Low density doesn't require upgrades to infrastructure and cooling, and compute technology improves annually. "Low density gets the job done," Anschutz said.
Certification is among the challenges in moving to edge computing, Anschutz said. That includes both NEBS and AT&T's own similar internal certifications. Carriers can get around that problem by creating "carve-outs" where certification is not required: a facility might include an enclosure where everything outside the enclosure is NEBS certified, while everything inside uses a different architecture, he said.
Bell Canada makes use of carve-outs, but they go against the principle of reducing infrastructure costs and maximizing spending on compute. "It's a last resort," Saulnier said.
OpenEdge presents a more straightforward solution to the certification problem: it's an open hardware spec that is NEBS compliant, designed to plug into the central office without carve-outs, with power supplies and other needs accounted for, Anschutz said.
Another solution to the certification problem is to deploy servers in sufficient numbers. When deploying servers at scale, operators can forgo NEBS and use standard data center infrastructure. "We're going to move from the telco regime to the data center regime," Anschutz said.
Despite the obstacles, compute is moving to the edge, Anschutz noted. Evolving 5G and wireline access are becoming virtualized and moving to the edge, creating a virtuous cycle: once an operator has infrastructure in place to support communications, it has a business case to deploy additional functions at the edge.
Operators need both openness and community to make the edge work. If only a few operators make the transition, it won't go far; gaming software providers, for example, deploy for the lowest common denominator. To encourage low-latency, high-bandwidth use cases, carriers need to collaborate, Anschutz said.
Infrastructure for edge compute needs to be developed by the carrier community: Operators would be better off collaborating to speed up time to market for technology that won't differentiate individual carriers, Anschutz said.
As long as components are standardized, operators can feel free to put together their own individualized solutions, Saulnier said. "Every operator will put its recipe together differently, but if we're using the same recipe book and ingredients together, that's helpful," he said.
Smaller operators will demand cookie-cutter solutions, which will provide big opportunities for open source and systems integrators, Saulnier said.
- Here Are AT&T's 5 Most Congested Markets
- AT&T Hints at Speed-Based Pricing for 5G
- Bell Canada Makes 5G Progress
— Mitch Wagner Executive Editor, Light Reading