Two men from different segments of the edge computing realm have spearheaded a new effort to bring clarity to that often-confusing world through an open source project that began with a glossary of terms and an edge computing "landscape map."
Matt Trifiro, chief marketing officer of Vapor IO, a developer of edge computing infrastructure, and Jacob Smith, senior vice president of engagement at Packet, which makes bare metal servers and other white box hardware, co-chaired the group that produced a report, State of the Edge 2018. The report explains their approach in depth and lays out a list of partner companies. The glossary was created, with feedback from many players, as a sort of style guide for the report, Trifiro says.
"Our goal in doing it was to see if we could level-set the conversation and help the ecosystem accelerate because of that," Smith says in an interview. "We're all interested in accelerating the ecosystem but, as it happens when things get very compelling and popular and capture the imagination, there was a lot of stuff that wasn't particularly productive. We wanted to see if we can create a vendor-neutral State of the Edge report that would become somewhat of a gathering spot for people who wanted to contribute to it."
Vapor IO and Packet contributed the work behind the Open Glossary of Edge Computing to the Linux Foundation, where it will become part of a new open source project that was announced in late June. In addition, the effort that Trifiro and Smith are leading took a first stab at what they call the Edge Computing Landscape.
All of this is intended to be a multivendor, multi-player effort that will accelerate edge computing across the ecosystem by finding points of agreement that today might not be so clearly spelled out, say Trifiro and Smith.
One of the challenges they see is that, while the telecom industry broadly agrees on the urgent need for edge computing to support new latency-sensitive applications, the many different players in the equation are coming at the solutions from different angles. Those include data center operators and tower companies, who approach the edge from the real estate side, as well as telecom operators and the vendor community.
A number of companies, particularly startups, realize it is "a very complicated problem" to work out how to support distributed applications with hundreds of microservices that must be carefully placed to meet low-latency service levels, Trifiro says, especially when distributing them to the edge constrains the resources available.
"So the question is, how do we make it easy, how do we build a platform for companies and developers to be able to say, 'Okay, here's a workload that I need to run near this location, with this level of latency, and even at this level of cost,'" he comments. One goal is to reach the point where developers can lay out the needs of a specific workload to an orchestrator, which can then determine where it's best for that workload to run, based on latency, cost and other factors.
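The placement logic Trifiro describes can be illustrated with a minimal sketch: given a set of candidate edge locations, pick the cheapest one that still meets a workload's latency requirement. The site names, fields and numbers below are hypothetical illustrations, not part of any real orchestrator's API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EdgeSite:
    name: str
    latency_ms: float     # expected latency from this site to the target users
    cost_per_hour: float  # price of running the workload at this site

def place_workload(sites: list[EdgeSite], max_latency_ms: float,
                   max_cost_per_hour: float) -> Optional[EdgeSite]:
    """Return the cheapest site that satisfies the latency and cost bounds."""
    eligible = [s for s in sites
                if s.latency_ms <= max_latency_ms
                and s.cost_per_hour <= max_cost_per_hour]
    if not eligible:
        return None  # no site can meet this workload's requirements
    return min(eligible, key=lambda s: s.cost_per_hour)

# Hypothetical candidate locations, from centralized cloud out to a tower site.
sites = [
    EdgeSite("central-cloud", latency_ms=45.0, cost_per_hour=0.10),
    EdgeSite("metro-edge", latency_ms=8.0, cost_per_hour=0.25),
    EdgeSite("tower-edge", latency_ms=2.0, cost_per_hour=0.60),
]

# A latency-sensitive workload that must run within 10 ms of its users:
# the central cloud is too far away, so the metro edge wins on cost.
choice = place_workload(sites, max_latency_ms=10.0, max_cost_per_hour=1.00)
print(choice.name)
```

A real orchestrator would weigh many more factors (data gravity, regulatory constraints, available accelerators), but the core idea is the same: the developer states the requirements, and the platform chooses the location.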
The State of the Edge Report puts forth four basic principles: first, that the edge is a location, not a thing; second, that while there are lots of edges, the edge of the last-mile network is the most important one today; third, that this network edge has two sides, an infrastructure edge on the operator side and a device edge on the user side; and finally, that compute will exist on both sides and work in coordination with the centralized cloud.
Some of its conclusions are common sense, such as the observation that device edge resources are often constrained by power and connectivity, while the infrastructure edge is potentially dynamically scalable, much as the centralized cloud is today but on a smaller scale. In similar fashion, the authors conclude that there will likely be multi-tiered, hybrid hierarchies of edge and centralized cloud resources.
It's also not a shock to see that Trifiro and Smith see cloud-native technologies as a primary enabler for a robust edge computing ecosystem. They also point to accelerators, such as Smart NICs (network interface cards) and GPUs (graphics processing units), as key to efficiency at the edge as part of a distributed architecture where power constraints are likely.
"That's especially accentuated as you see the kinds of workloads that are attracted to the edge because they're inherently large, inherently performance-oriented, and inherently expensive to put everywhere," Smith says. "So how do you lower the cost, improve the performance and take advantage of a place where there's continuing fast pace of innovation. And that tends to be in specialty hardware."
Since it's impossible to put racks of servers at a 5G cell tower to handle the enormous amounts of data that need to be "packed, billed, routed, processed and done basically in real time," he adds, the answer is going to be a specialized FPGA, NIC or GPU to handle that throughput at a reasonable cost.
The report also digs into the variety of potential business models. Its authors include: Jim Davis, principal analyst, Edge Research Group; Philbert Shih, managing director, Structure Research; and Alex Marcham of Network Architecture 2020. The Cloud Native Computing Foundation is also a contributor, and the sponsors are ARM Ltd., Ericsson AB (Nasdaq: ERIC) UDN and Rafay Systems, in addition to Vapor IO and Packet.
The report is readily available and the glossary is now part of the Linux Foundation open source project. Trifiro is putting together a technical steering committee for the project that he says is going to be "super diverse."
"We're going to try to make this a standard that is adopted across many, many organizations and projects," he adds. "I don't know if anybody's done an open sourced, collaboratively built glossary before. So it's a neat experiment but so far it seems to be getting a lot of traction."
— Carol Wilson, Editor-at-Large, Light Reading