Multiple open source initiatives are looking at what it takes to put clouds at the network edge.
March 28, 2018
LOS ANGELES -- Open Networking Summit -- The important role that open source will play in distributing compute power to the edge is coming into clearer focus here this week, with multiple initiatives and some significant contributions from major industry players.
The Open Networking Foundation kicked things off with its announcement of a strategic shift that will put major operators in charge of developing reference designs for edge SDN platforms, with the intent of moving open source technologies forward faster on that front. The Linux Foundation on Tuesday announced broader support for its Akraino Edge Stack open source community, including 13 new members and a major open source contribution from one of them, Intel Corp. (Nasdaq: INTC). (See ONF Operators Take Charge of Edge SDN.)
In the Intel keynote Tuesday afternoon, Melissa Evers-Hood, senior director of cloud and edge software for Intel's Open Source Technology Center, explained Intel's decision to open source its Wind River Titanium Cloud portfolio of technologies as well as its Network Edge Virtualization Software Development Kit. Wind River Titanium Cloud is Intel's OpenStack-based NFV infrastructure.
Figure 1: Intel's Melissa Evers-Hood delivering her Tuesday ONS keynote. (Source: Linux Foundation)
"We are doing this with the explicit intent of accelerating edge technology and innovation," Evers-Hood said. "Wind River Titanium Cloud is in production today with deployments of IoT and industrial edge deployments, all the way up to carrier-grade implementations of edge technology. This is intended to accelerate the ecosystem with regard to providing assets that are low latency, provide high availability, the ability to update in the field, etc."
Intel is also open sourcing its network edge virtualization SDK, which Evers-Hood described as "a set of libraries and APIs which will enable developers to develop applications for edge use cases without having to understand the complexities of various network protocols."
Intel is contributing these under the Akraino project because "we feel there needs to be one edge stack," she commented. "There are lots of projects dabbling with trying to create an edge stack that is hardened and reliable and ready for production. We are announcing with this host of amazing partners that we are standing up and joining Akraino to make this project THE project for the edge."
The other new members of Akraino are Altiostar, China Electronics Standardization Institute (CESI), China Mobile, China Telecom, China Unicom, Docker, Huawei, iFlyTek, New H3C Group, Tencent, ZTE and 99Cloud.
Underlying the edge computing push is an evolving form of OpenStack, and the OpenStack Foundation itself is also focusing on edge-specific deployments, says Jonathan Bryce, executive director. Its OSF Edge Computing Group recently released a white paper, Cloud Edge Computing: Beyond the Data Center, which looks at the specific requirements of edge computing and how they differ from those of a traditional data center deployment of OpenStack.
"It's not the edge -- it is important to realize that there is no one 'edge,'" Bryce comments in an interview here. "There are lots of edges - places where it makes sense to have computing infrastructure. We are not going to get into the IoT operating systems but we will be engaged with what those devices are connecting to."
Edge environments lack the pristine nature of a data center, and that creates constraints on the reliability and security that are required, he notes. "You need to have certain isolation, you need to have recovery modes that don't require humans and other things that get into the operations. You also have constraints about the capacity and the actual resource availability there."
To meet those new requirements, AT&T Inc. (NYSE: T), SK Telecom (Nasdaq: SKM) and SAP AG (NYSE/Frankfurt: SAP) have been working on OpenStack Helm, a deployment project for OpenStack services oriented specifically around those constraints. It uses the Helm packaging system, which is a way to describe a complex Kubernetes application and control the behavior and interaction of its containers, Bryce explains. "It's sort of like an orchestration template, specific to Kubernetes."
That enables a small-footprint OpenStack deployment, with Kubernetes controlling the restarting and upgrading in an automated way, to deliver the "zero-touch, hands-off, no human required operations model," he explains. "Containerizing it shrinks the footprint and Kubernetes is built to run complex applications and you leverage its strengths to run the OpenStack services."
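In concrete terms, that means each OpenStack control-plane service becomes a Helm release running on Kubernetes. The sketch below is purely illustrative -- the chart and release names are assumptions modeled on the charts published by the OpenStack Helm project, and it simply scripts the standard helm command-line tool from Python; it is not a recipe drawn from the article.

```python
import subprocess

# Illustrative sketch only: stand up two OpenStack services as Helm releases on
# an edge Kubernetes cluster. Chart paths and release names are assumptions based
# on the openstack-helm charts; the --name flag follows Helm 2 syntax (2018 era).
SERVICES = [
    ("./mariadb", "mariadb"),      # shared database backing the control plane
    ("./keystone", "keystone"),    # OpenStack identity service
]

for chart, release in SERVICES:
    subprocess.run(
        ["helm", "install", chart,
         "--name", release,
         "--namespace", "openstack"],
        check=True,  # fail fast if a release does not install cleanly
    )
```

Once the services are installed this way, Kubernetes rather than a human operator handles restarting failed containers and rolling out upgrades, which is the "zero-touch" operations model Bryce describes.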
The project just had its first full release as an OpenStack project in February and AT&T has been trialing it, Bryce says, with those results being funneled into the edge computing working group.
Ultimately, network operators are going to need to manage many different edge deployments and handle that complexity, he says.
"That's the other issue with edge, you don't have one big environment, you have lots and lots of environments," Bryce says. "So automation and zero-touch management becomes really critical. You are talking about hundreds or thousands of small clouds and how do you manage security and configuration and capacity across that size of environment. There are tools that do pieces of it but there are still gaps in that so we are starting up some projects to fill in some of those gaps. That is going through this working group."
— Carol Wilson, Editor-at-Large, Light Reading