Startup Zededa Targets Cloud-Native Edge
LOS ANGELES -- Open Networking Summit -- Zededa, a startup that emerged from stealth mode in late February, is setting its sights on what it calls the "cloud-native edge," trying to create autonomous cloud platforms at the edge of the network that operate independently of the network or cloud.
Targeting enterprise and industrial applications primarily, Zededa may also explore telecom partnerships, says CEO and Co-Founder Said Ouissal, who was here this week interacting with the open source community. The current Zededa board includes veterans of previous open source projects, including Apache and Hadoop, he says.
After announcing $3.06 million in seed-stage funding from investors -- including Wild West Capital and Almaz Capital -- Zededa is continuing to develop its prototype platform, which is about to go into trials, Ouissal says in an interview.
"The edge cloud is going to be ubiquitous, there is going to be computing everywhere," he says. "Whether it's a $6 single-board computer in a drone or attached to a sensor, or whether it's edge gateways or IoT gateways, or servers or data-center class products, we are going to be running real-time applications. That's nothing new -- we do it all the time now with IoT, but today all that data is sent to the cloud to be processed."
Latency and other issues will require more immediate processing of some data much closer to where it's generated, and that sets up platform challenges.
"We have industrial customers that have machine tools which generate 50 megabits per second of raw data," Ouissal notes, "and that has to be processed in the machine tool" to provide the immediate feedback necessary to maintain quality assurance on the parts being built. "That's why we will need a lot more autonomous systems at the edge that can operate even if they are not connected to the network or the cloud."
The edge compute platforms will need some of the same virtualization capabilities used elsewhere, he adds, pointing to the Linux Foundation's recent announcement of the ACRN project, a flexible, lightweight reference hypervisor based on code contributions from Intel. The embedded hypervisor is aimed at the diversity of IoT workloads.
Embedded virtualization will be key to supporting two different kinds of operating systems at the edge, Ouissal adds: a real-time operating system and a non-real-time operating system. The former comes with strict scheduling requirements that can't be put on hold, supporting applications like robot arms or connected cars, while the latter is for data analytics.
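The distinction Ouissal draws can be sketched as a toy priority scheme, in which real-time work is never held behind best-effort analytics work regardless of arrival order. This is only an illustrative simulation; all task names and priority labels here are invented, not part of any Zededa or ACRN API.

```python
import heapq

RT = 0           # real-time priority class (lower number runs first)
BEST_EFFORT = 1  # non-real-time class, e.g. data analytics

def run_all(tasks):
    """Pop tasks in strict priority order. Real-time tasks always run
    before best-effort ones; a sequence number breaks ties FIFO-style."""
    queue = []
    for seq, (prio, name) in enumerate(tasks):
        heapq.heappush(queue, (prio, seq, name))
    order = []
    while queue:
        _, _, name = heapq.heappop(queue)
        order.append(name)
    return order

# Invented example workload mixing control loops with analytics:
tasks = [
    (BEST_EFFORT, "vibration-analytics"),
    (RT, "spindle-control"),       # machine-tool-style control loop
    (BEST_EFFORT, "log-upload"),
    (RT, "safety-interlock"),
]
print(run_all(tasks))
# -> ['spindle-control', 'safety-interlock',
#     'vibration-analytics', 'log-upload']
```

A real RTOS enforces this preemptively in the kernel scheduler (and an embedded hypervisor partitions the two OS types onto separate virtual machines); the sketch only shows the ordering guarantee, not preemption.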
"We are already seeing embedded hypervisors in the automotive world," he notes, where the need to separate the systems that control critical functions from the ones that support infotainment has already been an issue. Fiat Chrysler had to recall 1.4 million cars after hackers used its Uconnect infotainment system to remotely take over control of the cars themselves, requiring software upgrades to separate the two.
Security is also a different kettle of fish at the distant edge because it's not just a matter of cybersecurity but also physical security, Ouissal notes. That's because some of this edge computing will be deployed at places like light posts, where it can be easily accessed.
"The mobile phone industry had to do this and it has a lot of security features that we can build on," to prevent hacking of these distributed computing platforms in the same way that smartphones that are secured can't be easily hacked, he says.
From his perspective, telecom companies are still evaluating how partnerships in this space might develop, which leads Ouissal to conclude that the edge compute world will either evolve to be natively integrated or become an over-the-top play.
"We know it's not going to be only us, and it's our plan to build relationships," he says. It just gets more interesting with 5G and the rise of edge-to-edge communications, he adds.
— Carol Wilson, Editor-at-Large, Light Reading