Acumos Much More Than a Telecom-Focused AI Project

As part of the Linux Foundation's new Deep Learning Foundation, Acumos is casting a broader net and has greater ambitions as an AI platform.

September 10, 2018

The new Deep Learning Foundation (DLF) grew out of an open source project initially spearheaded by AT&T, which teamed with Tech Mahindra to launch Acumos, now one of three projects within the DLF. But this latest effort, unlike other open source projects AT&T has kicked off, is not primarily telecom-focused.

Some of the initial work of this open source effort within the Linux Foundation will focus specifically on other verticals such as oil and gas, green energy and health care. This should help unite fragmented efforts around artificial intelligence and lower the barriers to entry for those wanting to engage in AI, says Mazin Gilbert, vice president of Advanced Technology and Systems at AT&T Inc. (NYSE: T) and governing board chair of the Linux Foundation's umbrella organization devoted to AI, the LF Deep Learning Foundation.

"The reason we created this umbrella group, even beyond Acumos, was we wanted to create an open source collaborative community for AI that does not exist today," Gilbert says in an interview. "We wanted this collaborative open source community to start looking at AI broadly across verticals as opposed to each company looking at AI from their perspective for their own business only."

Acumos, the first project of the Deep Learning Foundation, has ties to other networking-oriented projects, such as ONAP, but it was deliberately not placed under the LF Networking umbrella, Gilbert explains. Two new incubation projects, the Angel Project and the EDL Project, were added to the DLF in late August.


Some of the basic AI technologies have existed for some time, but only with the rise of computing and networking as commodity items at scale has AI become a potential tool for a wide range of uses, he notes. But the potential of AI is still hampered by fragmentation of its development, Gilbert says, and that's a key issue the Deep Learning Foundation intends to address.

"AI is looked at as a great thing to do -- pretty much every company wants to get into AI, but most don't know where to start, mostly because they don't have the expertise," he comments. "On top of that, what do they do with AI? They don't know where to start, what problems they need to solve. What we want to do, as part of this foundation, is lower that barrier of entry and bring everyone into par because it has impact on every industry and every vertical."

Building a platform
The DLF is working to create an underlying platform that allows federation across multiple AI solutions, including existing AI tools, to "make AI models easily accessible, drive re-usability in AI," as other Linux Foundation projects have done, he says. It is also creating a distributed marketplace that will allow solutions and capabilities for different verticals to run on the platform, Gilbert adds.

"We are building ubiquitous foundation but to specialize foundation for different industries -- that is where the marketplace comes in," he says. "We are building solutions and capabilities for different verticals. AT&T and a bunch of other operators, we are interested in telecom sector, so how does the marketplace for AI fit into the LFN and into ONAP and into 5G, that is where we are coming in. But other companies who joined are interested in different aspects of AI, more to do with green energy, more to do with engineering and infrastructure and healthcare, so that the specialization happens at the solution level not at the foundation level."

At the foundational level, Acumos is trying to solve three basic problems, Gilbert says. The first is to enable federation across the set of AI tools already out there.

"AT&T has a tool we have put into open source called Rcloud, we have Google who has a tool called TensorFlow, Baidu has a tool called PaddlePaddle, there are tools like Café and on and on," he says. "There are probably like ten to 20 of these major tools. We use them all. Other companies are no different from us -- the first problem we wanted to address was, how do we make sure those tools connect and federate?


That is primarily being done by building adapters to the common tools, since not all tool creators are members of the group, nor do they want to adapt or change the tools themselves since they are already in use by customers, Gilbert says.
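
The mechanics of such an adapter layer can be sketched in a few lines of Python. This is purely illustrative, not Acumos code; the class and method names here are hypothetical. The idea is simply that each tool's model gets wrapped behind one shared interface, so the platform above never has to care which framework produced it:

    # Illustrative sketch only -- not Acumos's actual API. Each framework's
    # model is wrapped in an adapter exposing one common interface, so models
    # built with TensorFlow, scikit-learn, PaddlePaddle, etc. interoperate.
    from abc import ABC, abstractmethod
    from typing import Any, Sequence

    class ModelAdapter(ABC):
        """The one interface the platform sees, whatever the underlying tool."""
        @abstractmethod
        def predict(self, inputs: Sequence[Any]) -> Sequence[Any]: ...

    class TensorFlowAdapter(ModelAdapter):
        """Hypothetical wrapper around a tf.keras.Model."""
        def __init__(self, tf_model):
            self._model = tf_model
        def predict(self, inputs):
            return self._model.predict(inputs)  # delegate to the framework

    class SklearnAdapter(ModelAdapter):
        """Hypothetical wrapper around a scikit-learn estimator."""
        def __init__(self, estimator):
            self._estimator = estimator
        def predict(self, inputs):
            return self._estimator.predict(inputs)

Adding support for a new tool then means writing a new adapter, rather than changing the tool itself -- which matters, as Gilbert notes, because the tools are already in use by customers.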

AI app store, selling Lego
The second problem is platform reusability, he notes, in order to enable AI application development to scale.

"Our platform needs to have the ability to create the equivalent of an app store," he says. Today's machine learning and AI models are still developed today by experts, often highly skilled PhDs with a lot of experience who understand data. "Typically, that large army of people is used to build one application. Just think of how many engineers are needed to build any of the virtual assistant applications in the industry today and it's hundreds if not thousands."

What Acumos wants to do is enable the industry to go from tens of machine-learning deployments to thousands, and to do that, it will create a distributed marketplace that functions like an app store.

"So before I do anything, I go to the marketplace, and there is an open marketplace and a private one, in fact, each company can have several private ones," Gilbert says. "All of marketplaces are connected. I can look and see there is something already built that I can buy or might be free or something I could just take and just use on top, as opposed to building from scratch, which could take weeks and months with an army of experts."

That kind of marketplace would jumpstart AI usage, but Gilbert admits its creation is not easy: in addition to being a repository, the marketplace needs to be distributed and connected.

"There could be 1,000 marketplaces, and they all have to connect with private and public marketplaces, that is one challenge," Gilbert says. "Information on that marketplace has to be searchable, browsable, viewable and reviewed like you'd see in [the smartphone] app store. But most important, number three, that marketplace should allow you to design your machine-learning solution. The marketplace needs to have a bunch of things like Lego, built by different people, different tools, and the marketplace allows you to design your solution based on bringing those Lego bricks together."

Creating common language
The third problem the platform needs to address is bringing together two diverse communities: today's AI experts, with their deep understanding of data and how to build models, and the developer community, which builds applications, services and operational capabilities.

"You have a divide, there are the data scientists and the machine learners and all that stuff and then you have the other software industry moving towards CI-CD [continuous integration-continuous development] moving toward automation, and things like microservices, containers, Kubernetes, etc.," he says. "So we are trying to [create] that platform that brings those two worlds together so that what a data scientist builds, in minutes it needs to be containerized as a microservice that can run on a private or public cloud platform."

To date, Gilbert says, "no one has solved those problems and this is part of why -- it's not an AT&T challenge, it's a global challenge and that is why we opened this up to the industry."

There will be telco-specific aspects to the work that Acumos addresses, but those will be found in the marketplace, not the foundation, he adds. Like other LF projects, Acumos will have software releases every six months, starting with Athena, expected in November.

The Angel Project was contributed by LF DLF member Tencent Inc. and is "a high-performance distributed machine learning platform based on Parameter Server, running on YARN and Apache Spark," according to the announcement, targeting big data and complex, high-scale models. The EDL Project, which stands for Elastic Deep Learning, is aimed at creating a framework for cloud service providers to build cluster cloud services using deep learning frameworks such as PaddlePaddle and TensorFlow, and was contributed by Baidu Inc. (Nasdaq: BIDU).

This story is the second of two parts based on an interview with Mazin Gilbert. (To learn more about AT&T's specific plans in the AI arena, see AT&T's Gilbert: AI Critical to 5G Infrastructure.)

— Carol Wilson, Editor-at-Large, Light Reading

