
Google on the Verge of Mobile AI Devices

Brian Santo
1/28/2016

Google has licensed Movidius's processors and software development technology, with the intention of using those processors in combination with its own neural network technology to imbue mobile devices with artificial intelligence (AI), so that they can "see" and respond to their physical environments.

Movidius CEO Remi El-Ouazzane told Light Reading the deal enables neural networking to be brought to the edge of the network.

The licensing agreement follows a longstanding collaboration between Google (Nasdaq: GOOG) and Movidius.

Movidius has a rare combination of expertise spanning algorithms, semiconductor design and machine vision in the visible light spectrum. Relying on these capabilities, the company has developed a line of chip-based vision systems that, by necessity, draw only tens of milliwatts yet are capable of extraordinarily rapid processing. The company claims it can sustain rates of thousands of GFLOPS at these very low power levels.

Google incorporated Movidius vision systems in its Tango project (an outgrowth of a neural network project that started at Motorola, and which Google kept). An initial aim of the project was to embed visual processing into mobile devices. Demo projects include robots that use sight to run mazes. This contrasts with Google's self-driving vehicles, for example, which rely mostly on radar and lidar to detect their surroundings.

Google is interested in endowing mobile devices with the ability to learn. The hurdle to putting machine learning into mobile devices is that edge devices lack the power budget to sustain the extraordinary amount of computation required.

That's where Movidius's processors come in. Google will use Movidius's latest chip, the MA2450, a new iteration of the company's Myriad 2 family of vision processors, to run machine intelligence locally on devices.

Local computation allows data to stay on the device, lets the device function properly without an Internet connection and reduces latency. This means future products can understand images and audio with incredible speed and accuracy, offering a more personal and contextualized computing experience, the two companies explained.


Want to know more about the Internet of things? Check out our dedicated IoT channel here on Light Reading.


"All necessary processing of a trained network can be done on device," El-Ouazzane said. "Beforehand though, it is necessary for offline training to occur to correctly weight and optimize all the layers of the network. This may occur with a mix of both supervised and unsupervised training, and any number of techniques such as back propagation in order to get a network performing as desired (accuracy, optimal size). This is still done on GPU farms before deploying the end applications to users. To be clear though: what we are enabling is full capabilities of image classification/recognition/visual intelligence on the device." The italics are El-Ouazzane's.
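El-Ouazzane's split — heavy training offline, then a forward-pass-only network shipped to the device — can be sketched in a few lines. This is an illustrative toy example, not Movidius or Google code: a tiny logistic-regression "network" is trained with gradient descent (standing in for backpropagation on a GPU farm), and only the frozen weights and a forward pass are "deployed" for on-device classification.

```python
import numpy as np

# --- "Offline" phase: training on the server (stand-in for the GPU farm) ---
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # toy, linearly separable labels

w, b = np.zeros(2), 0.0
for _ in range(500):                          # gradient descent, the simplest
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # relative of backpropagation
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

# --- "On-device" phase: only the frozen weights and a forward pass ship ---
def classify_on_device(x, w=w, b=b):
    """Inference only: no gradients, no training data, no connection."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b))) > 0.5

acc = np.mean(classify_on_device(X) == y)
print(f"on-device accuracy: {acc:.2f}")
```

The point of the split is that the expensive part (the training loop) never runs on the device; the forward pass alone is cheap enough for a milliwatt-class vision processor.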

Devices with trained networks on board will be able to respond to their environments without a connection. With a connection, however, the capabilities of the device can be enhanced by offloading processing to cloud-based computational resources.
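The local-first, cloud-enhanced behavior described above amounts to a simple dispatch rule. A hypothetical sketch (the function and parameter names are invented for illustration, not an actual API):

```python
def recognize(image, connected, local_model, cloud_client=None):
    """Run the trained network locally; enhance with cloud compute if available.

    `local_model` and `cloud_client` are hypothetical stand-ins for an
    on-device network and a remote inference service.
    """
    result = local_model(image)           # always available, even offline
    if connected and cloud_client is not None:
        result = cloud_client(image)      # richer model, at the cost of latency
    return result

# Usage: with no connection, the device falls back to its on-board network.
label = recognize("photo.jpg", connected=False, local_model=lambda img: "cat")
print(label)
```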

We are "laying the groundwork for the next decade of how technology will enhance the way people interact with the world," said Blaise Agüera y Arcas, head of Google's machine intelligence group. "By working with Movidius, we're able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices."

Google will contribute to Movidius's neural network technology roadmap.

Agüera y Arcas and El-Ouazzane explain more about their joint project in a video accompanying this article.

— Brian Santo, Senior Editor, Components, T&M, Light Reading

DHagar
1/28/2016 | 10:30:57 PM
Re: Google AI
MordyK, there you go - good one!
MordyK
1/28/2016 | 10:22:23 PM
Re: Google AI
Trump for President :)
DHagar
1/28/2016 | 10:14:16 PM
Re: Google AI
MordyK, great thoughts!  Very true, the human being is a marvel - we still haven't tapped into the capacities of the brain.  Excellent perspective that all models we are admiring are mirrors of the human creation.  We will make progress as long as we don't forget that!

And yes, some received fewer brain cells - or at least don't use the ones they were given!
MordyK
1/28/2016 | 9:47:21 PM
Re: Google AI
They can definitely surpass us, but I find that the best R&D creates and replicates human capabilities and then takes them to the next level. Every time I see new research that replicates a human capability, I marvel at the human being's raw capabilities.

Effectively, depending on your beliefs, either evolution or God created the ultimate sensor device in us humans. Although he may have removed some of the intelligence :)
DHagar
1/28/2016 | 9:38:57 PM
Re: Google AI
MordyK, it really does open the door to changes we can't even imagine at this time.  The capabilities are no longer limited by our human limits but are unlimited by the technological abilities.  I don't think we have scratched the surface. 

We will be learning for years!
MordyK
1/28/2016 | 8:38:25 PM
Re: Google AI
This capability is very exciting and the possibilities endless. It effectively allows you to train it to replicate your eyes and your brain, which allows you to understand what things are. Food, nutrition, freshness, etc. are but a few examples of what this tech can achieve. The only real thing that needs to be added, for a device fully capable of human-level comprehension of its surroundings, is to replicate the rest of the senses. This includes sound and feel, and these things are being worked on in research labs.
inkstainedwretch
1/28/2016 | 2:44:21 PM
Google AI
Well one of the key elements of this deal with Movidius is the latter's development kit. It would appear that Google wants to have other developers start figuring out potential capabilities and apps. There's more about it on Google's Project Tango page, including information about the devkit. -- Brian Santo

 
DanJones
1/28/2016 | 1:49:47 PM
Re: scary
AI, as Google sees it, is all machine learning. The company first commercially deployed it in the Google Photos update, which "recognizes" faces etc. and organizes them.
inkstainedwretch
1/28/2016 | 1:12:17 PM
Mobile AI
The people involved seem to be thinking about putting this capability on mobile phones first, so that provides a general boundary for what you can expect the first applications to be. But specifics? I hope to follow up with Google to find out more. The company flatly refused to talk to me for this article. -- Brian Santo
Mitch Wagner
1/28/2016 | 12:47:49 PM
Re: scary
I expect the earliest applications will be security, facial recognition and looking for objects that might be weapons. 

Also, facial recognition and object recognition used for marketing. "Say, Brian," says the neural network, "I notice you're drinking a Starbucks coffee. Wouldn't you be happier if you just throw that out and buy a Peet's instead?"

 