Google has licensed Movidius's processors and software development technology, intending to combine them with its own neural network technology to imbue mobile devices with artificial intelligence (AI) so that they can "see" and respond to their physical environments.
Movidius CEO Remi El-Ouazzane told Light Reading the deal enables neural networking to be brought to the edge of the network.
The licensing agreement follows a longstanding collaboration between Google (Nasdaq: GOOG) and Movidius.
Movidius has a rare combination of expertise spanning algorithms, semiconductor processing and machine vision in the visible-light spectrum. Drawing on these capabilities, the company has developed a line of chip-based vision systems that by necessity draw only tens of milliwatts yet are capable of extraordinarily rapid processing; the company claims to sustain thousands of GFLOPs at these very low power levels.
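Taking those quoted figures at face value, a quick back-of-envelope calculation shows what the claim implies in efficiency terms. The specific numbers below are illustrative assumptions drawn from the phrasing above, not confirmed Myriad 2 specifications:

# Back-of-envelope efficiency from the figures quoted in this article.
# Illustrative assumptions only, not confirmed Myriad 2 specs.
perf_gflops = 1000.0   # "thousands of GFLOPs" -> take 1,000 as a floor
power_watts = 0.05     # "tens of milliwatts" -> take 50 mW as a midpoint
print(f"{perf_gflops / power_watts:,.0f} GFLOPs per watt")  # ~20,000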
Google incorporated Movidius vision systems in its Tango project (an outgrowth of a neural network project that started at Motorola, and which Google kept). An initial aim of the project was to embed visual processing into mobile devices. Demo projects include robots that use sight to run mazes. This contrasts with Google's self-driving vehicles, for example, which rely mostly on radar and lidar to detect their surroundings.
Google is interested in endowing mobile devices with the ability to learn. The hurdle is that edge devices lack the power budget to sustain the extraordinary computation that machine learning requires.
That's where Movidius's processors come in. Google will use Movidius's latest chip, the MA2450, a new iteration of the company's Myriad 2 family of vision processors, to run machine intelligence locally on devices.
Local computation keeps data on the device, allows products to function properly without an Internet connection and reduces latency. This means future products will be able to understand images and audio with incredible speed and accuracy, offering a more personal and contextualized computing experience, the two companies explained.
"All necessary processing of a trained network can be done on device," El-Ouzzane said. "Beforehand though, it is necessary for offline training to occur to correctly weight and optimize all the layers of the network. This may occur with a mix of both supervised and unsupervised training, and any number of techniques such as back propagation in order to get a network performing as desired (accuracy, optimal size). This is still done on GPU farms before deploying the end applications to users. To be clear though: what we are enabling is full capabilities of image classification/recognition/visual intelligence on the device. " The italics are El-Ouzzane's.
Devices with trained networks on board will be able to respond to their environments without a connection. With a connection, however, the capabilities of the device can be enhanced by offloading processing to cloud-based computational resources.
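As a sketch of that hybrid behavior, the pattern below always runs the on-device model and only consults a larger cloud model when the network is reachable. The endpoint URL and response format are hypothetical placeholders, not a real Google or Movidius API:

# Hybrid local/cloud inference pattern described above. The cloud
# endpoint and its JSON response shape are hypothetical placeholders.
import json
import urllib.request

CLOUD_URL = "https://example.com/classify"  # placeholder, not a real API

def classify(image_bytes, local_model):
    # 1. Local inference always works, connection or not.
    local_label, local_score = local_model(image_bytes)

    # 2. If the network is reachable, ask the larger cloud model too.
    try:
        req = urllib.request.Request(
            CLOUD_URL, data=image_bytes,
            headers={"Content-Type": "application/octet-stream"})
        with urllib.request.urlopen(req, timeout=2) as resp:
            cloud = json.load(resp)
        # Prefer the cloud answer only when it is more confident.
        if cloud["score"] > local_score:
            return cloud["label"], cloud["score"]
    except OSError:
        pass  # offline or timed out: fall back silently to local result

    return local_label, local_score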
We are "laying the groundwork for the next decade of how technology will enhance the way people interact with the world," said Blaise Agüera y Arcas, head of Google's machine intelligence group. "By working with Movidius, we're able to expand this technology beyond the data center and out into the real world, giving people the benefits of machine intelligence on their personal devices."
Google will contribute to Movidius's neural network technology roadmap.
Agüera y Arcas and El-Ouazzane explain more about their joint project in a video accompanying this article.