Microsoft's 'Project Brainwave' Details Ambitious AI Plans
Microsoft is developing new ways to deliver artificial intelligence and deep learning technologies to developers as a service through its Azure public cloud.
At the Hot Chips 2017 conference this week, Microsoft offered a detailed look at "Project Brainwave," the company's deep-learning acceleration platform for delivering what it calls "real-time AI."
Project Brainwave is made up of three parts, according to a company blog post:
- A high-performance, distributed system architecture
- A deep-neural-network (DNN) engine synthesized onto specialized field-programmable gate array (FPGA) chips
- A compiler and runtime for what Microsoft calls "low-friction deployment of trained models"
What does it all mean? Essentially, Microsoft wants to use its own massive infrastructure to host the FPGAs and deliver the platform as a microservice to developers and others who want to build applications with an AI or deep-learning layer, such as natural language processing.

(Source: Microsoft Research)
As Doug Burger, a distinguished engineer at Microsoft, writes in an August 22 blog post:
Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years. By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.
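Microsoft has not published a client API for Brainwave, but the "hardware microservice" model Burger describes can be illustrated with a mock. In the sketch below, `serve_dnn` is a purely hypothetical stand-in for a pool of FPGA-backed model servers: the caller sends input tensors over the network and receives the model's outputs, with no CPU-side inference code in the loop.

```python
import json

def serve_dnn(request: str) -> str:
    """Stand-in for a remote, FPGA-backed DNN microservice (hypothetical).

    A real deployment would answer over the datacenter network; here we
    fake the inference step (summing each example's features) so the
    request/response shape of the architecture is visible.
    """
    payload = json.loads(request)
    outputs = [sum(example) for example in payload["inputs"]]
    return json.dumps({"model": payload["model"], "outputs": outputs})

# A server anywhere in the datacenter calls the model like any service:
request = json.dumps({"model": "sentiment-v1", "inputs": [[1, 2], [3, 4]]})
response = json.loads(serve_dnn(request))
print(response["outputs"])  # [3, 7]
```

The point of the architecture is that the "service" boundary sits at the network, not at a local software stack, which is where the latency and throughput gains Burger cites come from.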
Ultimately, Microsoft plans to deliver this deep-learning platform through its Azure public cloud, making it a service that developers can tap into as needed. The company is also providing support for different deep-learning frameworks, including Microsoft's own Cognitive Toolkit and Google's TensorFlow.
At the Hot Chips show, researchers showed Brainwave running on an Intel 14 nm Stratix 10 FPGA, which delivered 39.5 teraflops of performance. (A teraflop is one trillion floating-point operations per second.)
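To put that peak figure in perspective, a quick back-of-the-envelope calculation shows why Microsoft frames this as "real-time AI." The 5-GFLOP per-inference cost below is an illustrative assumption, not a Brainwave benchmark:

```python
PEAK_FLOPS = 39.5e12      # 39.5 teraflops, as demonstrated at Hot Chips
OPS_PER_INFERENCE = 5e9   # assumed cost of one DNN forward pass (illustrative)

seconds_per_inference = OPS_PER_INFERENCE / PEAK_FLOPS
print(f"{seconds_per_inference * 1e3:.3f} ms")  # 0.127 ms
```

Even allowing for far-from-peak utilization in practice, the budget per request sits well under a millisecond, which is the sense in which the platform targets interactive, latency-sensitive workloads.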
In his blog post, Burger noted that Microsoft plans to improve performance "over the next few quarters," but he did not indicate when the service would be available to Azure users. At ZDNet, Mary Jo Foley noted that Redmond first discussed Brainwave in 2016, and that the service could reach developers by next year.
Over the past several months, Microsoft has released a steady stream of AI news, and the company has revamped part of its sales team to focus on AI, machine learning and deep learning technologies. (See Microsoft Reorg Targets Cloud & AI Sales.)
In July, the company announced it is developing its own AI chip to incorporate the technology into its next-generation HoloLens. (See Microsoft Designing Its Own AI Chip.)
In addition, Microsoft recently opened its own AI research lab with about 100 researchers working on different projects. (See Microsoft Establishes New AI Research Lab.)
Related posts:
- Microsoft, Red Hat Expand Partnership to Include Containers
- Microsoft Introduces 'Event Grid' to Automate Azure Serverless Computing
- Microsoft Buying Cloud Orchestration Expert Cycle Computing
— Scott Ferguson, Editor, Enterprise Cloud News. Follow him on Twitter @sferguson_LR.
