Service Provider Cloud

Microsoft's 'Project Brainwave' Details Ambitious AI Plans

Microsoft is developing new ways to deliver artificial intelligence and deep learning technologies to developers as a service through its Azure public cloud.

At the Hot Chips 2017 conference this week, Microsoft offered a detailed look at "Project Brainwave," the company's deep-learning acceleration platform for delivering what it calls "real-time AI."

Project Brainwave is made up of three parts, according to a company blog post:

  • A high-performance, distributed system architecture
  • A deep neural network (DNN) engine that is synthesized onto specialized field-programmable gate array (FPGA) chips
  • A compiler and runtime for what Microsoft calls "low-friction deployment of trained models."

What does it all mean? Essentially, Microsoft wants to use its own massive infrastructure to support the FPGAs and deliver the platform as a microservice to developers and others who want to create different applications that have a layer of AI or deep learning such as natural language processing.
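The article describes a DNN being mapped to a pool of remote FPGAs that servers can call directly. A minimal sketch of that pooling idea, in Python, with all names hypothetical (Microsoft has not published a client API for Brainwave); a real deployment would stream requests to the FPGAs over the datacenter network rather than route them in software:

```python
from itertools import cycle

class DnnMicroservice:
    """Hypothetical model of a DNN served by a pool of remote accelerators."""

    def __init__(self, model_name, fpga_pool):
        self.model_name = model_name
        self._pool = cycle(fpga_pool)  # rotate requests across the pool

    def infer(self, request):
        fpga = next(self._pool)
        # Placeholder: a real system would stream `request` to `fpga` and
        # return the model's output; here we just report the routing decision.
        return {"model": self.model_name, "served_by": fpga}

service = DnnMicroservice("nlp-model", ["fpga-01", "fpga-02", "fpga-03"])
print(service.infer({"text": "hello"})["served_by"])  # fpga-01
print(service.infer({"text": "world"})["served_by"])  # fpga-02
```

The round-robin dispatch here stands in for whatever scheduling the real system uses; the point is only that the model, not a specific server, is the unit of service.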

A look at how Brainwave works
(Source: Microsoft Research)

As Doug Burger, a distinguished engineer at Microsoft, writes in an August 22 blog post:

Project Brainwave leverages the massive FPGA infrastructure that Microsoft has been deploying over the past few years. By attaching high-performance FPGAs directly to our datacenter network, we can serve DNNs as hardware microservices, where a DNN can be mapped to a pool of remote FPGAs and called by a server with no software in the loop. This system architecture both reduces latency, since the CPU does not need to process incoming requests, and allows very high throughput, with the FPGA processing requests as fast as the network can stream them.

Ultimately, Microsoft plans to deliver this deep-learning platform through its Azure public cloud, making it a service that developers can tap into as needed. The company is also providing support for different deep-learning frameworks, including Microsoft's own Cognitive Toolkit and Google's TensorFlow.

At the Hot Chips show, researchers showed Brainwave running on an Intel 14 nm Stratix 10 FPGA, which delivered 39.5 teraflops of performance. (A teraflop is one trillion floating-point operations per second.)
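A quick back-of-envelope calculation shows what 39.5 teraflops means for latency. The per-inference model cost below is an illustrative assumption (roughly the order of magnitude of a large vision model's forward pass), not a figure from the article:

```python
# What 39.5 teraflops implies for a single DNN inference, at peak.
PEAK_TFLOPS = 39.5                # reported Stratix 10 figure
peak_flops = PEAK_TFLOPS * 1e12   # 1 teraflop = 1e12 floating-point ops/sec

# Hypothetical model cost: assume one forward pass needs 8 GFLOPs.
model_flops = 8e9

latency_s = model_flops / peak_flops
print(f"{latency_s * 1e6:.1f} microseconds per inference at peak")
```

At peak throughput, such a model would take roughly 200 microseconds per inference, which is the scale behind the "real-time AI" framing.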


In his blog post, Burger noted that Microsoft plans to improve the performance "over the next few quarters," but he did not indicate when the service would be available to Azure users. At ZDNet, Mary Jo Foley noted that Redmond spoke publicly about Brainwave as early as 2016, and that it could reach developers by next year.

Over the past several months, Microsoft has released a steady stream of AI news, and the company has revamped part of its sales team to focus on AI, machine learning and deep-learning technologies. (See Microsoft Reorg Targets Cloud & AI Sales.)

In July, the company announced it is developing its own AI chip to incorporate the technology into its next-generation HoloLens. (See Microsoft Designing Its Own AI Chip.)

In addition, Microsoft recently opened its own AI research lab with about 100 researchers working on different projects. (See Microsoft Establishes New AI Research Lab.)


— Scott Ferguson, Editor, Enterprise Cloud News. Follow him on Twitter @sferguson_LR.

[email protected] 8/28/2017 | 2:52:23 PM
Re: AI Joe, that can be expected with any hot new technology. Remember how hard it was twenty years ago to find Java programmers? The same is true of security professionals. As the market evolves, we will see more and more people opting to reboot their careers toward where the money is, and new people getting trained. I think as AI matures we will see more talent mature with it.
[email protected] 8/28/2017 | 2:48:54 PM
Re: AI @Ariella I think it will eventually get standardized but in the initial stages, developers will need to experiment to get it right. The implications are significant if they don't standardize long term because the integration will not happen and it will make AI disjointed and clunky. I fully expect there to be some standardization so that apps will play nicely together and create greater functionality for the user.
Joe Stanganelli 8/23/2017 | 9:16:07 PM
Re: AI @maryam: At the same time, many of them are hindered. A vast plurality (if not downright majority) of AI talent was gobbled up by tech giants working on self-driving cars. Tech companies of all sizes (along with the public sector!) have been fighting over the remains.
Joe Stanganelli 8/23/2017 | 9:14:43 PM
MSFT AI We've known for quite some time, really, about Microsoft's huge AI ambitions. They just got taken more private after the very public Tay disaster. It's good to see some publicity, though, of what they're up to here -- even if they aren't going to be experimenting with social-media AI again anytime soon. ;)
Ariella 8/23/2017 | 4:02:03 PM
Re: AI @Maryam that makes me think of another question: will the future see some standardization in AI applications and will there be some uniformity that results from collaborative enterprises like ONAP?
[email protected] 8/23/2017 | 2:15:36 PM
Re: AI Ariella, I think that all the tech companies are worried about keeping pace with the AI flood planned by their competitors, so they are all looking similar. I don't think they will differentiate their AI offerings until the market gets a little more mature and we understand the regulations and implications of AI.
Ariella 8/23/2017 | 12:42:03 PM
AI It seems that each company uses slightly different terminology. What Microsoft describes here sounds very similar to what AT&T calls hyper-automation, one aspect of which is closed-loop automation.