Service Provider Cloud

IBM Cloud Taps Into Nvidia GPUs for AI & HPC

IBM is continuing to grow its partnership with Nvidia by offering additional GPU technology to customers to support artificial intelligence, high-performance computing and other high-throughput workloads.

Through its IBM Cloud, Big Blue is now offering customers access to Nvidia's Tesla V100 graphics processing units (GPUs), which the company introduced at this month's CES expo in Las Vegas and which deliver 125 teraflops (trillion floating-point operations per second) of deep learning performance.
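For a sense of scale, a back-of-the-envelope calculation (treating the quoted 125 teraflops as peak throughput, which real workloads won't fully reach) shows what that figure means for a large matrix multiply, the core operation in deep learning training:

```python
# Back-of-the-envelope: how long would one large matrix multiply take
# at the quoted 125 TFLOPS peak? Illustrative only; real throughput is
# lower due to memory bandwidth and kernel launch overheads.

PEAK_FLOPS = 125e12  # 125 teraflops = 125 trillion floating-point ops/sec

def matmul_flops(m: int, n: int, k: int) -> float:
    """FLOPs for an (m x k) @ (k x n) dense matrix multiply."""
    return 2.0 * m * n * k  # one multiply + one add per inner-product term

def ideal_seconds(m: int, n: int, k: int, peak: float = PEAK_FLOPS) -> float:
    """Theoretical best-case wall time at the given peak rate."""
    return matmul_flops(m, n, k) / peak

if __name__ == "__main__":
    # A 10,000 x 10,000 square matmul is 2e12 FLOPs: about 16 ms at peak.
    print(f"{ideal_seconds(10_000, 10_000, 10_000) * 1e3:.1f} ms")
```

In other words, a computation that is two trillion floating-point operations finishes in tens of milliseconds at this rate, which is why training jobs built from millions of such operations gravitate toward GPUs.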

In a blog post published Wednesday, John Considine, the general manager of IBM's Cloud Infrastructure Services, wrote that the company is adding the Tesla V100 GPUs to its Nvidia line-up, which already includes P100, K80 and M60 models.

(Source: IBM)

In addition to offering the Nvidia GPU technology through the cloud, Considine wrote that customers can also access bare-metal servers that support up to two Tesla V100 PCIe GPU accelerators.
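On a bare-metal host like the one described, one quick way to confirm which accelerators are actually visible is to query Nvidia's `nvidia-smi` tool. The helper below is a sketch (the function name is made up for illustration), and it degrades gracefully on machines without Nvidia drivers:

```python
import shutil
import subprocess

def list_gpus() -> list[str]:
    """Return GPU names reported by nvidia-smi, or [] if unavailable."""
    if shutil.which("nvidia-smi") is None:
        return []  # no NVIDIA driver/tooling on this host
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except subprocess.CalledProcessError:
        return []  # tool present but query failed (e.g., no devices)
    return [line.strip() for line in out.splitlines() if line.strip()]

if __name__ == "__main__":
    gpus = list_gpus()
    print(gpus or "no NVIDIA GPUs visible")
```

On the two-accelerator configuration Considine describes, this would be expected to report two `Tesla V100-PCIE` entries.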

IBM Corp. (NYSE: IBM) and Nvidia Corp. (Nasdaq: NVDA) are each eager to cash in on the push toward AI and machine learning, and the two companies have positioned GPUs, rather than more traditional x86 processors, as a more efficient and powerful way to support these compute-intensive workloads.

"This new IBM Cloud service delivers near-instant access to the most powerful GPU technologies to date, enabling enterprises, data scientists and researchers from organizations including NASA Frontier Development Lab and SpectralMD to train deep learning models and create innovative cloud-native applications that address complex problems," Considine wrote in the January 31 post.


IBM is also using NVLink 2.0, Nvidia's high-speed interconnect, in its AC922 Power Systems servers, which pair it with the Power9 processor and are designed for companies looking to build out on-premises clouds for AI and HPC workloads. (See IBM's Power9-Based AC922 System Designed for AI Workloads.)

IBM is not the only company looking to offer GPU technology through the cloud. Google recently announced that it would give customers access to what it calls "preemptible" Nvidia GPUs through its cloud that can be used in short, 24-hour bursts. (See Google Cloud Offering 'Preemptible' GPUs Plus Price Cut.)
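Because preemptible capacity can be reclaimed at any point inside its (at most) 24-hour window, work run on it is typically written to checkpoint and resume rather than assuming it runs to completion. A minimal sketch of that pattern (the file path, step counts and checkpoint interval are illustrative):

```python
import json
import os
import tempfile

# Illustrative checkpoint location; real jobs would use durable storage.
CKPT = os.path.join(tempfile.gettempdir(), "preemptible_ckpt.json")

def load_checkpoint() -> int:
    """Resume from the last saved step, or start at 0."""
    if os.path.exists(CKPT):
        with open(CKPT) as f:
            return json.load(f)["step"]
    return 0

def save_checkpoint(step: int) -> None:
    # Write-then-rename so a preemption mid-save can't corrupt the file.
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step}, f)
    os.replace(tmp, CKPT)

def run(total_steps: int = 100, every: int = 10) -> int:
    """Do work in resumable units, checkpointing periodically."""
    step = load_checkpoint()
    while step < total_steps:
        step += 1  # stand-in for one unit of real work
        if step % every == 0:
            save_checkpoint(step)  # cheap insurance against preemption
    return step
```

If the instance is preempted, a fresh run on a new instance calls `load_checkpoint()` and picks up where the last save left off, losing at most one checkpoint interval of work.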

Google (Nasdaq: GOOG) noted at the time that companies could access the GPU technology for AI and machine learning applications.


— Scott Ferguson, Editor, Enterprise Cloud News. Follow him on Twitter @sferguson_LR.

[email protected] 2/23/2018 | 4:57:11 PM
Re: Custom chips... Very true, and even the big tech players are clawing at each other daily, impacting their share and leadership in different markets. I don't think anyone would have predicted what Amazon is doing today across retail, B2B and B2C. It will be interesting to see how they all evolve and use each other's technology for leadership in different markets.
mhhfive 2/23/2018 | 3:09:18 PM
Re: Custom chips... > " a bet on who can get to market sooner with the most practical solution..."

Certainly so. There's almost no way to predict what will be the "best" solution as tech developments happen. So not having all the eggs in one basket is part of making sure you're not totally left behind. Only disruptors can afford to bet everything on a single approach -- and Google/IBM/Apple/etc aren't startups anymore. 
[email protected] 2/21/2018 | 12:11:00 PM
Re: Custom chips... It may be a bet on who can get to market sooner with the most practical solution. Many companies make simultaneous investments when they are not sure they can bring a technology to market quickly enough to impact their competitive position.
mhhfive 2/5/2018 | 3:11:34 PM
Custom chips... Google and IBM both have their own custom chips for AI tasks, so it's interesting to see that they're still supporting Nvidia hardware over their own. Sure, their own AI hardware is pretty immature, but it's an interesting strategy to support other hardware and at the same time develop proprietary solutions. It's all part of the "we don't know what will happen, so let's throw everything at the wall to see what sticks" strategy that companies are adopting more and more (when they have the resources to do so).
[email protected] 1/31/2018 | 10:27:30 PM
Re: Access Scott interesting so I expect the pricing will be tiered based on needs?
Susan Fourtané 1/31/2018 | 7:28:27 PM
Autonomous vehicles Nvidia is also advancing in the autonomous vehicle space thanks to GPUs and advanced machine learning. A vehicle automatically taught itself by watching how a human drove. This was, of course, in a responsible environment where the human was doing things the right way. As always, how autonomous entities will react and act in the future depends entirely on how humans feed their source of knowledge.
Susan Fourtané 1/31/2018 | 7:13:48 PM
Holodeck GPUs are certainly changing the way we work. One of my favourite Nvidia projects is the Holodeck. It's like Star Trek's Holodeck, but instead of being for recreational purposes it's for collaboration. Holodeck is going to be useful for companies to simulate and experiment without having to build the physical elements. It was demonstrated using the example of a car manufacturer at Nvidia's GPU Conference, and also at a VR conference in London last year.
Scott_Ferguson 1/31/2018 | 3:39:39 PM
Re: Access @maryam: It helps with the pricing. You set your workload, run your apps and then move on. It's more for research than for, say, your everyday run-of-the-mill AI apps.

[email protected] 1/31/2018 | 3:17:31 PM
Access Scott why is Google limiting access to 24-hour bursts?