Equinix rolls out managed AI service with Nvidia

Gigi Onag

January 25, 2024

Interior of a data center
(Source: Brett Sayles/Pexels)

Data center operator Equinix, in collaboration with Nvidia, has rolled out a fully managed private cloud service that allows companies to quickly build and run their own massive AI models.

Under the partnership, Equinix will install and operate a company's privately owned Nvidia infrastructure at its International Business Exchange data centers. Corporate customers buy the Nvidia systems and pay Equinix to operate them on their behalf.

The service, now commercially available, is built on Nvidia DGX systems, Nvidia networking and Nvidia AI software.

According to Charles Meyer, president and CEO of Equinix, companies need adaptable, scalable hybrid infrastructure in their local markets to bring AI supercomputing to their data.

"Our new service provides customers a fast and cost-effective way to adopt advanced AI infrastructure that's operated and managed by experts globally," he said, adding that the new service enables customers to operate their AI infrastructure in close proximity to their data.

DGX systems housed within Equinix's data centers are connected to the outside world through a high-speed private network, and the company also provides high-bandwidth interconnections to cloud services and enterprise service providers.

Working alongside its Nvidia partners, the Equinix managed services team underwent comprehensive training on how to build and operate the AI systems.

"Generative AI is transforming every industry," said Jensen Huang, founder and CEO of Nvidia. "Now, enterprises can own Nvidia AI supercomputing and software, paired with the operational efficiency of Equinix management, in hundreds of data centers worldwide."

Without disclosing names, Equinix said there are already enterprise customers using the new managed services – many of which operate in industries such as biopharma, financial services, software, automotive and retail.

These customers are building AI Centers of Excellence to provide a strategic foundation for a broad range of large language model (LLM) use cases. These include accelerating time to market for new medications, developing AI copilots for customer service agents and building virtual productivity assistants.

Owning their AI infrastructure

In October, IDC predicted enterprise spending on generative AI (genAI) software, infrastructure hardware and IT services would reach nearly $16 billion worldwide in 2023 and grow to $143 billion by 2027.

The technology research firm said generative AI investments will follow a natural progression over the next several years as organizations move from early experimentation, to aggressive buildout around targeted use cases, to widespread adoption across business activities, with genAI use extending to the edge.

Equinix's managed AI deal with Nvidia comes at a time when companies in Asia-Pacific are showing interest in owning their AI computing system for privacy and security reasons.

"Today, as we talk to enterprise customers around the world, one of their number one concerns and ideas around AI is being able to own their own model and really own their own future," Charlie Boyle, Nvidia vice president of DGX systems, said during a press briefing yesterday.

He added that many companies in Asia-Pacific are rapidly expanding their use of the technology but lack the in-house expertise to build out their own large language models.

He pointed out that most companies need to keep their data close to the AI processing they are trying to accomplish.

"The AI model, the AI execution has to be very close to the data. And all those elements come together in customers wanting to do AI, wanting to do it fast, securely, and near their data. But many times they're lacking either the data center space or the internal expertise of how to manage all of that," Boyle said.

About the Author(s)

Gigi Onag

Senior Editor, APAC, Light Reading

Gigi Onag is Senior Editor, APAC, Light Reading. She has been a technology journalist for more than 15 years, covering various aspects of enterprise IT across Asia Pacific.

She started with regional IT publications under CMP Asia (now Informa), including Asia Computer Weekly, Intelligent Enterprise Asia, Network Computing Asia and Teledotcom Asia. This was followed by stints with Computerworld Hong Kong and sister publications FutureIoT and FutureCIO. She has contributed articles to South China Morning Post, TechTarget and PC Market, among others.

She interspersed her career as a technology editor with a brief sojourn into public relations before returning to journalism, joining the editorial team of Mix Magazine, a MICE publication, and its sister publication Business Traveller Asia Pacific.

Gigi is based in Hong Kong and is keen to delve deeper into the region’s wide wild world of telecoms.
