Google Cloud's TPUs have competitive edge in race to rival NVIDIA's AI chips – Omdia

2024 is likely to see as much as $6 billion worth of Google Cloud TPUs shipped to the company's data centers.

June 13, 2024

1 Min Read

A new Omdia research report, Checking in with hyperscalers' AI chips: Spring 2024, finds that Google has taken a clear lead among the hyperscale cloud providers in their efforts to compete with NVIDIA in AI hardware. 2024 is likely to see as much as $6 billion worth of Google Cloud TPUs shipped to the company's data centers, where they support both in-house projects such as Gemini, Gemma, and Search, and customer workloads through Google Cloud Platform.

All three of the major hyperscale players now have a custom AI accelerator chip, but details of their commercial success or failure tend to be closely held. However, the hyperscalers all rely on at least one of a group of companies that specialize in semi-custom silicon projects, such as Broadcom, Marvell, Alchip, or Arm plc's Neoverse CSS service. Close examination of these partners' financial reporting and public statements makes it possible to identify their customers and link them to the partners' revenue numbers.

On that basis, Omdia finds that Google Cloud's TPUs are doing distinctly better than the competing efforts of Microsoft Azure, Amazon Web Services, and Meta Platforms.

One question yet to be resolved, though, is the identity of "Customer C", a US-based cloud computing company whose AI chip is set to ramp in 2026 and which is not one of the three majors.

Read the full press release here.
