Nvidia's fat margins are a worrying sign of its market power

The chipmaker seems to be the only beneficiary so far of all the investments being made to support generative artificial intelligence.

Iain Morris, International Editor

August 24, 2023

Nvidia's Jensen Huang predicts the start of a new computing era. (Source: Nvidia)

If proof were needed that the hype about generative artificial intelligence is spinning wildly out of control, it has come from Nvidia. The chipmaker at the heart of the genAI action was expected to publish a good set of results, but not this good. Sales for the second quarter doubled, to $13.5 billion, compared with a year ago. Net profit was up an astonishing 843%, to $6.2 billion. Nvidia's share price, already up 229% so far this year, gained another 6.6% in after-hours trading.
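For anyone who wants to check the arithmetic, the year-ago baselines those percentages imply can be backed out in a few lines (the baselines below are inferred from the article's percentages, not copied from Nvidia's filings):

```python
# Back-of-the-envelope check on Nvidia's reported Q2 figures.
# Year-ago baselines are inferred from the stated percentages,
# not taken from Nvidia's filings.
q2_revenue = 13.5e9        # reported Q2 sales, USD
q2_net_profit = 6.2e9      # reported Q2 net profit, USD

implied_prior_revenue = q2_revenue / 2.0           # sales "doubled"
implied_prior_profit = q2_net_profit / (1 + 8.43)  # profit "up 843%"

print(f"Implied year-ago revenue: ${implied_prior_revenue / 1e9:.1f}B")  # ~$6.8B
print(f"Implied year-ago profit:  ${implied_prior_profit / 1e9:.2f}B")   # ~$0.66B
```

A year ago, in other words, Nvidia was earning in a quarter roughly a tenth of what it earns now.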

During the earnings call, Jensen Huang, Nvidia's founder and CEO, left nitty-gritty questions about financial performance to Colette Kress, Nvidia's CFO, and did his best to feed the excitement with the sagacity of an old philosopher. "A new computing era has begun," he pronounced, according to a transcript issued by The Motley Fool. It's one, he said, that will be characterized by a shift from general-purpose to accelerated computing and the rise of genAI. And Nvidia is cleaning up.

Its latest results should raise a few eyebrows among customers, most of whom work for the hyperscalers buying Nvidia's graphics processing units (GPUs) in bulk. The most interesting detail is that Nvidia managed to double sales without incurring significant additional costs, which explains why its net income rocketed. This time last year, when nobody, including Huang, was talking about genAI, Nvidia reported a gross margin of 43%. A year later, it tops 70%.
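The compounding effect of those two numbers is easy to see (the revenue figures below are approximations drawn from the article, not exact filing figures):

```python
# Rough illustration of how doubled revenue compounds with a higher
# gross margin; inputs are approximations from the article.
prior_revenue, prior_margin = 6.7e9, 0.43     # ~year-ago quarter
latest_revenue, latest_margin = 13.5e9, 0.70  # latest quarter

prior_gross = prior_revenue * prior_margin    # ~$2.9B
latest_gross = latest_revenue * latest_margin # ~$9.5B

print(f"Gross profit grew ~{latest_gross / prior_gross:.1f}x")  # ~3.3x
```

Double the revenue at a 27-point-higher margin and gross profit more than triples; add costs that barely moved and the 843% net income jump stops looking mysterious.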

Nvidia is now the tech equivalent of an oil producer during an energy shortage, seemingly able to charge what it wants and pocket outrageous profits. It's worse, in fact, because no single oil producer has ever been this dominant. Figures last year put Nvidia's share of the market for data-center GPUs at more than 90%, and it looks no less powerful today.

Ludicrous valuations

The problem for Nvidia is that nobody else in the genAI business is making any serious money from it. Richard Windsor, a former analyst with Nomura who now runs a research company called Radio Free Mobile, said in a blog this week that valuations attached to genAI startups are based on "ludicrous" ideas about consumer willingness to pay for their services. Unless there is a mass market of people happy to spend $20 each a month on genAI applications, the bubble could rapidly deflate. Eventually, that would catch up with Nvidia.

In the meantime, it is generating more than half its sales from a segment it calls CSPs (cloud service providers), meaning the likes of AWS, Google Cloud and Microsoft Azure. Another big chunk comes from "consumer Internet" companies, including Meta. But the overall pie of hyperscaler spending is not growing much, noted Vivek Arya, an analyst with Bank of America, on yesterday's earnings call. With Nvidia forecasting third-quarter sales of $16 billion (which implies a 170% increase on last year's figure), there is some concern about the sustainability of demand.
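That 170% figure is simple to verify against the guidance (a quick back-of-the-envelope check, assuming the percentage is exact):

```python
# Sanity check on the Q3 guidance: $16B guided revenue at a 170%
# year-on-year increase implies last year's Q3 figure.
guided_q3 = 16e9
implied_growth = 1.70

implied_prior_q3 = guided_q3 / (1 + implied_growth)
print(f"Implied year-ago Q3 revenue: ${implied_prior_q3 / 1e9:.1f}B")  # ~$5.9B
```

That works out to roughly $5.9 billion, consistent with what Nvidia reported for the same quarter last year.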

"There's about $1 trillion worth of data centers, call it a quarter of trillion dollars of capital spend each year," said Huang in response. "You're seeing the data centers around the world are taking that capital spend and focusing it on the two most important trends of computing today, accelerating computing and generative AI. And so, I think this is not a near-term thing."

Even so, what's unclear at this stage is how much the demand for Nvidia's products can realistically grow outside the hyperscaler sector. In the telecom market, for instance, Nvidia's pitch is about combining accelerated computing for the radio access network with artificial intelligence, all handled on the same GPU platform installed at the edge of the network. Japan's SoftBank is the flagship customer (although it has shared no details of its plans), but other telcos, including Vodafone, have played down interest in managing genAI's large language models (LLMs) on their premises.

"The need for on-prem LLM – the business case is not there yet," said Scott Petty, Vodafone's chief technology officer, at a recent press briefing. "There may be areas where we have significant amounts of data or a need to develop our own LLM, but we're not there today and I think using the LLM capabilities of hyperscalers or companies that have invested will meet most of the use cases we have."

Nvidia's share price ($). (Source: Google Finance)

Nvidia also faces challengers on the networking side. Thanks to its $6.9 billion takeover of Mellanox in 2019, Nvidia dominates the market for InfiniBand, a technology that provides data-center connectivity for high-performance computing infrastructure. Its efficiency makes it a "no brainer" for LLMs, said Huang yesterday. But Rosenblatt Securities thinks it will be slowly displaced for AI networking by Ethernet, due largely to that technology's scalability and multi-vendor ecosystem.

"For large-scale AI applications, requiring tens of thousands of GPUs to be connected, Ethernet is the most viable option," said Rosenblatt in a recent report. Nvidia is hedging its bets with Spectrum X, an Ethernet product it recently announced, but it will be up against rivals including Broadcom, Cisco and Marvell in this sector.

The implications of inference

If many telcos display little appetite for investing in GPUs, other organizations may feel differently, of course. For the first quarter, about 30% of Nvidia's revenues came from the enterprise segment, distinct from CSPs and consumer Internet players, although the company was unwilling to share the percentage for the second quarter.

There might also be an opportunity with the emergence of what Matt Ramsey, an analyst with TD Cowen, called "large model inference." With this, a model would generate outputs from new data after its initial training phase, the stage known to require vast computing resources. Inference could feasibly make it easier to run AI applications outside hyperscaler-like facilities. There is even the possibility that AI processing shifts from cloud infrastructure to end-user devices, said Windsor in his blog.
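The distinction matters because the two phases have very different compute profiles: training updates a model's weights and is the part that soaks up data-center-scale resources, while inference simply runs the trained model on new inputs. A minimal sketch in PyTorch, using a toy model that is purely illustrative:

```python
import torch

# Toy model, purely illustrative; not anything Nvidia or its
# customers actually deploy.
model = torch.nn.Linear(8, 2)

# Training: gradients are computed and weights updated. At LLM scale,
# this is the phase that demands vast GPU clusters.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 8), torch.randn(32, 2)
optimizer.zero_grad()
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()

# Inference: a single forward pass with gradients disabled; far
# cheaper, and feasible outside hyperscaler-class facilities.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8))
```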

That would inevitably make gadgets more expensive, he pointed out, but it could also improve the overall economics. Long before that happens, though, a lot of startups and business plans will probably fall apart. Nvidia will still be standing, but it might not look quite so resplendent when genAI disillusionment kicks in.


— Iain Morris, International Editor, Light Reading


About the Author

Iain Morris

International Editor, Light Reading

Iain Morris joined Light Reading as News Editor at the start of 2015 -- and we mean right at the start. His friends and family were still singing Auld Lang Syne as Iain started sourcing New Year's Eve UK mobile network congestion statistics. Prior to boosting Light Reading's UK-based editorial team numbers (he is based in London, south of the river), Iain was a successful freelance writer and editor who had been covering the telecoms sector for the previous 15 years. His work has appeared in publications including The Economist (classy!) and The Observer, besides a variety of trade and business journals. He was previously the lead telecoms analyst for the Economist Intelligence Unit, and before that worked as a features editor at Telecommunications magazine. Iain started out in telecoms as an editor at consulting and market-research company Analysys (now Analysys Mason).

