Intel desperate for an edge over Nvidia with AI inferencing

At CES, Intel unveiled a new portfolio of edge computing silicon intended for AI inferencing at enterprise locations. But the company remains stuck in Nvidia's AI shadow.

Mike Dano, Editorial Director, 5G & Mobile Strategies

January 6, 2025

Intel headquarters. (Source: Intel)

Intel is hoping its latest chipset salvo against AI behemoth Nvidia will help rejuvenate its fortunes. But the trendlines are not promising.

At the CES trade show this week, Intel spent much of its time talking about its new chips for PCs and other consumer gadgets. That's no surprise: the Consumer Electronics Show is, as the name suggests, a show for consumer electronics.

Intel wasn't alone in that consumer focus: Qualcomm, for example, talked about how its chips can add AI smarts to "smart home" appliances ranging from TVs to refrigerators. And Nvidia is expected to use its Monday evening keynote to hype AI-powered robots, according to the financial analysts at BofA Global Research. "The challenge in our view is ... making the products reliable enough, cheap enough and pervasive enough to spawn credible business models," they wrote.

But Intel also announced new "edge" silicon: chips intended for servers running in hospitals, retail stores, factories and other locations that sit between big data centers and end-user devices. Such locations are becoming increasingly important to telecom network operators hoping to sell AI capabilities, private wireless networks, security offerings and other services to those enterprises.

The pitch

"Intel has been powering the edge for decades," said Michael Masci, VP of product management in Intel's edge computing group, during a media presentation last week.

Now, according to Masci, AI is beginning to expand the edge opportunity through inference. 

Inferencing in AI refers to the process by which a trained AI model makes predictions or decisions based on new data – essentially, AI's ability to apply learned knowledge to fresh inputs in real time. Edge computing plays a critical role here because it brings inferencing closer to users, which lowers latency (meaning near-instant AI responses) and can also reduce bandwidth costs and help ensure privacy and security.
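To make that concrete, here's a minimal sketch of what edge inferencing looks like in code, using OpenVINO, Intel's open-source inference toolkit. (The toolkit choice is our illustration, not something Intel's announcement specifies; the model file, device and input shape are hypothetical stand-ins.)

```python
# A minimal sketch of edge inferencing with OpenVINO, Intel's open-source
# inference toolkit. The model file, device choice and input shape below are
# hypothetical stand-ins; any ONNX or OpenVINO IR model follows the same flow.
import numpy as np
import openvino as ov

core = ov.Core()

# Training happened elsewhere (e.g., in a big data center); the edge box
# simply loads the finished model...
model = core.read_model("model.onnx")        # hypothetical pre-trained model
compiled = core.compile_model(model, "CPU")  # ...and compiles it for local hardware

# "New data" arriving at the edge: here, a dummy image-sized tensor standing
# in for a camera frame. No round trip to a distant cloud is needed.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Inference: apply the model's learned knowledge to the fresh input, locally.
result = compiled(frame)
print(result[compiled.output(0)].shape)
```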

"Companies want more local compute," said Intel's Masci.

During his presentation, Masci boasted of a growing number of AI-friendly Intel customers. Ad-tech company Quividi, he said, is using Intel silicon to do both AI and video processing. And Network Optix is delivering video monitoring services with lower latency and less power draw.

Masci specifically called out Nvidia's chips, arguing Intel's new silicon lineup supports up to 5.8x faster performance and better efficiency per watt.

"AI inference at the edge is the next major hotbed for AI innovation and implementation," he said.

The competition

But Nvidia is already an AI – and inference – powerhouse. Company officials recently confirmed that 40% of Nvidia's revenues come from AI inference rather than AI training efforts in big data centers.

"Inference is super hard. And the reason why inference is super hard is because you need the accuracy to be high on the one hand. You need the throughput to be high so that the cost could be as low as possible, but you also need the latency to be low," explained Nvidia CEO Jensen Huang during his company's recent quarterly conference call.

"Our hopes and dreams is that someday, the world does a ton of inference," he continued. "And that's when AI has really succeeded, right? It's when every single company is doing inference inside their companies for the marketing department and forecasting department and supply chain group and their legal department and engineering, and coding, of course. And so we hope that every company is doing inference 24/7."

Nvidia's overall lead in AI is hard to dispute. In its July quarter the company notched $30 billion in revenue, a year-over-year increase of 122%.

Meanwhile, Intel continues to struggle. The Wall Street Journal recently pointed out that Intel rival AMD surpassed Intel in 2024 in terms of revenue for chips that go into data centers. "This is a stunning reversal: In 2022, Intel's data-center revenue was three times that of AMD," according to the publication.

And Intel's AI chip, Gaudi, didn't meet its revenue target of $500 million by the end of 2024.

Intel's edge business – the one chasing AI inferencing – sits apart from the one it runs for telecom operators. Specifically, Intel's Edge and Automotive operations now sit in its Client Computing Group (CCG) business unit, while Intel's chips for telecom operators reside inside its Network and Edge (NEX) business unit.

About the Author

Mike Dano

Editorial Director, 5G & Mobile Strategies, Light Reading

Mike Dano is Light Reading's Editorial Director, 5G & Mobile Strategies. Mike can be reached at [email protected], @mikeddano or on LinkedIn.

Based in Denver, Mike has covered the wireless industry as a journalist for almost two decades, first at RCR Wireless News and then at FierceWireless. He recalls once writing a story about the transition from black-and-white to color screens on cell phones.
