MWC24 – Most of the investment in AI so far has focused on running the technology inside massive data centers. Intel and Qualcomm are working to change that.
In separate announcements here in Barcelona, the companies unveiled new products designed to run AI technology in edge computing scenarios – and on smartphones.
Their message is that AI should not be restricted to cloud computing platforms from Amazon, Microsoft and Google. Instead, the technology should be made available directly to network operators and everyday consumers.
"AI is everywhere," said Dan Rodriguez, GM of the network and edge solutions group at Intel, during a recent media briefing.
Such a strategy clearly favors the offerings from vendors like Intel and Qualcomm. Intel sells the chips that power many of the servers running inside operators' networks. Qualcomm, meanwhile, wants to sell those kinds of chips too, but its bread-and-butter business involves selling the chips that power end users' smartphones.
Both companies are keen to broaden the AI halo so that it can cover their products too.
Bringing AI to MWC
First up is Qualcomm, which has been touting the ability of its Snapdragon chipset platform to run AI. In its latest announcement, the company unveiled its "AI Hub" for developers. It includes "pre-optimized AI models" that developers can use to build text-, voice- and image-based AI applications for devices powered by its Snapdragon chips, according to the company.
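Qualcomm hasn't spelled out the developer workflow in this announcement, but the general idea of running a pre-optimized model directly on a handset can be sketched in a few lines. The snippet below is an illustrative assumption, not the AI Hub's actual tooling: it uses ONNX Runtime as a stand-in, prefers the Qualcomm-backed QNN execution provider when it is present, and the model file name and input shape are made up for the example.

```python
# Hypothetical sketch: on-device inference with a pre-optimized model via ONNX Runtime.
# The model file ("image_classifier.onnx"), input shape and provider choice are
# illustrative assumptions; this is not Qualcomm's AI Hub API.
import numpy as np
import onnxruntime as ort

# Prefer the Qualcomm NPU-backed provider if available, otherwise fall back to CPU.
available = ort.get_available_providers()
providers = (["QNNExecutionProvider", "CPUExecutionProvider"]
             if "QNNExecutionProvider" in available
             else ["CPUExecutionProvider"])

session = ort.InferenceSession("image_classifier.onnx", providers=providers)

input_name = session.get_inputs()[0].name
dummy_image = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input

outputs = session.run(None, {input_name: dummy_image})
print("Predicted class index:", int(np.argmax(outputs[0])))
```

The point of the sketch is simply that the heavy lifting happens on the device rather than in a remote data center, which is the "on-device intelligence" half of the hybrid approach Qualcomm is describing.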
"The future of generative AI is hybrid, with on-device intelligence working together with the cloud to provide greater personalization, privacy, reliability and efficiency," Qualcomm CEO Cristiano Amon said in a release.
Separately, Intel announced that its edge platform is now broadly available and can run a variety of AI technologies on its general-purpose servers, meaning that more expensive graphics processing units (GPUs) from suppliers like Nvidia are not required.
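To make the "no GPU required" claim concrete, here is a minimal sketch of CPU-only inference. It does not use Intel's edge platform; the model, library and parameters are assumptions chosen purely to show that a small language model can serve requests on an ordinary general-purpose server.

```python
# Illustrative sketch only: CPU-only text generation with a small open model,
# standing in for the idea of serving AI workloads without a GPU.
# Model choice and generation settings are assumptions, not Intel's platform.
from transformers import pipeline

# device=-1 forces the pipeline to run on the CPU.
generator = pipeline("text-generation", model="distilgpt2", device=-1)

result = generator("Edge AI lets operators", max_new_tokens=30, do_sample=False)
print(result[0]["generated_text"])
```

Throughput on a CPU is obviously far lower than on a dedicated accelerator, which is why this approach is pitched at edge inference rather than large-scale training.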
Catching the investment wave
The announcements from Qualcomm and Intel follow massive investments in AI-capable data centers, including those running Nvidia chips.
According to the latest figures from Synergy Research Group, AI helped drive enterprise spending on cloud infrastructure to $68 billion worldwide in the third quarter of 2023, up by a whopping $10.5 billion from the third quarter of 2022.
Experts argue that much of that spending is focused on large language models (LLMs), which require vast computing resources available only in large data centers.
"AI tasks don't normally have the same latency requirements as many other workloads, enabling hyperscalers to focus AI-oriented investments in their core data centers," John Dinsdale, an analyst with Synergy Research Group, told Light Reading last year.
However, the situation could change as those models move from training to "inference," the stage geared toward the speedy delivery of AI capabilities to users. Inference workloads could favor an edge computing design that puts computing resources geographically closer to end users. Such a design would speed up the delivery of AI services by lowering latency.
Companies like American Tower and Akamai are now investing in more distributed computing resources in anticipation of that shift.