Intel Reinventing Xeon for AI – but Is It Too Late?

SANTA CLARA, Calif. -- In a market where it's playing catch-up, Intel is aiming to make its Xeon chip the standard processor for artificial intelligence, challenging entrenched incumbent Nvidia and a field of emerging specialist AI hardware companies.

Intel Corp. (Nasdaq: INTC) continues to optimize Xeon for artificial intelligence, and is working with both Amazon Web Services Inc. and Facebook to deploy Xeon for AI workloads, said Navin Shenoy, Intel executive vice president and general manager of the Data Center Group, speaking at the company's Data Centric Innovation Summit here on Wednesday.

Intel reported $1 billion in revenues during 2017 from customers running AI on Xeon processors in data centers, which represents rapid growth, Shenoy noted.

Intel expects more AI improvements with its upcoming Xeon processor, codenamed Cascade Lake, which is on track to ship at the end of the year. It will have an integrated memory controller to support Intel's new Optane persistent memory, security upgrades to protect against the Spectre and Meltdown vulnerabilities, and general enhancements that are expected to deliver an 11x performance improvement over the Skylake processor that is already shipping.


On Wednesday, Intel announced a new AI extension to Xeon that it calls Intel Deep Learning Boost, which will ship with the Cascade Lake processor this year.

Intel will follow Cascade Lake with Cooper Lake at the end of 2019, a 14 nanometer chip with performance improvements over its predecessor. Ice Lake will then take the chip down to a 10 nanometer process in 2020, Shenoy said.

Xeon was not well optimized for AI two years ago, but Intel has since improved its inference performance -- running trained AI models in production, as opposed to training them -- by 5.4x in Skylake.

A new chip optimized for AI, the Intel Nervana NNP L-1000, is set for release in 2019. The chip will be optimized for memory, bandwidth, utilization and power; boast 3-4x the training performance of a first-generation NNP product; and provide high-bandwidth, low-latency interconnects, Intel says.

It will launch as the market starts to ramp up significantly: Intel believes the market for AI chips is worth about $2.5 billion per year currently, growing to $8-$10 billion by 2022 at a 30% compound annual growth rate (CAGR).

But Intel faces formidable competition as it tries to capture that spend, according to Ovum Ltd. principal analyst Michael Azoff. "Nvidia, with its GPUs, is the gorilla in the market for deep learning. Intel missed the boat a little on that," he said.

It's unclear whether Intel can catch up, with Nervana not coming until late next year, Azoff said. And Nvidia Corp. (Nasdaq: NVDA) isn't the only competition here: startup Graphcore has an AI-optimized processor due out next month, designed to place high-bandwidth memory close to the compute functions to speed up the data-intensive workloads of artificial intelligence.

Linley Gwennap, president and principal analyst of The Linley Group, agreed with Azoff, noting additional emerging competition from Wave Computing and Cerebras.

"Intel obviously has a lot of power to bring to the market, so it's not too late," Gwennap said. "But it would have been better if it happened sooner."

Intel's Naveen Rao

Intel sees AI emerging as the "critical workload" for computing, said Naveen Rao, Intel corporate vice president and general manager of the Artificial Intelligence Products Group. "Computing is AI. The original computers were built to recapitulate the abilities of the brain," Rao said.

AI development requires formidable expertise and is time-consuming, which is currently a limiting factor in market growth, Rao said. "Overall, we care about shrinking the development time of AI and getting it to market faster," he said.

Intel is working with open source communities to beef up the AI tool stack and speed up development. "Today, data scientists and AI researchers are a few thousand around the world. We want to get to enabling the true enterprise developer -- the 25, 35 million population, where we can actually see massive scale for adoption," Rao said.

To that end, Intel launched an AI builders program in May, which has 95 members across a number of vertical industries: healthcare; financial services; retail; transportation; news, media and entertainment; agriculture; legal and HR; and robotic process automation. The program also crosses horizontal technology functions: business intelligence and analytics; vision; conversational bots; AI tools and consulting; and AI platform-as-a-service.

— Mitch Wagner, Executive Editor, Light Reading
