Intel has seriously upped its game in server processors, introducing the new Xeon Scalable line today as it tries to ward off increasing competition from the likes of Nvidia.
Xeon Scalable -- previously code-named Skylake -- isn't a routine new-chip announcement from Intel. It represents a bigger generation-to-generation jump than normal, a case of Intel Corp. (Nasdaq: INTC) leapfrogging a step in its usual rhythm. That gives the Xeon Scalable some impressive stats, but more importantly, it sets the chip up for some promising high-end markets that rivals are eyeing closely.
Artificial intelligence, blockchain and NFV are on that list. In an event webcast from New York today, Intel paraded customers already using Xeon Scalable in production for those and other applications.
In the case of NFV, it was AT&T's John Donovan, chief strategy officer, who came on stage to extol the new chips. In production environments, AT&T is seeing a 30% performance improvement over previous Xeons, he said.
He said the real surprise, though, had more to do with AT&T itself, considering the carrier got hold of Xeon Scalable only in March. "It's unthinkable for us a year ago to get your latest technology and, within a matter of weeks, get that into production," Donovan said.
As the data center grows in importance, Intel's default status as the top chipmaker is coming under fire. Longtime rival Advanced Micro Devices Inc. (NYSE: AMD) is still in the mix, and chips based on the ARM architecture began pecking away at the data center a few years ago.
The biggest threat is arguably Nvidia Corp. (Nasdaq: NVDA). For seven quarters in a row, the company has increased its data center revenue. For its first quarter, which ended in April, Nvidia reported data center revenues of $409 million -- 21% of the company's total revenues and three times the level it saw a year prior.
Nvidia is also aggressively pursuing AI, where its graphics chips are proving useful. Intel is trying to counter, not just with Xeon but with last year's acquisition of deep learning specialist Nervana.
So, Intel has plenty of motivation to reassert its data center prowess and to highlight Xeon Scalable's role in a variety of hot-topic applications.
Hence, Peter Marsden of Thomson Reuters got on stage to talk about using Xeon Scalable not just for analytics but for blockchain-based access to financial information, a project undertaken with startup R3. (Intel has not only invested in R3 but also announced today a new technology collaboration with the startup.)
On the AI front, Xeon Scalable "can do both training and inference and do them well," said Lisa Spelman, vice president of marketing for Intel's Data Center Group. The ability to do both is important. Training is the data-hungry phase of machine learning, in which a model is taught by churning through large volumes of examples. Inference, which deep learning depends on, applies the trained model to new data and has to run in real time inside a given workload, Spelman said. It's harder, in other words.
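The training-vs-inference split Spelman describes can be illustrated with a toy model. This is a minimal sketch in pure Python using least-squares line fitting, not anything resembling Intel's or its customers' actual deep-learning workloads:

```python
# Illustrative only: a toy model showing the two phases Spelman contrasts.

def train(xs, ys):
    # Training: a data-intensive batch pass that fits model parameters
    # (here, the slope and intercept of a least-squares line).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def infer(model, x):
    # Inference: a cheap per-request evaluation of the trained model,
    # the kind of step that must run in real time inside a workload.
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3, 4], [2, 4, 6, 8])
print(infer(model, 5))  # → 10.0
```

Training touches every example at once; inference touches one input at a time, which is why the two phases stress hardware so differently.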
To support its AI claims, Intel presented a short video testimonial from Amazon Web Services Inc. -- both AWS and Google Cloud were among the pre-launch Xeon Scalable users -- and brought out Bronx-based Montefiore to discuss using deep learning in healthcare.
The leapfrog nature of Xeon Scalable's performance comes from several factors. One of the most interesting has to do with the way Intel networks between the processor cores.
This will sound familiar to old fiber jockeys: Intel has typically used a ring architecture to connect the cores on a chip. When the core count hit 28 for Xeon Scalable -- that's a lot, by the way -- the ring wasn't efficient enough. So, Intel is now using a mesh.
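The intuition is about hop counts: on a ring, the worst-case path between two cores grows linearly with the core count, while on a two-dimensional mesh it grows roughly with the square root. A back-of-the-envelope sketch, with illustrative numbers that are not Intel's actual on-die topology parameters:

```python
# Illustrative hop-count comparison: bidirectional ring vs. 2D mesh.
# These are idealized topologies, not a model of Intel's real uncore.
import math

def ring_worst_hops(cores):
    # Worst case on a bidirectional ring: halfway around.
    return cores // 2

def mesh_worst_hops(cores):
    # Worst case on a roughly square 2D mesh: corner to corner,
    # measured as Manhattan distance.
    side = math.ceil(math.sqrt(cores))
    rows = math.ceil(cores / side)
    return (side - 1) + (rows - 1)

for n in (8, 16, 28):
    print(n, ring_worst_hops(n), mesh_worst_hops(n))
```

At 28 cores, the idealized ring's worst case is 14 hops versus 9 for the mesh, and the gap keeps widening as core counts climb.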
The Xeon Scalable also makes use of a set of instructions called AVX-512 that's typically found in high-performance computing. This is particularly useful in security, where the faster instructions help lessen the burden that encryption typically adds, Spelman said.
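The reason wider vectors help is simple arithmetic: AVX-512 registers are 512 bits wide, so one vector instruction can operate on eight 64-bit lanes (or sixteen 32-bit lanes) at once. A hedged sketch of that lane math, with illustrative numbers rather than a model of real encryption throughput:

```python
# Illustrative lane arithmetic for 512-bit-wide vector registers.
# Not a benchmark; real crypto speedups depend on the algorithm,
# dedicated instructions, and memory bandwidth.
REG_BITS = 512

def lanes(element_bits):
    # How many elements of a given width fit in one register.
    return REG_BITS // element_bits

# e.g. a bulk XOR pass over a 4,096-byte buffer in 64-bit chunks:
scalar_ops = 4096 * 8 // 64           # one 64-bit operation at a time
vector_ops = scalar_ops // lanes(64)  # eight lanes per vector op
print(lanes(64), scalar_ops, vector_ops)  # → 8 512 64
```

The eight-fold reduction in instruction count is the best case; in practice the win on encryption workloads is smaller but still significant, which is the burden-lessening effect Spelman points to.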
— Craig Matsumoto, Editor-in-Chief, Light Reading