ARM Intros New Coherent Backplane

ARM has introduced its third-generation backplane in support of its ARMv8 architecture, designed to provide the performance demanded by next-generation applications, including virtual reality and autonomous vehicles.

Brian Santo, Senior editor, Test & Measurement / Components, Light Reading

September 27, 2016

2 Min Read

ARM Ltd. has introduced its third-generation backplane in support of its ARMv8 architecture, including on-chip interconnect and a memory controller.

The two pieces of intellectual property are, respectively, the ARM CoreLink CMN-600 Coherent Mesh Network and ARM CoreLink DMC-620 Dynamic Memory Controller. They are available immediately.

Applications that ARM is looking to enable include networking, server, storage, automotive, industrial and high-performance computing (HPC). Other imminent use cases include virtual reality and autonomous vehicles. All of these place new stresses on the network: demand for higher bandwidth, lower latency, sustained constant bit rates and more resources (including memory) at the edge.

The new architecture is designed to deliver those performance improvements. The idea is to enable a more efficient, scalable and coherent mesh architecture that combines ARM processors, nearly any type of accelerator (DSP, GPU, FPGA, etc.), controllers and system cache, with the new mesh network IP tying it all together.

The new memory controller is designed to work within this mesh architecture, providing access to DDR4 memory. The architecture also introduces a new approach to memory, which ARM calls agile system cache; its advantages include the ability to scale capacity and bandwidth in response to system requirements.


Using the new IP, an SoC can be built with 32, 64 or 128 cores. ARM and TSMC have a partnership to develop ICs at 7nm, and ARM believes a 64-core version will hit a "sweet spot" for that manufacturing technology in terms of die size and yield.

Moving from 16 cores to 32 cores roughly doubles compute power and throughput. Moving from 16 to 64 cores with the new mesh architecture, based on the two new IP elements, provides a 6X bump in compute power and a 5X improvement in throughput, while also offering what ARM says is the fastest path to DDR4 memory, with a reduction of up to 50% in latency.
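For a rough sense of what those figures imply, here is a back-of-envelope sketch in Python. The 2X, 6X and 5X numbers are ARM's claims; the per-core interpretation below is an illustrative assumption rather than anything ARM has published.

```python
# Back-of-envelope reading of ARM's headline scaling figures.
# The gain factors are ARM's claims; the per-core math is illustrative only.

baseline_cores = 16
configs = {
    32: {"compute_gain": 2.0, "throughput_gain": 2.0},  # ~2X, per ARM
    64: {"compute_gain": 6.0, "throughput_gain": 5.0},  # 6X / 5X, per ARM
}

for cores, gains in configs.items():
    core_ratio = cores / baseline_cores
    # If the gain came from core count alone, it would track core_ratio
    # (4X at 64 cores). Anything above that implies per-core improvement
    # from the new mesh interconnect and shorter memory path.
    per_core_uplift = gains["compute_gain"] / core_ratio
    print(f"{cores} cores: {gains['compute_gain']}X compute claimed, "
          f"~{per_core_uplift:.1f}X implied uplift beyond simple core scaling")
```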

The new architecture supports CCIX, the Cache Coherent Interconnect for Accelerators initiative backed by ARM, AMD, IBM, Xilinx, Mellanox, Qualcomm and Huawei.

— Brian Santo, Senior Editor, Components, T&M, Light Reading


About the Author(s)

Brian Santo

Senior editor, Test & Measurement / Components, Light Reading

Santo joined Light Reading on September 14, 2015, with a mission to turn the test & measurement and components sectors upside down and then see what falls out, photograph the debris and then write about it in a manner befitting his vast experience. That experience includes more than nine years at video and broadband industry publication CED, where he was editor-in-chief until May 2015. He previously worked as an analyst at SNL Kagan, as Technology Editor of Cable World and held various editorial roles at Electronic Engineering Times, IEEE Spectrum and Electronic News. Santo has also made and sold bedroom furniture, which is not directly relevant to his role at Light Reading but which has already earned him the nickname 'Cribmaster.'
