ARM Ltd. has introduced its third-generation backplane in support of its ARMv8 architecture, comprising on-chip interconnect and a memory controller and designed to provide the performance demanded by next-generation applications such as virtual reality and autonomous vehicles.
The two pieces of intellectual property are, respectively, the ARM CoreLink CMN-600 Coherent Mesh Network and ARM CoreLink DMC-620 Dynamic Memory Controller. They are available immediately.
Applications that ARM is looking to enable include networking, server, storage, automotive, industrial and high-performance computing (HPC). Other imminent use cases include virtual reality and autonomous vehicles. These all place new stresses on the network: demand for higher bandwidth, lower latency, sustained constant bit rates, and more resources (including memory) at the edge.
The new architecture is designed to meet those demands. The idea is to enable a more efficient, scalable and coherent mesh that combines ARM processors, nearly any type of accelerator (DSP, GPU, FPGA and so on), controllers and system cache, with the new mesh network IP tying it all together.
The new memory controller is designed to work within this mesh architecture, providing access to DDR4 memory. The architecture also includes a new approach to memory that ARM calls agile system cache, whose advantages include the ability to scale capacity and bandwidth in response to system requirements.
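To make the coherent-mesh idea a little more concrete, the short Python sketch below models a toy grid of compute, accelerator, cache and memory-controller nodes and counts routing hops between them. The layout, node types and routing assumption are ours for illustration; this is not a description of ARM's actual CMN-600 or DMC-620 implementations.

```python
# Toy model of a coherent mesh interconnect: nodes of different types sit on a
# 2-D grid and traffic between them costs one hop per grid step. Illustrative
# only; node placement and routing are our assumptions, not ARM's design.
from dataclasses import dataclass

@dataclass(frozen=True)
class MeshNode:
    kind: str   # e.g. "cpu", "gpu", "dsp", "cache", "memctrl"
    x: int      # grid column
    y: int      # grid row

def hop_count(a: MeshNode, b: MeshNode) -> int:
    """Hops between two nodes, assuming simple dimension-ordered (X-Y) routing."""
    return abs(a.x - b.x) + abs(a.y - b.y)

# A small 4x4 mesh mixing CPU clusters, accelerators, shared cache slices and
# memory controllers along one edge -- purely illustrative placement.
mesh = [
    MeshNode("cpu", 0, 0), MeshNode("cpu", 1, 0), MeshNode("cache", 2, 0), MeshNode("memctrl", 3, 0),
    MeshNode("cpu", 0, 1), MeshNode("cpu", 1, 1), MeshNode("cache", 2, 1), MeshNode("memctrl", 3, 1),
    MeshNode("gpu", 0, 2), MeshNode("dsp", 1, 2), MeshNode("cache", 2, 2), MeshNode("memctrl", 3, 2),
    MeshNode("cpu", 0, 3), MeshNode("fpga", 1, 3), MeshNode("cache", 2, 3), MeshNode("memctrl", 3, 3),
]

cpu = mesh[0]
nearest_cache = min((n for n in mesh if n.kind == "cache"), key=lambda n: hop_count(cpu, n))
nearest_mc = min((n for n in mesh if n.kind == "memctrl"), key=lambda n: hop_count(cpu, n))
print(f"CPU at ({cpu.x},{cpu.y}): {hop_count(cpu, nearest_cache)} hops to the nearest cache slice, "
      f"{hop_count(cpu, nearest_mc)} hops to the nearest memory controller")
```

The point of the toy is only that spreading shared cache slices and memory controllers across the mesh keeps hop counts, and hence latency, bounded as the grid grows.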
The new architecture supports SoCs with 32, 64 or 128 cores. ARM and TSMC have a partnership to develop ICs at 7nm, and ARM believes a 64-core version will hit a "sweet spot" for that manufacturing technology in terms of die size and yield.
Moving from 16 to 32 cores roughly doubles compute power and throughput. Moving from 16 to 64 cores with the new mesh architecture, built on the two new IP elements, delivers a 6X increase in compute power and a 5X improvement in throughput, along with the fastest path to DDR4 memory: a reduction of up to 50% in latency.
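For readers who want to check the arithmetic, the snippet below restates those quoted ratios against the 16-core baseline. The figures are ARM's claims; the code and variable names are ours.

```python
# Illustrative arithmetic only: the relative gains below are the figures ARM
# quotes for the new mesh architecture; the code and names are ours.
BASELINE_CORES = 16

# (core count, compute gain vs. 16 cores, throughput gain vs. 16 cores)
claimed_scaling = [
    (32, 2.0, 2.0),   # "roughly doubles compute power and throughput"
    (64, 6.0, 5.0),   # "6X increase in compute power, 5X improvement in throughput"
]

for cores, compute_gain, throughput_gain in claimed_scaling:
    core_ratio = cores / BASELINE_CORES
    print(f"{BASELINE_CORES} -> {cores} cores: {compute_gain:.0f}x compute, "
          f"{throughput_gain:.0f}x throughput "
          f"({compute_gain / core_ratio:.2f}x compute gain normalized by core-count ratio)")

# ARM also quotes up to a 50% reduction in latency on the path to DDR4 memory
# for the 64-core configuration.
```

Normalizing by the core-count ratio is a quick way to see that the claimed gains outstrip simple core scaling: the quoted 6X compute gain at 64 cores exceeds the 4X increase in core count, which is the improvement ARM attributes to the new mesh and memory path.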
The new architecture supports CCIX, the Cache Coherent Interconnect for Accelerators initiative backed by ARM, AMD, IBM, Xilinx, Mellanox, Qualcomm and Huawei.
— Brian Santo, Senior Editor, Components, T&M, Light Reading