Mirantis & Fujitsu Building 4-Petaflop Supercomputer & Research Cloud in Japan

The research cloud will run OpenStack, a sign of the software's future in specialized cloud settings such as high-performance computing, says Mirantis co-founder and CMO Boris Renski.

Mitch Wagner, Executive Editor, Light Reading

August 2, 2018

Mirantis and Fujitsu are teaming up to build a 4-petaflop supercomputer and OpenStack-based cloud at the Information Initiative Center of Hokkaido University in Japan. The system is expected to go online in December.

The Information Initiative Center provides facilities for academic research by university faculty, grad students and researchers throughout Japan. The Hokkaido University Academic Cloud's new supercomputer will improve the processor performance of the existing system by 20x, and the nationwide cloud system will be used for research on clouds and networks. The new facilities will be used for scientific and technological simulations, AI, big data and data science.

The cloud will be based on Mirantis Cloud Platform, the company's cloud software, which supports both Kubernetes and OpenStack.

The supercomputer will comprise two subsystems. Subsystem A will include 1,004 Fujitsu Server Primergy CX2550 M4 x86 servers running Intel Xeon Scalable processors. Subsystem B will run 288 Fujitsu Server Primergy CX1640 M1 x86 servers running Intel Xeon Phi processors. The nodes will be connected over the Intel Omni-Path Architecture high-speed interconnect for highly parallel performance.

Figure 1: The future of OpenStack is in specialized cloud settings, including high-performance computing, says Mirantis co-founder and CMO Boris Renski.


The cloud system will run 64 Fujitsu Server Primergy RX2540 M4 x86 servers with Nvidia Tesla V100 GPU computing cards. Cloud systems will be deployed to Hokkaido University and seven more remote sites in the Kanto, Kansai and Kyushu regions of Japan. The servers will connect to the SINET5 academic backbone network, which extends from Hokkaido to Kyushu.

The infrastructure will be divided into three zones: one for virtual machines used for testing; another for bare metal, managed with OpenStack Ironic; and a third for bare metal with Ironic plus GPUs for higher performance.
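As a rough illustration of how researchers might consume such a zoned layout, here is a minimal sketch using the openstacksdk Python client. The cloud name, network, zone names and flavor names are illustrative assumptions, not details of the Hokkaido deployment.

```python
# Minimal sketch: steering workloads at zones like the three described above,
# via the openstacksdk client. All names below are hypothetical examples.
import openstack

conn = openstack.connect(cloud="hokkaido-research")  # assumed clouds.yaml entry

def launch(name, zone, flavor, image="ubuntu-18.04"):
    """Boot a server into a specific availability zone."""
    img = conn.compute.find_image(image)
    flv = conn.compute.find_flavor(flavor)
    net = conn.network.find_network("research-net")  # assumed tenant network
    return conn.compute.create_server(
        name=name,
        image_id=img.id,
        flavor_id=flv.id,
        availability_zone=zone,  # VM zone, bare-metal zone, or bare-metal + GPU zone
        networks=[{"uuid": net.id}],
    )

launch("test-vm-01", zone="vm-test", flavor="m1.medium")        # VM test zone
launch("bm-node-01", zone="baremetal", flavor="bm.standard")    # Ironic-managed bare metal
launch("bm-gpu-01", zone="baremetal-gpu", flavor="bm.gpu.v100") # Ironic + GPU
```

In practice the zone split would be defined by the operator (for example, through host aggregates mapped to availability zones), with users simply picking the zone and flavor that match their workload, as in the sketch above.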

Hokkaido University has been working on supercomputing technology since 1963 (no, that's not a typo -- 55 years). Its previous supercomputer ran CloudStack on Hitachi hardware, Boris Renski, Mirantis CMO and co-founder, tells Light Reading.

The 4-petaflop benchmark would slot the supercomputer between the 35th and 36th most powerful systems in the world, according to the June rankings from Top500.org. (See US Topples China for Top Supercomputer Bragging Rights.)

The deal is a step in OpenStack's evolution from its original vision as a general-purpose computing infrastructure -- and Amazon Web Services killer -- to a platform for specialized applications, Renski says. (See Mirantis Has Seen the Future (Again) & This Time It's Spinnaker.)

"We at Mirantis believe the majority of general-purpose infrastructure is going to public cloud. The use case for on-premises, and solutions like OpenStack, is in the purpose-built infrastructure that is specifically tuned to particular business cases," Renski said.

In addition to high-performance computing, other specialized use cases that are a good fit for OpenStack include network functions virtualization (NFV), the edge cloud and ultra-large-scale environments, with more than 1,000 compute nodes. "When you start pushing 1,000 nodes, the economics makes sense for on-premises footprints," Renski says. These big environments are installed at software-as-a-service vendors, big ecommerce websites, and verticals that have been heavy users of compute infrastructure, such as financial services. "Guys that need a lot of compute often have their own infrastructure," Renski says.


— Mitch Wagner, Executive Editor, Light Reading


About the Author(s)

Mitch Wagner

Executive Editor, Light Reading

San Diego-based Mitch Wagner is many things. As well as being "our guy" on the West Coast (of the US, not Scotland, or anywhere else with indifferent meteorological conditions), he's a husband (to his wife), dissatisfied Democrat, American (so he could be President some day), nonobservant Jew, and science fiction fan. Not necessarily in that order.

He's also one half of a special duo, along with Minnie, who is the co-habitor of the West Coast Bureau and Light Reading's primary chewer of sticks, though she is not the only one on the team who regularly munches on bark.

Wagner, whose previous positions include Editor-in-Chief at Internet Evolution and Executive Editor at InformationWeek, will be responsible for tracking and reporting on developments in Silicon Valley and other US West Coast hotspots of communications technology innovation.

Beats: Software-defined networking (SDN), network functions virtualization (NFV), IP networking, and colored foods (such as 'green rice').

