A Supercomputer Small Enough to Fit in Your Cloud
For decades, physical high-performance computing (HPC) systems and supercomputers kept getting bigger and bigger with the power to crunch even larger data sets, whether that meant predicting the weather or protecting nuclear arsenals.
Across all those years, the Top 500 list, produced semiannually to coincide with the International Supercomputing Conference and the ACM/IEEE Supercomputing Conference, helped offer the best analysis of how large the supercomputer and HPC markets were growing.
The most recent list, published in November, shows that the Sunway TaihuLight, a supercomputer developed by China's National Research Center of Parallel Computer Engineering & Technology (NRCPC), is the world's top-ranking machine with a performance of 93 petaflops. (One petaflop is one quadrillion -- 1 million billion -- floating point calculations per second.) (See China Overtakes US in Latest Top 500 Supercomputer List.)
Besides showing that China has now eclipsed the US in terms of sheer number of supercomputers on the list, the latest Top 500 also shows that these machines are creeping toward an even loftier goal -- an exascale system capable of supporting a billion billion calculations a second, or one exaflop. (See US Energy Department Aims for Exascale.)
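To put those units in perspective, here is a quick back-of-the-envelope check in Python of how the exascale goal compares with today's top machine. The variable names are illustrative; the figures are the ones cited above.

```python
# Performance units cited in the article.
PETAFLOP = 10**15  # one quadrillion floating point operations per second
EXAFLOP = 10**18   # one billion billion operations per second

# Sunway TaihuLight's benchmarked performance, per the November Top 500 list.
taihulight_flops = 93 * PETAFLOP

# How many TaihuLights would it take to reach exascale?
ratio = EXAFLOP / taihulight_flops
print(f"One exaflop is roughly {ratio:.1f}x the Sunway TaihuLight")
```

In other words, an exascale system would need to be roughly an order of magnitude faster than the current Top 500 leader.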
However, there's a problem with all this compute power: Who can really use it?
Supercomputers and other HPC systems are usually designed for and sold to large government agencies or research universities with the cash to pay for such massive machines, as well as their ongoing maintenance costs. What good is a large-scale research project if you don't have the money to run your model on one of these?
Then there's the whole issue of infrastructure. Who wants to support such a large system when more and more everyday computing is being moved to the cloud?
This is where supercomputing-as-a-service -- the other SaaS -- is starting to gain some traction. Why buy an HPC system or supercomputer when it's easier to let someone else worry about the hardware while your staff runs models and other calculations as needed?
It's an idea that's appealing to Cray Inc. (Nasdaq: CRAY), one of the world's largest traditional developers of supercomputing systems. The company recently teamed with Microsoft Corp. (Nasdaq: MSFT) Azure to offer supercomputing power through the cloud. This new type of cloud-based HPC offering is expected to advance research in several different areas, including artificial intelligence, autonomous vehicles and medical imaging.
"Dedicated Cray supercomputers in Azure not only give customers all of the breadth of features and services from the leader in enterprise cloud, but also the advantages of running a wide array of workloads on a true supercomputer, the ability to scale applications to unprecedented levels, and the performance and capabilities previously only found in the largest on-premise supercomputing centers," Cray CEO Peter Ungaro noted in a blog post announcing the partnership.
Now, running a CRM application in the cloud and programming a complex model to calculate the effects of climate change with thousands or even millions of data points are two very different tasks.
However, it's the way the world is moving -- to the cloud -- so why would supercomputing be any different?
A November 6 report from analyst firm IDC found that cloud spending in the first half of 2017 totaled more than $63 billion. Of that number, more was spent on software-as-a-service -- about $40 billion -- than platform-as-a-service (PaaS) and infrastructure-as-a-service (IaaS) combined -- about $20 billion.
The point is that most of the software an enterprise, a government agency or a research university uses is moving to the cloud, and the world is growing more and more comfortable with that shift. It seems logical that supercomputing should move the same way, and that others will likely join Cray and Microsoft in these types of ventures.
While supercomputing-as-a-service might not hold the same excitement as building one of the Top 10 machines on the Top 500 list, it does mean the technology becomes far more democratic and accessible to many more businesses and researchers. That should count for a lot, too.