What is Supercomputing?

HPC Basics Series

The terms "supercomputing" and "high-performance computing" (HPC) are often used interchangeably in the computing community, so we'll stick with that convention here and use "supercomputing" and "supercomputer" throughout this series.

When you think of computing, chances are you first think of your desktop, laptop, or mobile device. These devices handle tasks that require a fairly small amount of computing: browsing web pages, reading email, writing documents in a word processor, running a spreadsheet, and the like. For the experience to be satisfactory, the chipset needs to be fast enough to run the application without lag, and the memory needs to be large enough to avoid swapping between RAM and disk. For home or small business users, this is adequate and available at a variety of price points. For the commodity cloud business user, it is adequate for running payroll, a website, or a corporate database.

For engineers, scientists, animators, data scientists, analysts, and anyone doing exploratory work that requires huge data sets or massive compute power, the commodity compute solutions provided by public clouds are inadequate, primarily in terms of speed. A supercomputing solution adds specialized networking hardware and connectivity between compute nodes, along with hardware accelerators for specific types of calculations, particularly linear algebra and matrix operations.
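
To make the accelerator point concrete, here is a minimal sketch of the kind of matrix calculation these accelerators excel at. It assumes a node with an NVIDIA GPU and the CuPy library installed (an assumption on our part; the post doesn't name specific hardware or software); without a GPU, the NumPy portion still runs on the CPU.

```python
# Minimal sketch: the same matrix multiply on CPU and, if available, GPU.
# Assumes an NVIDIA GPU with the CuPy library installed (hypothetical setup).
import numpy as np

n = 4096
a = np.random.rand(n, n)
b = np.random.rand(n, n)

# CPU path: NumPy dispatches to an optimized BLAS library.
c_cpu = a @ b

try:
    import cupy as cp  # GPU array library with a NumPy-like API

    # Copy the operands into GPU memory and multiply there.
    a_gpu = cp.asarray(a)
    b_gpu = cp.asarray(b)
    c_gpu = a_gpu @ b_gpu

    # Copy the result back to host memory and check it matches.
    print("results match:", np.allclose(c_cpu, cp.asnumpy(c_gpu)))
except ImportError:
    print("CuPy not installed; CPU result only.")
```

On a GPU-equipped node the multiply is spread across thousands of parallel cores, and the dominant cost is often moving the data in and out, which is one reason supercomputers pair accelerators with very fast interconnects.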

As you can see, supercomputers are special beasts, tuned and designed from the ground up to handle oceans of data very quickly. They sit at the complete opposite end of the spectrum from the commodity machines found in typical public clouds.

Supercomputers are most often used to solve problems in scientific computing and engineering, and the problems they target usually require billions or trillions of calculations. Typical uses include fine-grained mechanical engineering simulations, fluid dynamics calculations such as simulating airflow over an airplane, and extracting data from seismic studies to locate oil. These aren't the only use cases: mining big data has also found a home on supercomputers, allowing data scientists to make sense of the huge oceans of data collected when we shop online.
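
To give a feel for "billions or trillions": multiplying two n-by-n matrices, a building block of many such simulations, takes roughly 2n^3 arithmetic operations, so the counts climb into the trillions quickly. (This is a standard back-of-the-envelope figure, not a number from the original post.)

```python
# Back-of-the-envelope operation count for an n x n matrix multiply:
# roughly 2 * n**3 floating-point operations (n**3 multiplies plus adds).
for n in (1_000, 10_000, 100_000):
    flops = 2 * n**3
    print(f"n = {n:>7,}: ~{flops:.1e} operations")

# n =   1,000: ~2.0e+09 operations  (billions)
# n =  10,000: ~2.0e+12 operations  (trillions)
# n = 100,000: ~2.0e+15 operations  (quadrillions)
```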

Modern supercomputers are groups of interconnected computing nodes that can communicate with one another, highly optimized with hardware accelerators, such as GPUs and FPGAs, and with very high-speed networks that deliver data to those nodes. They are the automotive equivalent of race cars: purpose-built and highly tuned to solve a particular set of problems. And unlike the typical data center server running Microsoft Windows, a supercomputer will usually run an optimized operating system such as Linux.
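
As an illustration of nodes communicating over that network, here is a minimal sketch using MPI (the Message Passing Interface, the standard communication layer in HPC) through the mpi4py Python bindings; the specific library choice is our assumption, not something the post specifies. Each process sums its own slice of a range, and the partial sums are combined on one node.

```python
# Minimal sketch: partial sums on many processes, combined with MPI.
# Launch with, for example: mpirun -np 4 python sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID
size = comm.Get_size()   # total number of processes

# Each process sums its own strided slice of 0..999.
n = 1000
local_sum = sum(range(rank, n, size))

# reduce() combines the partial sums on rank 0, sending data
# across the interconnect when processes live on different nodes.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)

if rank == 0:
    print(f"total = {total}")  # 499500 for 0 + 1 + ... + 999
```

On a real supercomputer the same pattern scales to thousands of nodes, and the very high-speed network described above is what keeps that communication step from becoming the bottleneck.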

Supercomputers are what enable Nimbix cloud users to handle and compute on volumes of data at speeds hundreds to millions of times faster than a typical data center server, ultimately allowing scientists and engineers to gain insights and make discoveries that would otherwise be impossible.

Tune in every Tuesday through April 10th for a new HPC Basics blog post.