Once upon a time, if you needed to do any serious computation, you had to get in line. Supercomputing systems were monolithic, specialized, extremely expensive, and generally unavailable. Since then, the emergence of large-scale clustering, combined with ever-improving commodity platforms, has ramped up capacity and availability to previously unimagined levels.

Not surprisingly, demand has exploded, far outpacing supply. What was once purely the domain of rocket science (figuratively and literally) now spans everything from healthcare and wellness to social media and entertainment. It seems every third word one hears in passing is “analytics,” which typically implies massive amounts of online data. Producing any meaningful results from that data in a reasonable time obviously requires High Performance Computing (HPC). The thirst for mass-scale computation has grown exponentially relative to the availability of platforms to provide it. Something has to give.

A new megatrend is joining these forces to create the “perfect storm” that is already changing the game for HPC: the Cloud. As supercomputing applications increasingly deploy on commodity hardware and operating systems, such as Linux clusters, migrating to cloud delivery models such as Infrastructure as a Service (IaaS) can dramatically increase scale. But there’s a catch: the fact that you can use commodity platforms doesn’t mean they deliver the best economics. In fact, we’re learning that as workloads scale, the compute time and cost of cloud HPC quickly skyrocket to unacceptable levels in typical pay-per-use IaaS models. We have seen this story before in other areas, such as desktop virtualization in IT (e.g., VDI), where the cost of running end-user applications on platforms designed for transactional workloads shocked the market into finding other solutions. Applying scale to a problem magnifies the platform’s shortcomings, and when time and money are involved, that’s a nonstarter. Given the advances in science and society that HPC is driving, the world cannot afford for it to slow down.

Enter the real game changer: the purpose-built, accelerated cloud. Now we have a platform that’s designed from the ground up for cloud HPC applications rather than generic infrastructure repurposed from spare capacity. The result? Order-of-magnitude savings: the proverbial “faster, better, cheaper.” Instead of stringing generic “machines” together into clusters, you simply submit batch jobs to crunch your data and let the platform process them automatically with the most efficient configuration. Since no one is running, say, a streaming video web server on the same hardware at the same time, your cloud HPC application doesn’t compete for cycles to get the job done. With Nimbix NACC, we provide this in a pay-per-use model that can burst as fast as you can consume it, whether the application leverages CPUs, GPUs, FPGAs, or any combination of the above.
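To make the batch model concrete, here is a minimal sketch of what “submit a job and let the platform handle the rest” can look like in practice. This is not the actual NACC API; the endpoint, job fields, application name, and token below are hypothetical placeholders standing in for any pay-per-use HPC cloud that accepts batch submissions over REST.

```python
import time
import requests

# Hypothetical endpoint and credentials -- placeholders, not a real API.
API_BASE = "https://hpc-cloud.example.com/api"
API_TOKEN = "YOUR_API_TOKEN"

# Describe the batch job: the application to run, its input data, and the
# machine class (CPU, GPU, or FPGA). The platform maps it to actual hardware.
job_spec = {
    "application": "fluid-dynamics-solver",      # hypothetical app name
    "input": "s3://my-bucket/simulation-input.dat",
    "machine_type": "gpu",                       # or "cpu", "fpga"
    "nodes": 8,
}

headers = {"Authorization": f"Bearer {API_TOKEN}"}

# Submit the job; in a pay-per-use model, billing accrues only while it runs.
resp = requests.post(f"{API_BASE}/jobs", json=job_spec, headers=headers)
resp.raise_for_status()
job_id = resp.json()["job_id"]
print(f"Submitted job {job_id}")

# Poll until the platform reports the job has finished.
while True:
    status = requests.get(f"{API_BASE}/jobs/{job_id}", headers=headers).json()
    if status["state"] in ("COMPLETED", "FAILED"):
        print(f"Job finished with state {status['state']}")
        break
    time.sleep(30)
```

The point of the sketch is the contract, not the syntax: you describe what to run, and the scheduler, rather than the user, decides how to map the work onto CPUs, GPUs, or FPGAs.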

The union of cloud computing, widely available accelerated hardware, and commodity platforms has opened up supercomputing to a myriad of domains. Now, the purpose-built HPC cloud is bringing it to the masses. We can’t wait to see the world around us evolve as a result.