Big Compute and its Role in Big Data

Written by: Leo Reiter on October 07, 2014

Big Data without Big Compute is just a large collection of unstructured information with no purpose and no real value. “Data” is the noun while “Compute” is the verb.  It’s not enough for the data to exist, we must derive value from it through computation – something commonly referred to as Analytics.

The Quantum Nature of Big Data

With traditional data, we simply query it to derive results.  All that we need is already stored in the data set itself.  For example, if we have a customer database with dates of birth, we simply fetch the list of customers who were born after a certain date.  This is a simple query, not a computation, and therefore not an analytic.
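
To make the distinction concrete, here is a minimal sketch of such a query (the customer records and field names are invented for illustration).  Everything needed to answer the question is already sitting in the data set; nothing has to be computed.

```python
from datetime import date

# A hypothetical in-memory "customer database" (illustration only).
customers = [
    {"name": "Alice", "born": date(1975, 3, 14)},
    {"name": "Bob",   "born": date(1992, 7, 2)},
    {"name": "Carol", "born": date(1988, 11, 23)},
]

# A simple query: just fetch the records that match a condition.
cutoff = date(1980, 1, 1)
born_after = [c["name"] for c in customers if c["born"] > cutoff]
print(born_after)  # ['Bob', 'Carol']
```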

With Big Data, we’ve learned to distribute the information so that we can run complex analytics on it at scale.  The information itself is meaningless until computation occurs.  The reason we distribute the data sets is not that they are large, but that we want to leverage clusters of computers to run more than just simple queries.  In the Big Data model, the data itself doesn’t hold the answer – we have to compute it.  Think of this as a virtual “Schrödinger’s Cat”: the data can mean anything until we actually look “in the box” – except we’re not asking the simple question “is it dead or alive?”, but more complex ones, such as, assuming it’s alive, what its future behavior might be.  Analytics, especially predictive ones, rely on patterns and their associations.  Because the data sets tend to change (grow) over time, the results of these complex computations will vary as well.
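
As a rough sketch of what “computing the answer” looks like, the toy example below counts which items tend to be purchased together across several partitions of data, using Python’s multiprocessing pool as a stand-in for a cluster.  The baskets, the partitioning, and the pair-counting “analytic” are all invented for illustration; real Big Data platforms distribute this kind of work with frameworks built for the purpose.

```python
from collections import Counter
from itertools import combinations
from multiprocessing import Pool

# Hypothetical purchase histories, split into partitions as a stand-in
# for data distributed across a cluster (illustration only).
partitions = [
    [["milk", "bread"], ["milk", "eggs", "bread"]],
    [["eggs", "bread"], ["milk", "bread", "butter"]],
]

def count_pairs(baskets):
    """Map step: count item pairs that appear together within one partition."""
    counts = Counter()
    for basket in baskets:
        counts.update(combinations(sorted(basket), 2))
    return counts

if __name__ == "__main__":
    with Pool() as pool:
        partials = pool.map(count_pairs, partitions)  # compute near the data
    totals = sum(partials, Counter())                 # reduce step: combine results
    print(totals.most_common(3))
```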

It’s perhaps a major understatement to associate the term Big Data with the data alone, as Big Data cannot really exist without Big Compute to make sense of it all.

Why Big Compute?

Big Compute implies one of two things.  It can be ordinary computing scaled across a massively parallel cluster, or it can be High Performance Computing (HPC).  The problem with the former is that it can only scale so far.  And for it to really succeed, the data itself must be scaled just as wide.  This brings practical challenges in systems management complexity and infrastructure constraints, such as networking and power.

High Performance Computing is a much more natural “Big Compute” because, while it scales well horizontally, it also packs a powerful per-unit punch.  This means that we can realize higher computation density with far fewer “moving parts.”  A good example is using GPUs for vector calculations.  Sure, you can do this with CPU cores alone, but you’ve only got around 8-16 in each typical server node.  Each GPU can have hundreds or even thousands of cores.  If you vectorize your calculations to take advantage of this, you can do far more work with far less power and management complexity (at scale) than if you had to spread it across dozens or even hundreds of CPUs (and the servers they live in).
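
To illustrate the vectorization point, here is a small sketch that compares a scalar Python loop with the same dot product expressed as a single array operation.  It uses NumPy on the CPU purely as a stand-in; the array sizes and timing approach are arbitrary, and a GPU array library would apply the same pattern across far more cores.

```python
import time
import numpy as np

n = 5_000_000
a = np.random.rand(n)
b = np.random.rand(n)

# Scalar loop: one multiply-add at a time.
start = time.perf_counter()
total = 0.0
for i in range(n):
    total += a[i] * b[i]
loop_time = time.perf_counter() - start

# Vectorized: the whole dot product as one array operation, which the
# hardware can spread across many lanes/cores at once.
start = time.perf_counter()
total_vec = float(a @ b)
vec_time = time.perf_counter() - start

print(f"loop: {loop_time:.2f}s  vectorized: {vec_time:.4f}s")
```

The wider the hardware you can point such an operation at (hundreds or thousands of GPU cores instead of a handful of CPU cores), the denser the computation per node, which is exactly the appeal of HPC as Big Compute.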

So why does Big Compute really matter?  Can’t simple algorithms on commodity compute already do predictive analytics?

The answer is of course yes, but there are two problems – one immediate and one future.

The immediate problem is that in many cases, the speed at which you get results matters just as much as the results themselves.  For example, if you are using analytics to improve e-commerce, the best time to do this is while the customer is engaged in a transaction.  Sure, there’s still value in following up with the customer later, after you’ve crunched the data, but why not take advantage of the moment when they have a credit card in hand to show them what their friends are buying?

When you combine this with the fact that there may be thousands of concurrent transactions at any given time, oversubscribing commodity compute to perform predictive analytics won’t get you the results you need in time to maximize the value.

This is where Big Compute comes in – to perform the same operations thousands of times faster.  In many cases, the value of the data is sensitive to the amount of time needed to compute it.  There are many examples of this; e-commerce is just one of the more popular.  In other cases, the data set itself is changing (generally growing) rapidly.  If analytics take too long, the results may already be obsolete or irrelevant once delivered.  Big Compute powered by HPC is simply the fastest, most efficient way to derive value from data at scale – at the moment that you need it.

Which brings us to the future problem with commodity compute…

Innovation in Algorithms

How do you derive future value from the same data you have (or are collecting) today?  If we look at Big Data as a two-part problem, one part being storing the data and the other computing analytics on it, then we quickly realize where the greatest potential for innovation is.  We know it’s not in storage because, although challenging, densities have increased dramatically since the dawn of computing.  As a (crude) point of reference, a consumer can buy a 3 terabyte hard disk today for less than the cost of a 200 gigabyte one just 10 years ago.  Higher storage densities mean less infrastructure to manage information, which makes storage more practical over time as well (not just cheaper).  So we can rest easy knowing that, all things being equal, as the data sets grow, so will the storage to hold it all, in a relatively cost-effective way.

Obviously the most room for innovation is in the analytics algorithms themselves.  We will see both the speed and the quality of these computations increase dramatically over time.  Commodity compute is a non-starter for algorithms that are too complex to run quickly on it, but thanks to Big Compute, there’s no need to compromise.

Just imagine the opportunities we’d miss if we avoided problems that are seemingly too hard to solve.  Big Compute makes it possible to run the most complex algorithms quickly, and the sky’s the limit when it comes to the types of analytics we’ll see as a result.  Big Compute will help Big Data evolve to not just be “bigger”, but to be far more meaningful than we can ever imagine.