Deriving value through computation

By Leo Reiter | November 14, 2014


What’s Big Data without Big Compute? Basically just a large collection of unstructured information with little purpose or value. It’s not enough for data just to exist; we must derive value from it through computation – something commonly referred to as analytics.

The Quantum Nature of Big Data
With traditional data, we simply query it to derive results; everything we need is already stored within the data set itself. For instance, given a customer database with dates of birth, we might just fetch the list of customers who were born after a certain date. This is a simple query, not a computation, and therefore cannot be considered analytics.
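
To make the distinction concrete, here is a minimal sketch in Python (the customer records and cutoff date are hypothetical). A lookup like this only retrieves what is already stored – nothing new is derived.

```python
from datetime import date

# Hypothetical customer records -- the answer already lives in the stored fields.
customers = [
    {"name": "Alice", "date_of_birth": date(1985, 3, 12)},
    {"name": "Bob",   "date_of_birth": date(1972, 7, 4)},
    {"name": "Carol", "date_of_birth": date(1991, 11, 23)},
]

# A simple query: filter on a stored field.  No new information is computed.
cutoff = date(1980, 1, 1)
born_after_cutoff = [c["name"] for c in customers if c["date_of_birth"] > cutoff]
print(born_after_cutoff)  # ['Alice', 'Carol']
```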

However, with Big Data, we distribute the information so that we can run complex analytics on it at scale. Unlike traditional data, the information itself has little meaning until we process it. The reason we distribute the data sets is not because they are large, but because we want to leverage clusters of computers to run more than just simple queries. That is why in the Big Data model, the data itself doesn’t hold the answer – to unleash it we must compute it. Think of this as a virtual “Schrödinger’s Cat”… it can mean anything until we actually look “in the box.” The difference is that we’re not asking a simple question such as, “is it dead or alive,” but rather posing more complex inquiries such as, “assuming it’s alive, what might its future behavior be?” Analytics, especially predictive analytics, rely on patterns and their associations. Because the data sets tend to change (or grow) over time, the results of these complex computations will vary as well.
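
To illustrate why the data gets distributed for the sake of compute, here is a minimal map/reduce-style sketch in Python, using the multiprocessing module as a stand-in for a real cluster. The clickstream data and the “pattern” being counted (consecutive event pairs) are hypothetical, not taken from any particular system.

```python
from collections import Counter
from multiprocessing import Pool

# Hypothetical clickstream, pre-split into partitions as it would be across cluster nodes.
partitions = [
    ["view", "add_to_cart", "view", "purchase"],
    ["view", "view", "add_to_cart"],
    ["purchase", "view", "add_to_cart", "purchase"],
]

def count_transitions(events):
    # Map step: count consecutive event pairs (a simple "pattern") within one partition.
    return Counter(zip(events, events[1:]))

if __name__ == "__main__":
    with Pool() as pool:
        partial_counts = pool.map(count_transitions, partitions)

    # Reduce step: merge the per-partition counts into one result --
    # the "answer" only exists once the computation has run.
    totals = sum(partial_counts, Counter())
    print(totals.most_common(3))
```

The same shape – map a computation over partitions, then reduce the partial results – is what cluster frameworks apply at far larger scale.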

Knowing this, it’s perhaps misleading to associate the term “Big Data” with the data alone, since the data cannot really deliver value without Big Compute to make sense of all the information.

What’s So Special About Big Compute?
Big Compute implies one of two things:

  1. Ordinary computing scaled across a massive parallel cluster
  2. High-Performance Computing (HPC)

The problem with the former is that it can only scale so far before performance drops off. Furthermore, for it to really succeed, the data itself must be scaled just as wide. This also brings with it the practical challenges of systems management and complexity, plus infrastructure constraints such as networking and power.

High-Performance Computing is a more natural form of “Big Compute” because it scales and packs a powerful per-unit punch. What does this mean? Simply that we can realize higher computation density with far fewer “moving parts.” A good example is using Graphics Processing Units (GPUs) for vector calculations. Sure, you can do this with Central Processing Unit (CPU) cores alone, but you’ve only got around 8-16 in each typical server node. Each GPU can have hundreds or even thousands of cores. If you vectorize your calculations to take advantage of this, you can do far more work with far less power and management complexity (at scale) than if you had to spread it across dozens or even hundreds of CPUs (and the servers they live in).
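
As a rough sketch of what “vectorizing your calculations” means, the Python example below (with a made-up scoring workload) contrasts an element-by-element loop with a single array expression. The vectorized form is exactly the shape of work that data-parallel hardware – wide CPU SIMD units or the thousands of cores in a GPU – can spread out efficiently, and GPU array libraries generally accept the same kind of expression with little change to the code.

```python
import numpy as np

# Hypothetical workload: score one million 16-feature vectors against a weight vector.
rng = np.random.default_rng(0)
features = rng.random((1_000_000, 16))
weights = rng.random(16)

def score_loop(rows, w):
    # Scalar approach: one multiply-add at a time, leaving parallel hardware idle.
    scores = np.empty(len(rows))
    for i, row in enumerate(rows):
        scores[i] = sum(f * wi for f, wi in zip(row, w))
    return scores

def score_vectorized(rows, w):
    # Vectorized approach: a single matrix-vector product the hardware can parallelize.
    return rows @ w

# Both produce the same answer; only the amount of exploitable parallelism differs.
assert np.allclose(score_loop(features[:1000], weights),
                   score_vectorized(features[:1000], weights))
```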

So this raises the question: why does Big Compute really matter? Can’t simple algorithms on commodity compute already do predictive analytics?

The answer is of course yes, but there are two problems – one immediate and one future.

The immediate problem is that in many cases, the speed at which you get results matters just as much as the results themselves. For example, if you plan to use analytics to improve e-commerce, the best time to do so is while the customer is engaged in a transaction. Sure, there’s still value in following up with the customer later, after you’ve crunched the data, but why not take advantage of the moment, while he or she has credit card in hand, to present insights that may increase spend?

When you combine this with the fact that there may be thousands of concurrent transactions at any given time, over-subscribing commodity compute to perform complex analytics won’t get you the results you need in time to maximize the value of Big Data.

This is where Big Compute can perform the same operations thousands of times faster.  In many cases, the value of the data is sensitive to the amount of time needed to compute it.  There are many examples of this – e-commerce is just a popular one.  In other cases, the data set itself is changing (generally growing) rapidly.  If analytics take too long, the results may already be obsolete or irrelevant once delivered.

Simply put, Big Compute powered by HPC is the fastest, most efficient way to derive value from data at scale – at the exact moment needed.

This brings us to the future problem with commodity compute.

Innovation in Algorithms
How do you derive future value from the same data you have (or are collecting) today? If we look at Big Data as a two-part problem – storing the data and analyzing the data – we quickly realize where the greatest potential for innovation is. It’s not in storage because, although challenging, we’ve seen densities increase dramatically since the dawn of computing. As a (crude) point of reference, a consumer could buy a three-terabyte hard disk in 2014 for less than the cost of a 200-gigabyte one just 10 years prior. Higher storage densities mean less infrastructure to manage, and thus make storing large data sets more practical over time as well (not just cheaper). So we can rest easy knowing that, all things being equal, as the data sets grow, so will the storage to hold them in a relatively cost-effective way.

Obviously the most room for innovation is in the analytics algorithms themselves. We will see both the speed and the quality of the computations increase dramatically over time. Commodity compute is a non-starter for algorithms that are too complex to run quickly against large data sets, but thanks to Big Compute, there’s no need to compromise.

Just imagine the opportunities we’d miss if we avoided problems that are seemingly too hard to solve.  Big Compute makes it possible to run the most complex algorithms quickly, and the sky’s the limit when it comes to the types of analytics we’ll see as a result.

Big Compute will help Big Data evolve to not just be “bigger,” but to be far more meaningful than we can ever imagine.