“What’s Old Is New Again” is a song from the 1979 film All That Jazz. Now, why on earth would I be referencing a movie from the time when the Bee Gees were topping the charts, disco was relevant, KISS wore makeup, and, if you lived in the New York City area, Studio 54 was the place to be with Andy Warhol, Debbie Harry, and Grace Jones? It was at this moment in time that the programmable logic device, the PLD, came into existence.
Ten years later, when your jacket had big shoulder pads, Miami Vice was on television, MTV played music videos, and you had at least one friend with Flock of Seagulls hair, Xilinx, Altera, and others brought us the field-programmable gate array, the FPGA. The FPGA was first thought of as a cost-saving alternative to application-specific integrated circuits, ASICs. The business case was that it was cheaper to take a generic FPGA and program it for your specific task than to manufacture a chip dedicated to that task. This was all well and good, especially for specialized or low-volume niche applications. It even made economic sense when you considered that, instead of putting a physical co-processor on your board, you could simply embed the co-processor’s gate logic in the FPGA. Now you had everything in one place, with far less risk and expense than masking out ASICs. Early adopters were telecom switch manufacturers, military programs, and other specialty makers. At the time, it was a niche product that addressed a particular market segment.
Fast forward thirty-odd years: MTV no longer plays music videos, Sonny Crockett’s white Testarossa is cool because it’s now insurable as a classic car, much like your grandfather’s ’57 Chevy or Little Deuce Coupe, and you hear Van Halen (with Sammy Hagar!) on the classic rock/oldies channel. It is true, times have changed! FPGAs are now cool, really cool, and not in a retro-ironic way, like your old Atari 2600 or Jarts lawn darts, but in the sense that they are being identified as one of the go-to technologies for handling large amounts of data. Here’s why: CPU speed hasn’t kept up with data growth, and now the only way to process the huge amounts of data required for a variety of problems in a timely manner is hardware acceleration. Algorithms that do not lend themselves to GPU acceleration can be accelerated with FPGAs by putting the algorithm physically on the chip rather than in software. This allows algorithmic optimizations not possible in software to be carried out directly in hardware.
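As a loose illustration of what “putting the algorithm on the chip” means, consider a finite impulse response (FIR) filter, a classic FPGA workload (the filter and its coefficients here are illustrative examples of mine, not taken from the original). In software, each output sample requires a sequential loop over every tap; on an FPGA, each tap gets its own dedicated multiplier, so all the multiply-accumulates happen in parallel and one output can emerge per clock cycle regardless of tap count. A minimal Python sketch of the software version, for contrast:

```python
# Software FIR filter: O(taps) sequential work per output sample.
# On an FPGA, each tap would become a dedicated multiplier-adder,
# so the inner loop below collapses into parallel hardware and one
# output is produced per clock cycle regardless of tap count.
def fir_filter(samples, coeffs):
    """Convolve input samples with filter coefficients (direct form)."""
    outputs = []
    for n in range(len(samples)):
        acc = 0.0
        for k, c in enumerate(coeffs):   # sequential in software ...
            if n - k >= 0:               # ... parallel in hardware
                acc += c * samples[n - k]
        outputs.append(acc)
    return outputs
```

For example, filtering an impulse through a two-tap filter simply echoes the coefficients back: `fir_filter([1, 0, 0, 0], [0.5, 0.25])` yields `[0.5, 0.25, 0.0, 0.0]`.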
If we accept the notion that Moore’s Law is, if not dead, then on life support, because we are rapidly approaching the physical limits of miniaturization in chip manufacturing, and we accept that the amount of data that needs to be processed daily will continue to grow at an approximately exponential rate, then we have two choices: throw more processors at the problem (assuming it scales) or solve the problem in silico. To solve your problem in silico and actually have a durable solution, the hardware must be reconfigurable, so that the algorithms on the physical chip have an upgrade path. This problem, and others like it, fall under the field of reconfigurable computing. In “A Reconfigurable Design Framework for FPGA Adaptive Computing,” presented at the 2009 International Conference on Reconfigurable Computing and FPGAs, Ming Liu and collaborators examined the idea of hardware that could be reconfigured on the fly to meet processing demands. This area of scientific inquiry has continued to mature, with several books published on the topic.
If we take a step back and look at the idea of reconfigurable computing, we can see the parallel between reconfigurable computing and deep learning: in both cases, a system modifies itself in response to some form of input. Nimbix is built on this very notion. You could think of Nimbix, in a rather broad context, as a cloud-scale (macro) version of an FPGA. The Nimbix cloud was conceived, in total, as a reconfigurable piece of hardware that automatically adjusts its hardware and network configuration to the input it is given, drawing from a pool of heterogeneous machines.
The heterogeneous nature of the Nimbix cloud includes FPGAs, which users can select to flash with their own designs or to develop code on. FPGAs are not limited to standalone use, either; they can serve as accelerators, even in conjunction with GPU-accelerated applications. One such application is a convolutional neural network that is trained and then deployed to an FPGA to take advantage of the FPGA’s inherently low latency in processing data. We could see an architecture like this being used where the lag between training, deployment, and execution needs to be very short. Self-driving cars are one example, as is any AI application that must rapidly adapt and evolve its intelligence; we could even see this scenario applied to manufacturing conducted by AI-enabled robots.
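A common step in moving a trained network onto an FPGA is quantizing its floating-point weights into fixed-point integers, since fixed-point arithmetic maps far more cheaply onto FPGA logic than floating point does. The sketch below illustrates that general idea only; the function names and bit widths are my own assumptions, not Nimbix’s or any vendor’s actual toolflow:

```python
# Illustrative fixed-point quantization, a typical step when deploying
# a trained model to FPGA fabric (widths here are hypothetical).
def quantize(weights, frac_bits=8, word_bits=16):
    """Map each float to a signed integer with `frac_bits` fractional
    bits, saturating at the signed word's representable range."""
    scale = 1 << frac_bits
    lo = -(1 << (word_bits - 1))
    hi = (1 << (word_bits - 1)) - 1
    return [max(lo, min(hi, round(w * scale))) for w in weights]

def dequantize(quantized, frac_bits=8):
    """Recover approximate floats, e.g. to check quantization error."""
    return [q / (1 << frac_bits) for q in quantized]
```

With 8 fractional bits, `quantize([0.5, -1.25])` gives `[128, -320]`, and dequantizing those values recovers the originals exactly; weights that overflow the 16-bit word simply saturate.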
Taking another step back, making all of this heterogeneous hardware work together efficiently and effectively requires a tremendous amount of infrastructure. Most enterprises do not have the expertise or wherewithal to stand up a combined CPU/GPU/FPGA system simply for development, test, and exploratory scenarios; it is simply too resource intensive.
This is where Nimbix comes in. We already have state-of-the-art CPUs, GPUs, and FPGAs available for on-demand use, all running on a cutting-edge, high-performance backbone with extraordinarily fast storage able to keep the GPUs fed and allow the user to realize the full benefit of the FPGA’s reduced latency.
So, what’s old is new again: the idea of on-demand computing from a third-party vendor is not new, as time-sharing of computers was routine for the banks and oil companies of the 1960s. The change in the competitive landscape, along with the advent of cloud-enabling technology, has allowed us to come full circle and sell you just what you need rather than the whole infrastructure.
All of this is making me feel rather nostalgic. I think I’ll go find my pet rock, dust off my lava lamp, and break out the fondue pot for dinner.