What is Artificial Intelligence?


HPC Basics Series

Artificial intelligence (AI) is really nothing new: according to this Accenture publication, it is the ability of a machine to “sense, comprehend, and act.”  The real value of AI lies in wading through the ever-increasing volumes of data generated every day and automating responses to the signals in that data.  Consider one of the first commercial uses of fuzzy logic, a form of AI: the Zojirushi fuzzy-logic rice cooker.  You select your type of rice, add water, set it, and forget it.  The cooker’s sensors monitor temperature and humidity and adjust the heat and cook time accordingly, ensuring well-cooked rice.  What it is really doing is automating and fine-tuning the cooking of a food that people have prepared, time and again, for thousands of years.  In short, people already know how to cook rice; the Zojirushi product encodes a model of cooking rice properly and automates the rice-making process.
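To make that idea concrete, here is a minimal sketch of sensor-driven adjustment in Python.  It is not Zojirushi’s actual algorithm; the rice types, thresholds, and adjustment rules below are illustrative assumptions only.

```python
# A minimal sketch of the idea behind a fuzzy-logic cooker controller.
# NOT Zojirushi's actual algorithm: the rice types, thresholds, and
# adjustment rules are illustrative assumptions only.

def adjust_cooking(rice_type: str, temp_c: float, humidity_pct: float) -> dict:
    """Turn sensor readings into heat and cook-time adjustments."""
    # Baseline settings per rice type (hypothetical values).
    baselines = {
        "white": {"target_temp": 100.0, "cook_minutes": 13},
        "brown": {"target_temp": 100.0, "cook_minutes": 45},
        "sushi": {"target_temp": 100.0, "cook_minutes": 18},
    }
    base = baselines.get(rice_type, baselines["white"])

    # "Fuzzy" membership: how far are current conditions from the ideal?
    too_cool = max(0.0, base["target_temp"] - temp_c) / base["target_temp"]
    too_dry = max(0.0, 60.0 - humidity_pct) / 60.0  # 60% is an assumed ideal

    # Blend the rules: cooler or drier than expected -> more heat, more time.
    heat_boost = round(0.5 * too_cool + 0.3 * too_dry, 2)
    extra_minutes = round(base["cook_minutes"] * (0.4 * too_cool + 0.2 * too_dry))

    return {"heat_boost": heat_boost, "extra_minutes": extra_minutes}


print(adjust_cooking("brown", temp_c=92.0, humidity_pct=40.0))
```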

Learning is the real power and downfall of AI

In most cases, AI is trained on data sets that have been assembled to represent a truth: a calculable entrance requirement for a labeled set or category.  The entrance requirement can be a set of metadata that is weighted to produce a score, and that score determines whether an item enters a particular category.  We see this process go humorously wrong with toddlers when they are learning to speak.  For example, little Freddy is 10 months old and he calls the family dog, Rover, ‘doggie’.  Rover has four legs, a tail, and fur.  On a day out with the family, Freddy sees a horse for the first time, points to it, and says, “doggie”.  Freddy just produced a false positive: he had never seen a horse before and defaulted to the only label he knew for things with four legs, a tail, and fur.  In short, much like toddlers, an AI is only as accurate as the training (the experiences) it has been given.  It is in training the models behind an AI that the vast majority of compute power is consumed.  Luckily, these types of computations can be accelerated by specialized hardware.
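Here is a toy illustration of Freddy’s mistake, assuming scikit-learn is available.  The animal features and labels are made up; the point is simply that a classifier can only answer with labels that exist in its training data.

```python
# A toy illustration of the "doggie vs. horse" problem: a classifier can only
# choose among the labels it was trained on. Feature values are made up.
from sklearn.neighbors import KNeighborsClassifier

# Training set: [legs, has_tail, has_fur, height_cm, weight_kg]
X_train = [
    [4, 1, 1, 55, 30],   # dog
    [4, 1, 1, 60, 35],   # dog
    [4, 1, 1, 25, 4],    # cat
    [4, 1, 1, 23, 5],    # cat
]
y_train = ["doggie", "doggie", "kitty", "kitty"]

model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# A horse: four legs, a tail, and fur -- but no "horse" label was ever trained.
horse = [[4, 1, 1, 160, 500]]
print(model.predict(horse))   # -> ['doggie']  (a confident false positive)
```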

If we accept that artificial intelligence is topology bound (this is the notion of narrow artificial intelligence), then as we get closer to the truth (whatever that is), every set of data can be categorized against a number of different classification topologies; we can call these different topologies “facets.”  This is where supercomputers shine.  Instead of attempting to classify against a single entrance requirement, multiple AIs can be trained against multiple entrance requirements that represent different potential semantic realities.  For example, if the requirement is to classify types of ‘blues’, one logical set might name colors (navy blue, sky blue, baby blue, …), a second might name musical genres (Delta blues, Chicago blues, Texas electric blues, …), and a third might cover Major League Baseball teams and players (Toronto Blue Jays, Vida Blue, …).  The result is that, once the facet space has been identified and defined, a more whole or complete AI solution can be generated.  If something as simple as ‘blue’ requires at least three fully trained classifiers, more complex search spaces will require many more.  This expansive requirement means one thing: more compute time for training.  This is why supercomputers are a natural fit; faster compute means more facets can be trained per unit time, and more facets trained means more robust and complete AI coverage.  With better semantic coverage, you are less likely to have your AI pointing at a “horse” and calling it “doggie.”
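A minimal sketch of the facet idea, assuming scikit-learn: one independent classifier is trained per facet, and because the facets are independent they can be trained in parallel, which is exactly where more compute helps.  The facet names, phrases, and labels below are illustrative assumptions, not a real data set.

```python
# One independent classifier per "facet" (classification topology), trained in
# parallel. Facet names, phrases, and labels are illustrative assumptions.
from concurrent.futures import ProcessPoolExecutor
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

FACETS = {
    "colors": (
        ["navy blue", "sky blue", "baby blue", "crimson red", "forest green"],
        ["blue", "blue", "blue", "red", "green"],
    ),
    "music": (
        ["delta blues", "chicago blues", "texas electric blues", "bebop jazz"],
        ["blues", "blues", "blues", "jazz"],
    ),
    "baseball": (
        ["toronto blue jays", "vida blue", "boston red sox"],
        ["blue", "blue", "red"],
    ),
}

def train_facet(name):
    """Train one facet's classifier; each facet is an independent job."""
    texts, labels = FACETS[name]
    model = make_pipeline(CountVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    return name, model

if __name__ == "__main__":
    # Independent facets parallelize trivially: more compute means more
    # facets trained per unit time.
    with ProcessPoolExecutor() as pool:
        trained = dict(pool.map(train_facet, FACETS))
    print(sorted(trained))
```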

Nimbix provides fast compute for AI applications

In all cases of machine learning, model training operates on data in the form of a matrix.  This allows Nimbix to apply vector-based acceleration, using one or several GPUs to speed up these calculations.  A high-speed interconnection fabric and very high-throughput storage keep those GPUs constantly fed with the data they are processing, and the same fabric lets GPUs communicate with one another without having to go back through the CPU.  Specialized hardware, high-speed interconnectivity, and high-throughput storage are the elements of the Nimbix cloud that make it eye-wateringly fast and an excellent choice for training the models behind AIs.
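As a rough illustration of why this matters, the sketch below times the core operation behind most training steps, a large matrix multiply, assuming PyTorch is installed.  It runs on a GPU when one is available and falls back to the CPU otherwise; the sizes and iteration count are arbitrary.

```python
# A minimal sketch of why GPUs help: model training reduces to large matrix
# math, which vector hardware executes far faster than a CPU. Assumes PyTorch;
# falls back to CPU if no GPU is present. Sizes are arbitrary.
import time
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A batch of training data and a weight matrix, as dense matrices.
x = torch.randn(4096, 2048, device=device)
w = torch.randn(2048, 2048, device=device)

start = time.perf_counter()
for _ in range(10):
    y = x @ w                  # the core operation behind most training steps
if device.type == "cuda":
    torch.cuda.synchronize()   # wait for the GPU to finish before timing
elapsed = time.perf_counter() - start

print(f"10 matrix multiplies on {device}: {elapsed:.3f} s")
```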