The AI Revolution is in full swing, and as the premier cloud provider for accelerated computing, we are lucky to see much of that activity first-hand. 2016 has been an incredible year for machine learning: the major frameworks continued to mature, NVIDIA released its next generation of GPUs, designed specifically to accelerate deep learning pipelines, and made them publicly available in the cloud, and the quantity of data continued to explode far beyond even today's incredible computing capabilities. Machine learning and deep learning were also among the hottest topics at Supercomputing 2016.
The three biggest AI trends that we have seen are:
1. Maturing Frameworks

The top frameworks are maturing rapidly. Caffe, Torch, and TensorFlow are the three most popular frameworks we have seen in both academia and industry. Our users have built everything from simple on-demand web services using the JARVICE API to entire businesses based around these powerful machine learning frameworks. TensorFlow's architecture departs from the traditional deep learning mold and provides powerful mechanisms for different types of training and inference architectures. All three frameworks are on the verge of supporting distributed GPU computation out of the box, and this should drive some powerful innovation in 2017 by leveraging technologies such as NVLink and InfiniBand RDMA.
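The core idea behind the distributed GPU training these frameworks are converging on is synchronous data parallelism: each worker computes gradients on its shard of a batch, the gradients are averaged across workers (an "all-reduce", the step NVLink and InfiniBand RDMA accelerate), and every worker applies the same update. A minimal pure-Python sketch of that pattern, using a toy one-parameter model rather than any framework's actual API:

```python
# Conceptual sketch of synchronous data-parallel training.
# Each "worker" computes a gradient on its own data shard; the
# gradients are then averaged (an all-reduce) before the shared
# weight update. Real frameworks run this on GPUs, communicating
# over NVLink within a node and InfiniBand RDMA across nodes.

def local_gradient(weights, shard):
    # Toy gradient for a 1-D least-squares model y = w * x:
    # d/dw of mean((w*x - y)^2) over this worker's shard.
    w = weights[0]
    n = len(shard)
    return [sum(2 * (w * x - y) * x for x, y in shard) / n]

def all_reduce_mean(grads_per_worker):
    # Average the gradient vectors contributed by each worker.
    n_workers = len(grads_per_worker)
    return [sum(g[i] for g in grads_per_worker) / n_workers
            for i in range(len(grads_per_worker[0]))]

def train_step(weights, shards, lr=0.1):
    grads = [local_gradient(weights, shard) for shard in shards]
    avg = all_reduce_mean(grads)
    return [w - lr * g for w, g in zip(weights, avg)]

# Two "workers", each holding a shard of data drawn from y = 3x.
shards = [[(1.0, 3.0), (2.0, 6.0)], [(3.0, 9.0), (4.0, 12.0)]]
weights = [0.0]
for _ in range(200):
    weights = train_step(weights, shards)
# weights[0] converges toward the true slope, 3.0
```

The gradient averaging is the expensive step at scale, which is why fast interconnects matter so much for multi-GPU training.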
2. GPUs and Accelerators

GPUs and accelerators such as FPGAs are ushering in massively parallel computational capabilities. In 2016, NVIDIA released its new Pascal architecture GPUs, including the Tesla P100. Nimbix has IBM POWER8 machines with four P100s available on-demand, connected by NVLink for inter-GPU communication and FDR InfiniBand for distributed communication across machines. Our library of on-demand, turn-key applications for Minsky + NVIDIA P100 systems includes PowerAI and NVIDIA DIGITS, and custom applications can be deployed with a simple git push to GitHub. The P100 is optimized for convolutional neural networks and matrix multiplication, with better throughput and memory performance than ever before. 2017 will bring a lot of exciting development in hardware acceleration. Much like Google's Tensor Processing Unit (TPU), which is a custom ASIC, specialized FPGAs in the cloud make it possible to iterate faster than ever by designing specialized logic to accelerate challenging deep learning problems.
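The reason "optimized for convolutional neural networks and matrix multiplication" is really one claim, not two, is that deep learning libraries commonly lower convolutions to matrix multiplies via the "im2col" trick: every sliding window of the input is unrolled into a row, so the whole convolution becomes a single large matrix product, exactly the operation GPUs excel at. A toy 1-D illustration in plain Python (real libraries do the 2-D, batched, multi-channel version):

```python
# Sketch of the im2col trick: unroll each sliding window of the
# input into a row, so convolution becomes one matrix multiply --
# the operation GPU hardware is optimized for.

def im2col_1d(signal, kernel_size):
    # One row per output position; each row is a sliding window.
    return [signal[i:i + kernel_size]
            for i in range(len(signal) - kernel_size + 1)]

def matmul_rows(rows, vec):
    # Dot each unrolled window with the kernel (a matrix-vector
    # product when the rows are stacked into a matrix).
    return [sum(a * b for a, b in zip(row, vec)) for row in rows]

def conv1d(signal, kernel):
    return matmul_rows(im2col_1d(signal, len(kernel)), kernel)

print(conv1d([1, 2, 3, 4, 5], [1, 0, -1]))  # -> [-2, -2, -2]
```

The unrolling costs extra memory, but it turns an irregular access pattern into one dense GEMM, which is why high-bandwidth memory and matrix-multiply throughput dominate deep learning hardware design.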
3. Massive Data
Data sets continue to grow, and almost every industry wants a bite of the AI Revolution apple. As any newcomer to this space quickly learns, data has mass: moving it around can easily become the bottleneck of an entire pipeline. New architectures are being envisioned to address this in innovative ways. As the world becomes more connected through IoT sensors of all shapes and sizes, real-time data processing architectures will continue to grow to process and respond to this data more effectively. Meanwhile, machine learning experts will focus on developing algorithms that train more effectively on less data, even as computational capabilities continue to improve.
We can’t wait to see the innovations in machine learning, deep learning, and artificial intelligence in 2017. What trends have you noticed in AI and machine learning throughout 2016 and what are your predictions for 2017? Tweet us @nimbix!