As we prepare for a new year in the world of HPC cloud, I find it is always good to spend some time reflecting on progress from the prior year and the implications for the next.
I think many who have spent time using or experimenting with HPC applications or workflows on public cloud resources would agree that steady progress was made in 2012. While many challenges remain around data movement, security, software licensing, and ease of use, we’ve all learned more about what it takes to get processing work done in more efficient ways. For this post, I summarize a few of my own observations from 2012 and then make some predictions for 2013.
To keep things simple, I’ll just list them out with some commentary:
Observations for HPC Cloud in 2012:
- Early large-scale HPC cloud deployments with open source software applications – Open source software still dominates cloud use cases, although many commercial software organizations will roll out more formal cloud strategies in 2013 (see below).
- Data challenges – There are really two big issues associated with HPC data and public clouds. One is the inherent challenge of transferring and storing (even temporarily) large data sets, and the other is data security. It’s no surprise that the early trailblazers in HPC cloud use cases are in segments where data sets are public or have less restrictive security requirements.
- Cloud costs still lack commercial-grade clarity – What I mean here is that most users still don’t have a clear picture of what their monthly cloud-utility bill will be. A few cloud expense management platforms are emerging, but the picture is still fuzzy for enterprise HPC.
- Cloud standards maturing – While standards are still shaping the cloud infrastructure industry as a whole, much of the standards debate centers on the machine stack and provisioning rather than on workloads and applications. I expect the focus will drift toward applications in the future.
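To make the data-movement observation above concrete, here is a rough back-of-the-envelope sketch of how long it takes to move a large data set over a wide-area link. The dataset size, link speeds, and efficiency factor are illustrative assumptions, not figures from any particular provider:

```python
# Rough transfer-time estimate for moving a large HPC dataset to the cloud.
# All numbers below are illustrative assumptions.

def transfer_hours(dataset_tb, link_mbps, efficiency=0.7):
    """Hours to move dataset_tb terabytes over a link_mbps link,
    assuming the link sustains `efficiency` of its nominal rate."""
    bits = dataset_tb * 8 * 10**12                    # decimal TB -> bits
    seconds = bits / (link_mbps * 10**6 * efficiency)
    return seconds / 3600

# A hypothetical 10 TB result set over common WAN link speeds:
for mbps in (100, 1000, 10000):
    print(f"{mbps:>5} Mbps: {transfer_hours(10, mbps):7.1f} hours")
```

At these assumed rates, a 10 TB data set takes days over a 100 Mbps link, which is exactly why transport acceleration and co-locating compute with data matter so much in HPC cloud planning.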
Predictions for HPC Cloud in 2013:
- Users and cloud providers will add more network bandwidth and data transport acceleration (such as Aspera) to reduce the time it takes to move large data sets between compute resources
- There will be increased use and deployment of data encryption technology which will continue to reduce barriers to cloud adoption
- Cloud provider offerings will center more around workloads, applications and processing pipelines versus pure infrastructure
- Mid-size and large organizations will migrate toward hybrid private/public infrastructure to optimize economics and monthly spend
- Leading HPC ISVs will provide more options and licensing flexibility for cloud enablement
- Accelerated platforms and larger memory machines will continue to gain traction in public clouds
- We will begin to see more sophisticated tools for cloud processing and workflow automation
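One way to read the hybrid private/public prediction above is as a break-even calculation: steady base-load work tends to favor owned capacity, while bursty work favors on-demand rental. The sketch below illustrates the idea; every price in it is a hypothetical placeholder, not a real provider rate:

```python
# Hybrid economics sketch: at what monthly utilization does owning a
# node beat renting one on demand?  All prices are hypothetical.

CLOUD_RATE = 2.40     # $ per node-hour, on-demand (assumed)
NODE_PRICE = 15000.0  # comparable in-house node, amortized over 36 months (assumed)
OPEX_RATE = 0.30      # $ per node-hour for power/cooling/admin (assumed)

fixed_monthly = NODE_PRICE / 36   # ownership cost whether the node is used or not

def monthly_cost_cloud(node_hours):
    """Pure on-demand: pay only for hours used."""
    return node_hours * CLOUD_RATE

def monthly_cost_owned(node_hours):
    """Owned node: fixed amortization plus per-hour operating cost."""
    return fixed_monthly + node_hours * OPEX_RATE

# Utilization above which the owned node is cheaper:
break_even = fixed_monthly / (CLOUD_RATE - OPEX_RATE)
print(f"Owning wins above ~{break_even:.0f} node-hours per month")
```

Under these made-up numbers the crossover sits at a modest fraction of a month, which is why mid-size and large organizations keep base load in-house and burst the overflow to public cloud.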
So while there is probably nothing earth-shattering in the above observations, I think it’s important to understand the themes that emerge. Those themes help shape our collective focus for solving problems in the next year and the years to follow. They help us discern the best standards and cloud deployment models, and finally, those observations and themes can help us make smarter business decisions.