Hadoop Cluster in the Cloud: Snapshot a Supercomputer!


Written by: on April 16, 2014

What if you could quickly build a Hadoop cluster in the cloud, and then snapshot it for later use on demand?

Snapshotting VMs has become routine in cloud computing. A user can install applications on a virtual machine, run some workloads, and then save the image off for later use, paying only for the time the machine was actually turned on.

As HPC has become more mainstream over the last few years, there have been many experiments in running workloads in the cloud and orchestrating VMs to do the processing. As Nimbix has continued to add features to JARVICE, our HPC cloud platform, we discovered an intriguing new capability: snapshotting a supercomputer.

The functionality is novel, but what are the benefit and use case? Imagine that you are a student or post-doctoral researcher who needs access to a certain class of supercomputing resources to get your work done. For many, grant proposals have to be written or budgets scraped together to buy the actual hardware for a supercomputer. I recall my brother’s work as a post-doctoral chemical oceanographer at Texas A&M. He literally had to build his own computing environment, which took him several weeks of working with hardware suppliers, getting machine specs, allocating funds, and assembling the system before he could even start his science.

Building a Cloud Supercomputer

There is an alternative. Of course, for available cloud HPC applications, users can just submit the jobs to NACC, but with JARVICE this hypothetical researcher could construct a cloud supercomputer in minutes, complete with GPUs and Infiniband interconnect! Here’s an example:

{ "files": [], "application": { "parameters": { "USER_NAE": "my_cloud_supercomputer", "qsub-nodes": 32, "sub-commands": {} }, "name": "nae_16c32-2m2090-ib", "command": "start" }, "customer": { "username": "naccuserid", "email": "email@emailaddress.net", "api-key": "XXXXXXXXXXXXXXXXXXXX", "notifications": { "sms": {}, "email": { "email@emailaddress.net": { "messages": { } } } } }, "api-version": "2.1" }

The above command submitted to the Nimbix cloud would construct a 32-node (512-core) system with dual NVIDIA M2090 cards in each node, interconnected with 56Gbps FDR Infiniband. The system is provisioned almost instantaneously, with one master (head node) and the rest as slave compute nodes in the cluster.
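If you prefer to script the submission, here is a minimal sketch of posting that payload over HTTPS. The endpoint URL and the payload file name are placeholders of my own, not documented values; consult the NACC API documentation for the actual submission endpoint.

import json
import requests  # third-party HTTP library: pip install requests

# Placeholder URL -- substitute the real submission endpoint from the NACC API docs.
API_URL = "https://api.example.com/nacc/submit"

# The JSON payload shown above, saved to a local file.
with open("cloud_supercomputer.json") as f:
    payload = json.load(f)

response = requests.post(API_URL, json=payload)
response.raise_for_status()
print(response.json())  # identifiers for the newly provisioned environment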

Once the supercomputer is provisioned, the user can install their preferred workload management software, applications, and other tools used to manage the cluster. After customizing the environment, the head node’s Nimbix Application Environment (NAE) can be saved or “snapshotted” for provisioning at a later time, or cloned to build a second cloud supercomputer.

Building a Hadoop Cluster in the Cloud

I have personally been playing with this functionality to experiment with building a small Hadoop cluster. Since the capability described above is currently available to advanced users willing to work with the API, I used our NACC CLI tool to submit an API call similar to the one above to build a 4-node cluster. I created a Nimbix Application Environment on a NAE_16C32-M2090 and installed Apache Hadoop with Infiniband RDMA support available from Ohio State (http://hadoop-rdma.cse.ohio-state.edu). While I’m not a Hadoop cluster expert, I was amazed at how quickly it provisioned with my installed stack. With minor initial configuration, I was ready to run benchmarks like TestDFSIO on my cloud Hadoop cluster. After running the default benchmark, I found that with RDMA enabled it ran almost 2x faster than over TCP. When I was finished, since I wasn’t going to come back to it for a few days, I took a snapshot, terminated the Hadoop cluster with a mouse click, and it was deprovisioned. I can now re-launch it at any time for further benchmarking. Pretty cool!
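For reference, here is a minimal sketch of kicking off the TestDFSIO write phase from the head node. The test jar name and path are assumptions; they vary by Hadoop version and distribution, so adjust them for your install.

import subprocess

# Path to the Hadoop jar that ships the TestDFSIO benchmark; adjust for your distribution.
TEST_JAR = "/usr/local/hadoop/hadoop-test.jar"

subprocess.check_call([
    "hadoop", "jar", TEST_JAR, "TestDFSIO",
    "-write",             # write phase; rerun with -read for the read phase
    "-nrFiles", "16",     # number of files (one map task per file)
    "-fileSize", "1000",  # size of each file in MB
])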

This capability facilitates building a Hadoop cluster, but of course it is not limited to that. We think there is a lot of potential when it is put in the hands of smart HPC users around the world. What kinds of environments can we build? What HPL benchmarks can be achieved? How much more efficient can we make researchers and data scientists? There are tremendous opportunities for accelerating innovation!

Impressions from the 52nd HPC User Forum


Written by: on April 15, 2014

Santa Fe, New Mexico - Site of the 52nd HPC User Forum

Last week I had the pleasure of attending the 52nd IDC HPC User Forum in Santa Fe, New Mexico.  Here are my impressions…

Industry Shake Up

IBM’s pending sale of its x86 server business to Lenovo represents a major change in the market share distribution among OEMs.  HP is set to be the sole leader in the space, with three players below it at about 15% share each.  IBM will focus its contributions on non-x86 hardware platforms as well as software.  Regardless, the x86 architecture will continue to dominate the market and, according to IDC, further increase its massive 80% share.

HPDA is Hot

High Performance Data Analysis, IDC’s term for Big Data meeting HPC, is heating up faster than any other segment of the market.  Enterprise analytics will continue to pull HPC into mainstream commercial applications.  We are looking forward to delivering purpose-built solutions that outperform traditional Enterprise infrastructure and solve real problems.  IDC also predicts that HPC growth as a whole will continue to outpace commodity Enterprise growth, as low-end buyers are back in growth mode in 2014.

The New Arms Race

HPC is a major part of the economic race that has replaced the cold war arms race internationally.  With all this attention, however, HPC sites now have to demonstrate ROI given the high cost of supercomputers (USD $200-$500 million in CAPEX alone), which leads to industrial partnerships.  The race to Exascale is dominating the international competition, with both bragging rights and real scientific progress on the line.  However, it’s a complicated problem that is starting to highlight software architecture as one of the major barriers.  Intel believes jobs that run at Exascale will have on the order of 1 billion(+) execution threads!  It’s no wonder coprocessors were used at 77% of sites in 2013, with NVIDIA leading the pack.  Despite remaining barriers in skills and applications, purchasing intent is strong for NVIDIA GPGPUs and Intel Phi, with FPGAs in third place.

Cloud HPC is Growing

IDC expects the public cloud for HPC to experience steady growth.  In 2013, 23.5% of HPC workloads ran in the public cloud, up from 13.8% in 2011.  At Nimbix, we see this as a result of the emphasis on software architectures that handle both scale and cost efficiency.  While the HPC User Forum marveled at the centers scaling jobs to thousands of cores, the “Missing Middle” continued to be in the spotlight.  For example, while one lab discussed running a 16,000-core Ansys Fluent job, the majority of all such jobs run painfully on 8-core desktop computers out of necessity.  The market is looking to the cloud to address the 128-512 core requirements most users have for these jobs.

Industrial Engagement

A major theme at the HPC User Forum was the importance of industrial engagement for centers, as a key to long term sustainability.  Unfortunately, many obstacles separate academia and industry, including cultural issues, staffing, confidentiality, etc.  Centers need major improvements in project management and service delivery practices.  We heard good examples of public/private partnerships from NCSA and Los Alamos National Laboratory.  The model works best when problems can be solved for multiple customers, leveraging both consulting and cycles as the keys to success.

The Coolest Tech Presented at the HPC User Forum

My vote goes to the two-phase immersion cooling from 3M and SGI, which can reduce energy cost by 95%.  3M’s Novec™ Engineered Fluid, which boils at a balmy 49 degrees centigrade, replaces water for cooling SGI’s systems.  Its low boiling point keeps immersed boards cool, and the fluid is continuously recycled.

IDC’s 52nd HPC User Forum was a meeting of the minds of the space, with a watchful eye toward the future.  Mega Trends like Big Data/Analytics and Cloud Computing continue to broaden the adoption of this amazing technology to solve real problems faster than ever before.

5 Reasons to Move Your Amazon AMI to a Nimbix NAE


Written by: on April 8, 2014

Moving from AMI to NAE

If you have been a consumer of cloud computing for any length of time, you probably know what an AMI is. The Amazon AMI (Amazon Machine Image) defines a virtual machine with its associated operating system and application software integrated as a complete image. A user can launch “instances” using these AMIs to run their applications in the cloud.

The Nimbix NAE (Nimbix Application Environment) defines the operating system and application software as a complete image, much like the AMI. These “environments” can be provisioned onto any number of hardware platforms to run applications in the cloud.

So what’s the difference besides just another acronym? The difference comes down to the kind of problems you are trying to solve. Most public clouds were originally offered to handle lightweight web service applications. But what if your applications require more hardware performance? What if they need GPUs for real-time rendering or to perform parallel calculations at bare metal speeds? Do your cloud cluster workloads suffer from performance bottlenecks? If you have been running your high performance or batch applications in AMIs on AWS, here are 5 reasons you might consider implementing them in a Nimbix NAE:

Performance

Most public clouds are built using virtualization technology. This means that a software hypervisor sits between the actual server hardware and your application. While the “hypervisor tax” in a lab environment is not terrible, a virtual machine will always be slower than its bare metal counterpart. In a real-world public cloud environment, where physical servers may be heavily subscribed, application performance can vary widely. This may be tolerable for lightweight web service applications, but when running high performance applications for data processing (whether transcoding videos or running engineering simulations), the penalty adds up. If a user is paying by the hour, slower performance means longer run times, and longer run times equate to higher cloud costs.

Lower Cost

It may be tempting to compare instance hourly pricing to machine hour pricing as apples to apples; however, this is not a fair comparison. Physical cores attached to physical memory will indeed cost more per hour than virtual cores attached to virtual memory. Ultimately, the best way to compare cloud costs is to look at the cost per workload. Users must also consider other less obvious costs associated with AMIs, such as EBS volumes, data transfer, I/O charges, etc. The Nimbix NAE is billed to the minute, delivering better granularity while also providing an “all in” cost on total cloud usage. The net result is overall lower spending and higher performance for HPC workloads.
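Here is a tiny illustration of why cost per workload, not cost per hour, is the right comparison. The prices, run times, and billing increments below are made-up numbers for illustration only, not quoted rates.

import math

def cost_per_workload(runtime_minutes, price_per_hour, granularity_minutes):
    """Bill usage rounded up to the provider's billing granularity."""
    increments = math.ceil(runtime_minutes / granularity_minutes)
    return increments * price_per_hour * (granularity_minutes / 60.0)

# A virtualized instance: lower hourly rate, but the job runs longer and
# usage is rounded up to whole hours.
vm_cost = cost_per_workload(runtime_minutes=95, price_per_hour=1.50, granularity_minutes=60)

# A bare-metal NAE: higher hourly rate, shorter run time, billed to the minute.
nae_cost = cost_per_workload(runtime_minutes=60, price_per_hour=2.00, granularity_minutes=1)

print("VM:  $%.2f" % vm_cost)   # $3.00 -- two full hours billed
print("NAE: $%.2f" % nae_cost)  # $2.00 -- exactly one hour billed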

Batch Ready

Nimbix NAEs come ready to run as batch applications. This greatly simplifies interaction with the Nimbix cloud, since NAEs can be provisioned and de-provisioned automatically as a processing task. While AMIs can be turned on and off, NAEs can simply be “run.” When your application exits after processing your data, the NAE is de-provisioned and billing stops.

Parallel Ready

For high performance computing users, many applications may require a single processing task to run across a cluster. The Nimbix NAE is by design a parallel processing environment. A simple API call will create an N-node cluster where each node communicates via an ultra-low-latency Infiniband interconnect. This delivers higher throughput for processing and dramatically eases the deployment of HPC applications in a cloud environment.

Human Interaction

While self-service is a key characteristic of any public cloud, we don’t have to take the human completely out of the machine. Sometimes, great solutions are engineered when people talk to people. This is particularly important on a metered, pay-per-use service. When you use an NAE, the Nimbix team is there to answer questions and provide a high level of human support.

In summary, for high performance computing or high performance data analytics applications, Nimbix NAEs provide a compelling cloud alternative to Amazon AMIs for batch processing. Whether you wish to boost performance, reduce your monthly cloud bill, get more human support, or build high performance cloud clusters, Nimbix NAEs are certainly worth a test drive.

5 Cool Supercomputer Names


Written by: on April 1, 2014


Supercomputer names are as much a part of High Performance Computing as the hardware and software itself.  You just can’t build one and not name it!

What Are Supercomputer Names All About?

To be memorable, supercomputer names should be single words, evoking something powerful and significant from the outside world.  Most of these systems are one of a kind, permanently installed in some specific site.  A unique name adds the human touch to an otherwise robotic stack of cores, GPUs, cables, and software.  Here’s our take on 5 cool supercomputer names and the story behind them…

Hopper


While not quite as fast as its cousin Titan, its name honors one of the greatest figures in computing history: Admiral Grace Hopper.  She was a true pioneer in the field, developing the first compiler in history, among many other significant achievements.  Admiral Hopper also introduced us to one of the most endearing (and dreaded) terms of all time: the computer “bug”.  In 1947, her associates discovered a moth stuck in a relay of her Mark II computer.  Even though we no longer use mechanical parts in our systems, the name obviously stuck.  Remains of this infamous moth can be found at the Smithsonian Institution’s National Museum of American History.

For its part, Hopper (the supercomputer) was ranked 28th in the world at the end of 2013.  It’s no slouch, and no doubt owes a lot to the amazing lady it’s named after.

Stampede


While not quite the coolest supercomputer name in Texas (more about that later), it’s still worth mentioning.  At over 5 PetaFLOPS, it was ranked 7th in the world at the end of 2013.  Stampede reminds us of the power of a thundering herd of longhorns charging in unison, leaving tremors and dust in their wake.  That’s about right for a supercomputer that lives in Austin, Texas.

Vulcan


Does this supercomputer name refer to the Roman god of fire, or to the great planet Mr. Spock hails from?  Since it was built by geeks, it might just be the latter!  Vulcan cranks out over 4 PetaFLOPS at the Lawrence Livermore National Laboratory, and was ranked #9 in the world at the end of 2013.

Pangea


Computing legends, herds of angry cattle, gods – why not supercontinents?  Pangea (the continent) once encompassed all the land on Earth in one contiguous mass.  This meant you could walk from Alaska to Australia if you really needed the exercise.  It began to break apart 200 million years ago and gave birth to the 7 continents we have today – tiny pieces of a giant jigsaw puzzle in comparison.

Pangea (the supercomputer) is a 2 PetaFLOPS beast ranked 14th in the world at the end of 2013.

JARVICE


Last but certainly not least, there’s JARVICE.  This is the other Texas beast, not yet ranked, but certainly with the coolest supercomputer name of all.  Its mission is to change the cloud from a bunch of virtual machines to a streamlined processing system that just takes your data and gives you back the results.  There’s no need to worry about “spinning up” or shutting down “instances” or anything else that would get in the way of your computing tasks.  Just feed it your problem, and out comes the solution.  All you pay for is what you use.

JARVICE is an acronym for Just Applications Running Vigorously In a Cloud Environment, and was inspired by Tony Stark’s JARVIS system in Iron Man (sort of his private Watson).  Additionally, in the movie The Avengers, Mr. Stark brushes off the complexity of solving a large computational problem by simply offering to push it down to the “Homer” machine’s 600 TeraFLOPS for processing.  Legend says that when Nimbix co-founder and CEO Steve Hebert saw the movie, he practically jumped out of his seat yelling “that’s Nimbix!” in a crowded movie theater.  Shouldn’t all computers work that way?  Have a huge problem to solve?  No problem really, just send it the data, and wait for the solution to come back.

Other than JARVICE, what do you think are the coolest supercomputer names out there?

How Much do 120 teraFLOPS Cost?


Written by: on March 25, 2014

Can you win the Nobel Prize with 120 teraFLOPS?

Last week, Southern Methodist University in Dallas, Texas unveiled the “ManeFrame” – a 120 teraFLOPS supercomputer valued at $6.5 million.  Previously named Mana and stationed at the Maui High Performance Computing Center (MHPCC), this system made the top 500 list of worldwide supercomputer sites as recently as the end of 2012.  Needless to say, it’s a very exclusive list that is difficult to earn a spot on.

SMU is no stranger to supercomputing – its existing $14 million datacenter has already supported cancer drug testing and physics research, including work that led to the Higgs boson discovery.  This in turn helped earn François Englert and Peter W. Higgs the 2013 Nobel Prize in Physics.

Equally amazing was the price SMU paid for the ManeFrame – just $50,000.  The U.S. Navy graciously asked only for shipping costs in exchange for this incredible machine.  This is strong recognition for SMU’s contributions to science and will no doubt help the university continue to advance the greater good.  As proud members of the High Performance Computing community here in Dallas, we at Nimbix congratulate SMU for this exciting new addition to the family.

What if this type of computational power were available to anyone on demand, and how much would it cost?  Recently we wrote about HPC on demand enabling innovation for the masses.  The more people who can get their hands on this type of capacity, the more breakthroughs we are likely to see and benefit from as a result.  In fact, even a fraction of this power is generally enough to solve many of the toughest problems we can think of.  Most supercomputers, just like computing clouds, are multi-tenant.  It’s unlikely that a single user can consume the entire system at any given time.

At $6.5 million, ManeFrame’s equipment alone costs over $54,000 per teraFLOPS!  An example electromagnetic simulation job consuming 7.8 teraFLOPS and running for 12 hours would require a $600 “chunk”.  And that’s not the half of it (literally).  It’s no accident that SMU’s datacenter is valued at $14 million.  Supercomputers are hot-blooded beasts with an insatiable thirst for cooling, electricity, and computing skills (read: humans) to keep running.  If you were to assemble one of these yourself, you would need to either build or lease a datacenter large (and cool) enough to fit it in.  Then there’s the electric bill.  It’s neither cheap nor a “solo project”.

Back to our HPC on demand idea… why not pay only for what you need, when you need it?  Using our JARVICE supercomputer, we can deliver 120 teraFLOPS for less than $200 per hour.  The same electromagnetic simulation costing close to $600 on the ManeFrame (in its slice of CAPEX cost alone) can be yours for only $156 total on the Nimbix cloud.  And unlike the ManeFrame, it shuts off automatically when it’s done, so you don’t keep paying even when it’s idle.  The $156 includes space, cooling, power, and humans to help make sure it runs smoothly as well.  Don’t forget that you can’t just “slice off” $600 of ManeFrame – you still have to invest $6.5 million, plus the operating costs.  With JARVICE, the $156 is the total amount you pay for the example job, with no other strings attached.
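For the curious, here is a quick sketch of the arithmetic behind those figures. The cloud price for the example job simply scales the sub-$200 hourly rate by the fraction of the machine the job uses; the amortization assumptions behind the ManeFrame “chunk” are not reproduced here.

# ManeFrame: equipment cost per teraFLOPS
mane_cost_usd = 6500000
mane_tflops = 120
print(mane_cost_usd / mane_tflops)  # ~54,167 USD per teraFLOPS

# The example job on JARVICE: 7.8 teraFLOPS for 12 hours,
# priced as a fraction of the (less than) $200/hour, 120-teraFLOPS rate.
jarvice_hourly_usd = 200
job_tflops, job_hours = 7.8, 12
print((job_tflops / mane_tflops) * jarvice_hourly_usd * job_hours)  # 156.0 USD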

How can we do this?  First, JARVICE is a bit more modern than ManeFrame, and utilizes the latest GPUs.  This means much higher core density and considerably lower overall operating costs.  JARVICE is “green”, saving not only energy but also money, which we pass directly on to you.  Second, JARVICE is open to the public and constantly in use, running many different kinds of jobs at once.  Privately accessible supercomputers face the challenge of staying utilized enough to make their behemoth costs worthwhile.  JARVICE, on the other hand, never rests.  This drives its marginal cost down low enough for us to make it available to you at such an attractive price.

ManeFrame’s 120 teraFLOPS join a proud program at SMU that ultimately helped two scientists win the Nobel Prize.  What can you do with 120 teraFLOPS of HPC on demand?  We can’t wait to find out!

Observations from IDC Directions 2014


Written by: on March 18, 2014

Santa Clara Convention Center

Although I’ve spent many years traveling back and forth to Silicon Valley from my days in the semiconductor industry, on my most recent trip I was able to spend an evening and a day in my old college town of Santa Clara, CA. I was there to attend IDC Directions, the 2014 installment of an annual event covering the current and future state of the IT and computer industry. The theme of this year’s event was Transformation Everywhere: Battles for Leadership in the 3rd Platform Era.

The 3rd Platform describes the current period, following the earlier eras of computing: mainframes and terminals (the 1st Platform) and client/server (the 2nd Platform). The 3rd Platform is built on the 4 pillars of Mobile, Cloud, Big Data and Social Technologies. Its foundation was laid with the first shipments of the iPhone, at about the same time the market started to embrace modern cloud computing. This powerful combination of scalable computing with mobile technologies has been transforming society and industries at a breathtaking pace. We are innovating at a scale and speed the world has never seen before.

One of the particularly interesting themes from Frank Gens’ morning talk at IDC Directions was data as the “new gravity” in computing. Rather than orienting data to the computers, with cloud we can orient computers to the data. This is driving change in the way we develop and deploy applications, ushering in cloud-centric models to replace aging methodologies. According to IDC, there will be 10x growth in new cloud apps, worth about $20B by 2017. These apps are being created by close to 18 million professional and hobbyist developers, swelling the cloud developer community by 3x over the next few years.

And, while many believe the above all leads to the commoditization of cloud infrastructure, Frank contends that this is not the case. Instead, he argues, there will be innovation in infrastructure to cater to the myriad specialized workloads that will be running in the cloud.

As cloud providers, then, we must continue to invest in growing not just scale, but also platform capabilities that simplify deploying powerful cloud applications. In our case at Nimbix, we built our JARVICE platform to support three primary workflow components in high performance computing: Build, Compute, and Visualize.

Build functionality enables software and application developers to implement and capture their environments in the Nimbix Accelerated Compute Cloud. Compute is the run-time API call that lets users and consumers simply “execute” applications, while Visualize provides 3D graphical environments for data interpretation and post-processing.

The result of the JARVICE architecture is a cloud platform that revolves around data as the new gravity. It enables unique heterogeneous hardware that brings new tools to developers, and a framework that makes it easy to both develop and consume high performance, parallel computing applications in the cloud.

So as we continue to warp through the 3rd Platform era in computing, JARVICE is leading the way for high performance cloud application developers. Perhaps at a not-so-distant future IDC Directions conference, we will all be talking about the 4th platform era, whatever that may be.

3D Workstation in the Cloud!


Written by: on March 11, 2014


As a follow-up to our recent announcement, here is a video demonstration of a JARVICE 3D workstation running on the Nimbix Accelerated Compute Cloud:

JARVICE 3D NAEs (Nimbix Application Environments) can be used to install and/or develop applications, and saved for future use.

Connecting to a JARVICE 3D Workstation

If you are using a Windows PC, Mac OS X, or Linux client, you should download TigerVNC to connect.  This gives you the best performance and security, and supports advanced features such as remote window resize and full-screen support for multiple monitors.  With TigerVNC, your connection to the 3D Workstation is secured with strong encryption automatically.  This way all keystrokes are transmitted securely, and intruders cannot snoop on passwords or other sensitive information as you type.

If you are using a platform where you cannot run TigerVNC, you can connect with any VNC client.  For best results, choose a client that supports “Tight” encoding, if available for your platform.  If the client prompts you for a screen number, always enter 1.  If instead it prompts you for a port number, always enter 5901.  If it does not explicitly prompt you, append :1 to the end of the IP address shown in the NACC portal “Connect” window.
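The screen number and port number are two views of the same convention: VNC maps display N to TCP port 5900 + N.  A tiny sketch follows; the address is a placeholder for whatever the portal’s “Connect” window shows.

# VNC convention: TCP port = 5900 + display (screen) number.
# JARVICE 3D Workstations use display 1, hence port 5901.
host = "203.0.113.45"   # placeholder for the IP shown in the NACC portal
screen = 1

print("%s:%d" % (host, screen))         # what most VNC clients expect, e.g. 203.0.113.45:1
print("%s:%d" % (host, 5900 + screen))  # for clients that ask for a raw port: 203.0.113.45:5901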

Additional 3D Workstation Capabilities

A JARVICE 3D Workstation can also run OpenGL applications and computation (e.g. CUDA) at the same time on the GPU.  This allows you to run visualization/design software as well as solvers in the same environment!

Build Cloud HPC Applications in Under 15 Minutes!


Written by: on February 25, 2014

Building and deploying complete HPC applications to the cloud using the JARVICE platform is both quick and easy.  This video demonstrates development, deployment, and batch execution of a functional High Performance Computing application from the ground up in less than 15 minutes.

You can control the JARVICE platform either through the NACC portal web GUI, or the NACC API using an open source, cross-platform command line tool called NACC CLI.  This “best of both worlds” approach delivers point and click simplicity when you want it, or powerful scripting and automation capabilities when you need them.  Once HPC applications are onboard the Nimbix cloud, it’s easy to execute them in batch at whatever scale you need.  JARVICE also allows you to update your code whenever you like, in a fully self-service way.


Software Licensing in the Cloud


Written by: on February 18, 2014


Many dream of a world where all software is free (as in speech and as in beer!).  In the real world, however, software licensing is not leaving us any time soon.  Companies spend enormous resources researching, developing, and supporting high quality applications, and must recover that investment.  As an end user, you get to decide if there is sufficient value in paying a vendor to use their product, or search for a free alternative that suits your needs.  Once you’ve made the decision to go commercial, software licensing becomes a key consideration in present and future use of your applications.

Before use, however, comes deployment.  If you’ve been paying attention, chances are you are trying to take advantage of cloud computing.  We all know the benefits by now.  But how does this paradigm fit with traditional software licensing and the industry’s strong resistance to change?  Let’s take a look at 3 different methods and how they work in the context of cloud computing…

Perpetual Software Licensing

This type of software licensing entitles you to use an application with certain terms and conditions for as long as you wish, in exchange for the full cost up front.  Restrictions vary, but typically cover who (or how many) may use the software.  Most vendors require a yearly fee to provide technical support and upgrades, but you can continue using the application even if you decide to stop paying for this.

While perpetual software licensing is very common and easy to understand, it’s the least friendly to cloud computing.  If you’re an end user subscribing to Infrastructure-as-a-Service, you basically have to deal with two vendors (your cloud vendor, and your application vendor).  What’s more, even if you only use the infrastructure part time (which is a major reason why you chose cloud computing to begin with), you still need to buy a full copy of the application.

If you’re a cloud service provider looking to deliver perpetually licensed applications in a Software-as-a-Service fashion, you must capitalize the product up front.  Calculating ROI on this model is both complex and full of uncertainty, requiring a bit of a “leap of faith”.  The bottom line is your customers are looking to pay for use, and you in turn must pay vendors for what is essentially inventory.  This is not an ideal situation!

Subscription Software Licensing

A friendlier way to license software for cloud computing applications is by subscription.  There are different styles within this genre of software licensing, but typically vendors charge for peak current users or devices accessing their applications.  Cloud service providers naturally prefer to pay in arrears, since that’s how they bill their customers.  This approach involves end users consuming applications during a period of time, and service providers reporting (and paying for) metered usage back to the software vendors.

More and more traditional software companies are embracing subscription licensing.  Even Microsoft’s new CEO Satya Nadella famously changed the sales compensation model for Office (perpetual) to give equal weight to Office 365 (subscription).

Typically, over some period of time (1-3 years depending on vendor and application), subscription software licensing costs more than its perpetual counterpart.  But end users enjoy peace of mind knowing they are always using a fully supported, fully up to date copy of their favorite applications, without the hefty up-front investment.

If your application vendor is your cloud service provider, then subscription software licensing is quite reasonable.  But for cloud service providers who deliver subscription licensing in a pay per use model (as most cloud computing consumers demand), the subscription periods are still way too long.  End users consume cloud applications by the hour, or even minute, which is vastly more granular than the monthly term most subscription software licensing dictates.

Pay-per-Use Software Licensing

“Nirvana” for cloud service providers and consumers alike is pay-per-use software licensing.  In this model, software vendors are paid in arrears for the exact metered usage of their products.  Consumers pay one price, which includes both infrastructure and applications.  Cloud service providers use the same model to monetize their entire stack without having to predict any more than they already have to.  Everybody wins.

Of course, traditional software vendors are very protective of massive Enterprise License Agreements that lock customers into yearly commitments for products and services.  Pay-per-use is disruptive to this model, although over time it is sure to produce more revenue.  Just as infrastructure costs more to rent than to buy for full-time use, so does software.  Once vendors start embracing this and adapting, one of the major barriers to cloud adoption (software licensing) will finally fall.  We can’t wait!

No matter how you pay your software vendors for the right to use their products, make sure you have a partner who can help you make sense of it all.  Nimbix offers cloud computing solutions for all software license models, whether you are an ISV, end user, or developer.

Cloud Computing: 5 Things You May Not Know


Written by: on February 11, 2014

Cloud computing is all about people quietly “spinning up” instances in some giant network on the Internet to solve all problems, right?  Guess again!

Cloud Computing is Not New!

While many would argue the modern incarnation of cloud computing is only a few years old, there’s no denying that the conceptual and practical underpinnings date back at least to the early 1960s.

Things we take for granted today as “inherent” in cloud computing, such as resource pooling (first implemented as time sharing), measured service, and rapid elasticity are decades old.  To say this is not new is a massive understatement – cloud computing is actually ancient by modern standards.

There’s no Such Thing as “The Cloud”

We’ve all seen advertising campaigns shouting slogans like “to the Cloud!”, leading us to believe in some singular omnipotent information and processing system in the sky.  Even if we limit our view to just the public cloud, this is far from true.  There’s the Google cloud, the Microsoft cloud, the Salesforce.com cloud, and let’s not forget the Amazon cloud.  Yes, there’s the Nimbix cloud too, for when you need High Performance Computing as a service!

The cloud is actually multiple clouds

If the last 3 decades of personal and business computing were about applications, the next 3 will be about cloud aggregation.  A whole new segment of cloud computing products and services has emerged just to help businesses tackle this.  Gartner calls it Cloud Services Brokerage (CSB).

As consumers, our mobile devices and PCs are more about receiving, presenting, and modifying information from multiple clouds than about actually running applications.  In fact, most desktop and mobile applications today are simply cloud “receivers.”  It’s no wonder we are seeing such diversity in operating system adoption compared to, say, the 1990s – platform affinity is more and more a matter of preference now, not application support.

Cloud Computing is not About “Instances”

Many people mistakenly associate cloud computing with virtualization and, given the popularity of commodity public Infrastructure-as-a-Service, with “instances”.  It’s ironic that the lowest common denominator in cloud services is so much a part of technology culture that even some professionals think that’s all there is to it.

Cloud Computing is not all about instances!

Giving an end user an ephemeral virtual machine “instance” certainly serves some purposes, but it doesn’t really solve any problems.  Higher value platforms and applications are what solve problems – many of which don’t even leverage commonly understood virtualization techniques due to their overhead and cost.  Recently we learned about the anatomy of HPC Cloud, for example, where virtualization is generally absent.  But there is also email, collaboration, and customer relationship management (CRM), to name a few examples of cloud computing applications and platforms that deliver real solutions without offering or even leveraging “instances”.

Cloud Computing is “Noisy”

Well, chances are you already know about this one.  Pundits and “strategists” bicker about OpenStack versus CloudStack versus Azure.  Developers argue about APIs.  Microsoft claims you are “Scroogled”.  The list goes on and on.

Cloud Computing is riddled with noise!

The most powerful tool we have as end users is not on our PC or mobile device, but rather in between our ears – specifically, the ability to ignore noise and focus on what’s important to us.

When choosing cloud computing infrastructure, architecture, or providers, what matters most is that our problems are solved.  First we have to actually articulate our problems and decide what’s really important.  Is portability between clouds important (if so, why)?  What about privacy and data protection versus cost and ease of use?  You get the picture.  Solutions become easy choices once we’ve identified and prioritized our needs.

Cloud Computing does not Solve Everything

This should be pretty obvious as well.  Technologists tend to fall in love with Megatrends, which is often like missing the forest for the trees.  The end result can be wasted time, effort, and money.

Cloud computing is not the answer to every problem!

There are many problems cloud computing solves well.  There are others that it can also solve, but at higher cost and lesser quality than traditional infrastructure.  Before jumping “into the clouds”, remember that like anything else, cloud computing is about solving problems, not perpetuating technology or advancing a “cause”.