If you’ve followed cloud computing lately, chances are you’ve stumbled upon containers as an alternative to hypervisor-based virtualization.  Popular technologies such as JARVICE and Docker are making a major impact.

What Are Containers?

At a high level, containers are an application-centric way to virtualize workloads far more efficiently than traditional hypervisor technology, such as that found in commodity cloud Infrastructure as a Service.  Modern operating systems (Linux, Windows, etc.) are made up of two basic parts: kernel space and user space.  Kernel space, as its name implies, is home to the operating system kernel: the low-level code that boots the machine, controls hardware, provides subsystems (networking, storage, etc.), and schedules tasks.  Tasks (processes, threads, etc.) run in user space, which is home to applications and services.  Different operating systems draw the functional “split” between kernel space and user space with different degrees of modularity, but all architectures are conceptually very similar.  While hypervisors run virtual machines that contain both spaces, containers virtualize just the user space, greatly reducing complexity and redundancy.  The immediate benefit is higher performance and less “bloat”, which is extremely important to the economics of cloud computing.  The popularity of containers is a direct result of the realization that hypervisor-based technologies are expensive to host and manage for many types of applications.
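
To make the “shared kernel, private user space” idea concrete, here is a minimal sketch using Linux namespaces, the kernel facility that modern container runtimes build on.  It isolates a single user-space attribute (the hostname) while leaving the kernel itself shared.  This assumes a Linux host with glibc and root privileges, and is an illustration rather than a production container:

```python
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000  # new UTS (hostname) namespace, from <sched.h>

# Call the Linux unshare(2) system call via libc to detach this process
# into its own UTS namespace.  Requires root (CAP_SYS_ADMIN).
libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")

# Inside the new namespace the hostname is private to this process tree;
# the host's own hostname is untouched.
socket.sethostname("container-demo")
print("namespace hostname:", socket.gethostname())

# The kernel is still the host's kernel - only user space was virtualized.
print("shared host kernel:", os.uname().release)
```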

Where Did Containers Come From?

As is common in software technology, containers are not new.  Most of today’s Linux-based examples can trace their lineage directly back to what is known as a UNIX “chroot jail” (chroot is short for “change root”), developed in the late 1970s and added to BSD (a popular UNIX variant) in 1982.  While today’s technology is far more advanced, the underlying concept is the same.  A chroot jail allows you to load just the user space bits of an application and its dependencies, and then lock execution inside that space (hence the term “jail”).  Code running in the jail has no visibility into the underlying host operating system it’s running on, except when it interfaces with kernel space (e.g. to access storage, networking, or graphics).  Kernel space remains outside the jail.  A host can run any number of jails, and processes running inside the jail(s) look like normal applications and services to the host kernel.  For example, if you wanted to run an Apache web server inside a chroot jail, you would only need to create a directory structure containing the Apache binary, along with the libraries it needs to load (such as the C runtime library).  When you run that Apache server, it can only see files that live in its jail, which makes for a very secure way to run web services.
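
To make this concrete, here is a minimal chroot sketch.  The jail directory /srv/jail is a hypothetical path used purely for illustration, assumed to have been populated ahead of time with a shell binary and the shared libraries that binary needs (which the ldd utility can list).  Like the namespace example above, it must run as root on a UNIX-like host:

```python
import os

JAIL = "/srv/jail"  # hypothetical, pre-populated jail directory

os.chroot(JAIL)  # "/" now resolves inside the jail for this process
os.chdir("/")    # drop any working directory that points outside the jail

# From here on, this process (and anything it execs) can only see files
# under /srv/jail; the host's real filesystem is invisible to it.
os.execv("/bin/sh", ["sh"])  # i.e. /srv/jail/bin/sh on the host
```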

Why Would Anyone Use Virtual Machines?

Containers have limitations, of course.  Most significant is that the applications they run must be compatible with the underlying “host” operating system kernel.  For example, you cannot run a Linux application on top of a Windows host, as you can when using Hyper-V.  But on the flip side, there are tremendous benefits, such as performance, orchestration velocity, and hardware accessibility.  And to further soften the kernel compatibility limitation, containers may indeed host different “flavors” or “personalities” of the same operating system.  For example, it is possible to run unmodified Red Hat Linux applications on an Ubuntu kernel, and vice versa, as the sketch below shows.
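
One way to see this “personality” split for yourself: the short script below reports the user-space identity and the kernel release separately (it assumes a Linux system with a standard /etc/os-release file).  Run it on the host and then again inside, say, a CentOS container on an Ubuntu machine; the userland line changes, while the kernel line stays the same because the kernel is shared.

```python
import os

# User-space identity: which distribution's userland ("flavor") this is.
with open("/etc/os-release") as f:
    for line in f:
        if line.startswith("PRETTY_NAME="):
            print("userland:", line.split("=", 1)[1].strip().strip('"'))

# Kernel identity: always the host's kernel, shared by every container.
print("kernel:  ", os.uname().release)
```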

Virtual machines used to be more secure than containers, providing stronger isolation than traditional chroot jails.  But this is no longer universally true, as newer facilities in the Linux kernel (such as namespaces and control groups) have largely closed the gap.

There are really only two major reasons to use virtual machines over containers.  One is if you need to run applications that are not compatible with the host operating system.  Since Linux is by far the most popular platform for cloud computing applications, spanning both computationally intensive (HPC) workloads and web services (e.g. LAMP-based stacks), this is becoming a narrow use case.  The second is if you need to transparently migrate workloads between hosts, to rebalance utilization for example.  This is interesting to commodity clouds that oversubscribe workloads, but it really has no place in a High Performance Computing environment, where each node “host” typically runs only one application at a time.

More About Performance Benefits of Containers

Containers run applications faster than virtual machines because they do a lot less work, thanks to far less redundant code.  For example, if you want to establish an outbound TCP/IP connection inside a container, you simply call the host kernel (outside the container) to do it and get back a socket you can use immediately.  In a virtual machine, you have to emulate the entire network subsystem, including the driver, re-implementing multiple OSI layers the host kernel already supports, because the application inside the virtual machine cannot just call the host kernel directly.  That same TCP/IP connection inside a virtual machine means calling the virtualized kernel, which in turn hands the request over to its virtualized network subsystem, which in turn runs the virtualized TCP state machine, which in turn injects packets into the virtualized network driver, which then crosses a host bridge before finally reaching the physical network.  No matter how fast the virtualized code is, and no matter how much paravirtualization is used, this is still a lot of redundant work.  If you are oversubscribing, every single virtual machine running on the host must in turn do the same thing – so not only is the functionality redundant between the host kernel and the virtual machine, it’s redundant among all the virtual machines.
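
The container half of that comparison is literally just ordinary socket code.  A sketch like the one below, run inside a container, issues the same connect() system call straight into the shared host kernel’s TCP/IP stack that a bare-metal process would, with no guest network stack in between (example.com and port 80 are arbitrary stand-ins):

```python
import socket

# One system call into the shared host kernel's TCP/IP stack; inside a
# container this takes the same path it would on bare metal.  In a VM, the
# equivalent call would first traverse the guest kernel's network stack,
# its virtual driver, and a host bridge.
sock = socket.create_connection(("example.com", 80), timeout=5)
print("connected from", sock.getsockname())  # socket handed back by the kernel
sock.close()
```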

If you are not oversubscribing, as is desirable in HPC, you still pay a “hypervisor tax”.  While a lone virtual machine is not competing for resources with others, it must still waste time executing a lot of redundant code.  Some virtualization vendors place the cost of the “hypervisor tax” in this single-virtual-machine mode at 5% or less, but in practice, degradation can be much higher.  Even 5-10% adds up at scale: on a 1,000-node cluster, a 5% tax effectively forfeits 50 nodes’ worth of compute.  If you are an HPC-focused ISV, chances are you’ve painstakingly optimized every bit of code to leave nothing on the table.  You’ve labored day and night to win that last 1-2% of performance.  Why would you dilute your hard work with the built-in overhead of virtual machines rather than enjoy the bare metal performance of containers?

Containers are changing the game in cloud computing, despite the large installed base of traditional hypervisor-based infrastructure.  In a world where economics rule, better performance, lower overhead, and faster orchestration are driving adoption of this exciting technology.