PushToCompute™ Tutorial

What is PushToCompute?

PushToCompute, a platform capability of JARVICE, is a method for rapid cloud deployment of containerized compute-intensive applications.  PushToCompute automatically converts Docker images to the native JARVICE format as part of the workflow.  The JARVICE container runtime is optimized for high performance and accelerated applications, and powers all workflows available in the Nimbix Cloud.

PushToCompute Features

  • Standard Docker container format for building and packaging
  • Automatic conversion to high performance JARVICE format for runtime
  • Automatic high performance platform orchestration during Nimbix Cloud runtime, including:
    • Baremetal execution (no hypervisor overhead)
    • Applications start in seconds (or faster, depending on requested hardware), leveraging Nimbix’s high performance distributed platform storage architecture
    • Advanced accelerators and coprocessors (supercomputing GPUs, FPGAs)
    • InfiniBand RDMA interconnects
    • Heterogeneous data management with POSIX filesystem abstraction (in /data)
    • Parallel and distributed application support at large scale (multiple nodes, large memory, distributed scratch storage)
  • Deployment automation via Web Hooks
  • Single node “JARVICE emulator” built into base images for unit testing
  • Supports both interactive (graphical, web-based, etc.) and batch (solvers, etc.) applications
  • Point-and-click setup for basic application definitions, with additional options to create JARVICE portal front-ends for workflows using JSON definitions
  • Optional pricing and publishing of your applications (contact Nimbix for details)

Getting Started

Prerequisites

  • A Nimbix Cloud account with a payment method on file
  • A public or private repository on a Docker registry (e.g. Docker Hub), containing an image that derives from a Nimbix base image
  • Optionally, a source repository from which to build (e.g. GitHub)

Workflow Examples

  • unit test -> Docker build -> Docker push to registry -> Explicit pull from JARVICE portal (sketched below)
  • unit test -> Docker build -> integration test -> Docker push to registry -> JARVICE deployment via Web Hook from registry
  • unit test -> push to GitHub -> automated build in Docker Hub -> JARVICE deployment via Web Hook from Docker Hub
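
As a sketch of the first workflow above, assuming a hypothetical image named myuser/myapp (substitute your own registry account and image):

# build the image, then unit test it using the built-in JARVICE emulator
docker build -t myuser/myapp .
docker run -h JARVICE myuser/myapp /usr/lib/JARVICE/tools/sbin/init <command-line>

# push to the registry, then pull explicitly from the JARVICE portal
docker push myuser/myapp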

Base Images

Nimbix provides Docker base images for various flavors and configurations of Linux – using other base images in the FROM directive of your Dockerfile may work, but is not explicitly supported.  If your application does not work with a third-party base image, please reconstruct it using a Nimbix base image before contacting Support.  You can search for Nimbix base images as follows:

docker search nimbix

The images are tagged by version.  For example, centos-base has both a 6 and a 7 tag at the time of this writing.  The latest tag always refers to the newest version that is not considered experimental.  There are also three types of image for each flavor/version: base, desktop, and cuda.  Use base for batch-type applications that do not depend on CUDA, desktop for interactive graphical applications, and cuda for applications leveraging the NVIDIA CUDA toolkit.  Nimbix packages the latest release (not release candidate) version of NVIDIA CUDA.  When new versions are released, they will be added, but the existing ones in the cuda image will not be removed.  If your application depends on a specific version of CUDA, it is best to set that version explicitly rather than rely on the /usr/local/cuda symlink, which will always point to the latest.
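
For example, a Dockerfile might pin a CUDA version explicitly as follows (the 7.0 directory is purely illustrative – use whichever versioned directory your chosen cuda image actually ships):

FROM nimbix/ubuntu-cuda:trusty

# pin a specific CUDA version rather than following the /usr/local/cuda symlink
ENV PATH /usr/local/cuda-7.0/bin:$PATH
ENV LD_LIBRARY_PATH /usr/local/cuda-7.0/lib64:$LD_LIBRARY_PATH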

Limitations and Caveats

  • You may not install kernel drivers inside your containers.  JARVICE automatically deploys the appropriate NVIDIA driver at runtime if you select GPU hardware.  If you wish to install an earlier version of CUDA, make sure you install just the toolkit, not the kernel driver as well.  You can check the Dockerfile for any of the cuda base images for details on doing this from the NVIDIA repositories.
  • ENTRYPOINT and EXPOSE instructions are ignored in the JARVICE runtime.  Use these only for unit testing; any command you run on JARVICE must be explicit, so consider using a wrapper script and launching that as part of a docker run command to test your application instead (see the sketch after this list).
  • OpenGL server-side rendering is available but defaults to using the latest Mesa client libraries available for the base image chosen (desktop and cuda images only).  If you wish to use the NVIDIA OpenGL libraries on NVIDIA GPU-based systems, you must create a configuration file explicitly (see below).  Note that on non-GPU nodes JARVICE automatically deploys Mesa with software rendering.  On GPU nodes Mesa performs hardware rendering automatically, but there may be cases where the NVIDIA libraries are preferred.
  • JARVICE offers support for Bitfusion Boost automatically.  This is native to the platform and you should not attempt to install any Bitfusion packages inside your containers.  JARVICE automatically deploys the Boost client when you select a Bitfusion-powered machine type at runtime.  There is also no need to prefix your commands with bfboost client, as this is inherent in the platform.
  • If using PushToCompute, you should not manage or run images directly using the Images or Launch tabs in the portal.  Doing so may interfere with your future application deployments and may behave differently from the PushToCompute workflow.  You will see your images with the extension .app – these are images managed by PushToCompute and should not be launched or modified directly.
  • Information about the JARVICE runtime environment filesystem layout (known as Nimbix Application Environment) can be found in the section Layout of The Nimbix Application Environment in the JARVICE Quick Start Guide.
  • Applications run as the nimbix user, which is managed automatically by JARVICE at runtime.  Nimbix base images configure this user for passwordless sudo access.  When unit testing, run your commands as follows:
    # batch workflows
    docker run -h JARVICE <image> \
      /usr/lib/JARVICE/tools/sbin/init <command-line>
    
    # graphical applications (redirect ports for VNC/HTML5 access)
    docker run -h JARVICE -p 5901:5901 -p 443:443 <image> \
      /usr/lib/JARVICE/tools/sbin/init /usr/local/bin/nimbix_desktop \
      <command-line>
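
As noted in the ENTRYPOINT caveat above, a wrapper script keeps the command you run explicit.  A minimal sketch, assuming a hypothetical application at /usr/local/bin/myapp (launch.sh is also a hypothetical name):

#!/bin/bash
# launch.sh - hypothetical wrapper script; COPY it into the image and mark it executable
# set up any environment your application needs, then run it explicitly
exec /usr/local/bin/myapp "$@"

You would then unit test it with:

docker run -h JARVICE <image> \
  /usr/lib/JARVICE/tools/sbin/init /usr/local/bin/launch.sh <command-line>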

Example PushToCompute Application: CUDA deviceQuery

For this example, we will deploy the NVIDIA CUDA deviceQuery application from the CUDA samples so that we can query GPUs in the Nimbix Cloud.  We will cover the entire workflow using sources in GitHub, a container repository in Docker Hub, and a runtime application in JARVICE.  Note that the GitHub account is optional – you can build and push containers directly to a registry instead.  The purpose of using GitHub in the example is to illustrate an end-to-end workflow integration.

Our example also assumes that you are familiar with the concept of Automated Builds on Docker Hub – again if you prefer to push directly to Docker Hub you can skip this, but it’s assumed in our sample workflow.

For this example we will use the nimbixdemo/devicequery repository in GitHub, and will perform automated builds in Docker Hub to a repository of the same name, nimbixdemo/devicequery.

Step 1: Creating the application in JARVICE

We’ll set up the workflow “right to left”, starting with the final target, which is the application in JARVICE.

After logging into the JARVICE portal, we’ll click the Apps tab at the top to enter the application management page:

[Screenshot: the Apps tab in the JARVICE portal]

Since we’ll be using a public repository on Docker Hub, we don’t need to log in to the registry.  Otherwise, we would just click the log in link at the top right and follow the instructions.

Next we’ll click the Create button and fill in the details – in this case we’ve also added an NVIDIA logo icon by clicking Load image and selecting it from our local computer.  We don’t have to give the JARVICE application the same name as the repository image (minus the namespace, which JARVICE does not permit because it is already implied by your user account), but it’s good practice:

[Screenshot: the Create dialog with the application details filled in]

After clicking Save we see our application in the checked out section.  We’ll check it in, as we are not going to modify the definition at this time (we’ll use the default for this tutorial).  After selecting the app and clicking the Check In button, we see the application checked in:

[Screenshot: the application shown as checked in]

We’ll select the app and click the Docker Pull button only to get the Web Hook to use.  We won’t complete the pull here, as it would fail since we haven’t pushed anything or even created any repositories.  But this way we can get the Web Hook that we’ll put into our Docker Hub account for the corresponding repository:

[Screenshot: the Docker Pull dialog showing the Web Hook link]

We’ll right-click the underlined link text and copy it to the clipboard.  That’s the Web Hook URL.  We can always GET that URL manually, or use the Docker Pull button in the JARVICE portal to pull the latest application image immediately.  But for this tutorial, and to illustrate the power of automation, we’ll use the Web Hook to trigger pulls into JARVICE when code is pushed to GitHub.
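
For example, the manual GET is a single curl call (substitute the actual URL copied from the portal):

curl "<web-hook-url>"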

Step 2: Creating an Automated Build in Docker Hub

Next we’ll create an Automated Build in Docker Hub.  Again, the assumption is that you’ve linked your GitHub account already as explained above.  If you prefer not to link GitHub and Docker Hub, you can just create a regular repository and docker push directly to it.  But for this tutorial we’ll link the two to illustrate the end-to-end workflow.  You can use either public or private repositories in each system, but remember that to use a private Docker registry repository you need to log in via the JARVICE portal’s link as explained above.

To create the automated build repository after linking your GitHub and Docker Hub accounts, use the Create Automated Build option in the Create menu:

[Screenshot: the Create Automated Build option in the Docker Hub Create menu]

We’ll follow the steps to select the nimbixdemo/devicequery repository (which we should already have created in GitHub ahead of time), and finally arrive at the create screen:

[Screenshot: the Docker Hub automated build create screen]

We will accept the default branch and tag behavior by clicking the Create button.  Next we’ll add the Web Hook we copied to the clipboard:

[Screenshot: adding the Web Hook to the Docker Hub repository]

By clicking Save, we’ve completed the entire setup for this application!  Next it’s time to build and push the bits themselves…

Step 3: The Application Source

This is the Dockerfile for the application that we will push to the nimbixdemo/devicequery repository on GitHub:

FROM nimbix/ubuntu-cuda:trusty

RUN make -C /usr/local/cuda/samples/1_Utilities/deviceQuery
RUN ln -s /usr/local/cuda/samples/1_Utilities/deviceQuery/deviceQuery /usr/bin

Notice that the nimbix/ubuntu-cuda base image already has CUDA installed, so all we need to do is build the program we want to make available.  For convenience we’re linking it to /usr/bin to avoid having to type the full path when running it.

In this particular case our local machine does not have an NVIDIA GPU.  If it did, we could unit test this by building and running the container with an expanded NVIDIA .run package mapped (or copied) into a container directory called /usr/lib/JARVICE/NVIDIA.  If the base image init program (as illustrated above) finds nvidia-installer in that path, it executes it to install the non-kernel portions of the NVIDIA runtime package.  It assumes that you are passing the appropriate devices into the container as well.
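
A rough sketch of that local unit test, assuming an NVIDIA .run driver package on the host (the version placeholder and device nodes are illustrative):

# extract, but do not install, the NVIDIA driver package on the host
sh NVIDIA-Linux-x86_64-<version>.run --extract-only

# map the extracted package into the container and pass the GPU devices through
docker run -h JARVICE \
  --device=/dev/nvidiactl --device=/dev/nvidia0 --device=/dev/nvidia-uvm \
  -v $PWD/NVIDIA-Linux-x86_64-<version>:/usr/lib/JARVICE/NVIDIA \
  nimbixdemo/devicequery \
  /usr/lib/JARVICE/tools/sbin/init deviceQuery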

To get this application into JARVICE, simply commit and push with git (to origin master in this case).  For example (the commit message is just an illustration):
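
git add Dockerfile
git commit -m "build CUDA deviceQuery sample"
git push origin master

Once you’ve done that, you can of course monitor the build in the Docker Hub: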

[Screenshot: the build status in Docker Hub]

Once this completes, it’s back to our inbox to see JARVICE in action.  We get an email when the Docker pull starts, and another when it completes:

[Screenshot: email notifications for the Docker pull]

Note that it may take a few minutes for each step to complete, including the emails themselves.  This is highly dependent on external services, such as GitHub, Docker Hub, and even the Internet at large.  Most PushToCompute events should take less than 10 minutes to complete (all the way from GitHub), or less than 5 minutes (if pushing directly to Docker Hub).

Running the Application

Once we receive confirmation from JARVICE that the Docker image pull is complete, we can run our application.  We’ll go back to the JARVICE portal Compute tab and scroll to the bottom to the Uncategorized category, which filters out other catalog applications.  New applications we create are in this category by default (we can edit the application definition later to make adjustments to the JSON if needed):

[Screenshot: the Uncategorized category in the Compute tab]

We can run the command we need either in batch or with the GUI, although it’s easier to run in batch because deviceQuery is not a GUI application (and would therefore need to be started in a graphical terminal).  So we’ll select the batch option, which brings us to the Task Builder:

[Screenshot: the Task Builder]

Notice that since it’s a GPU application, we selected a GPU machine type to run it on.  We also entered the deviceQuery command, which we linked into /usr/bin in the Dockerfile and which is the program we want to execute.  After starting the job, we can check the output from the dashboard and see that the command ran and performed the CUDA device query we were looking for:

[Screenshot: deviceQuery job output in the dashboard]

Subsequent GitHub pushes will perform the same process over again, which is how you would maintain your application.  You can manage multiple applications, and Nimbix can also offer one or more of your applications in the public catalog, for a price of your choosing.  By default PushToCompute applications are private to your account, but if you are interested in publishing one or more applications, please contact us for details.

This concludes the PushToCompute tutorial.