Help with ANSYS HPC Workflows in the Nimbix Cloud

In this article we will show you how to get started with HPC Workflows using ANSYS in the Nimbix Cloud. ANSYS and Nimbix have deployed the ANSYS software portfolio on the Nimbix cloud platform, offering a high-performance, low-cost cloud solution for ANSYS applications. This article reviews where your user data is stored in the Nimbix Cloud and how to configure your HPC Workflows for various ANSYS applications.


Electronics Desktop

For advanced HPC configuration for Electronics Desktop, refer to the manuals on the ANSYS Customer Portal.

To configure HPC with the RSM Manager in Electronics Desktop, follow the steps below; you will need to repeat them for each new job session that you run. Buttons are highlighted in the image and labeled with the corresponding step numbers.

  1. Open the HPC configuration menu.
  2. Select your Design Type, such as HFSS, from the drop-down.
  3. Click “Add” to open the panel to create a new configuration.
  4. Name this configuration “JARVICE”. Note: If JARVICE exists from a previous session, you should delete it and re-create it, as the MACHINES.txt file changes with each job.
  5. Delete any existing hostnames from the list, then click “Import Machines from File.”
  6. Use the file /home/nimbix/MACHINES.txt, which contains the hostnames and number of cores for your session (a quick way to inspect this file is sketched after this list). Then click “OK” to close out of these menus.
  7. Finally, from the drop-down, select “JARVICE” as your active configuration.
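
If you want to sanity-check the machine list before importing it, a short script can help. This is only a sketch: the exact layout of MACHINES.txt varies per job, and the “hostname:cores” format assumed below is illustrative, not guaranteed.

    # Hypothetical sanity check for the per-job machine list.
    # Assumes each line of MACHINES.txt looks like "hostname:cores";
    # adjust the parsing if your file uses a different layout.
    from pathlib import Path

    machines_file = Path("/home/nimbix/MACHINES.txt")

    total_cores = 0
    for line in machines_file.read_text().splitlines():
        line = line.strip()
        if not line:
            continue
        host, _, cores = line.partition(":")
        cores = int(cores) if cores else 1
        total_cores += cores
        print(f"host={host} cores={cores}")

    print(f"total cores available to this job: {total_cores}")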

You may now right-click your project in the left project navigation panel and click “Analyze All”, or, if you prefer, choose “Submit Job” and import the JARVICE configuration to submit your job to RSM.

Icepak

To configure your HPC Workflow for Icepak in interactive mode, open Solve->Settings->Parallel. If you have more than one node, you must select Network Parallel and load the cores file, /etc/JARVICE/cores. Select Yes when asked whether you want to overwrite the file. You may also want to set an Auto-save Interval.
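
Before loading it, you can preview the cores file to confirm the hosts Icepak will use. A minimal sketch, assuming /etc/JARVICE/cores lists one hostname per line, repeated once per core (a common MPI machinefile convention, not something this article guarantees):

    # Hedged sketch: summarize the JARVICE cores file before loading it
    # into Icepak's Network Parallel settings. Assumes one hostname
    # entry per core, in MPI machinefile style.
    from collections import Counter
    from pathlib import Path

    entries = Path("/etc/JARVICE/cores").read_text().split()
    per_host = Counter(entries)

    for host, cores in per_host.items():
        print(f"{host}: {cores} core(s)")
    print(f"{len(per_host)} node(s), {sum(per_host.values())} core(s) total")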

CFX

CFX is available on the Nimbix Cloud in both graphical and batch modes. It is recommended to perform pre- and post-processing in graphical mode and to run long simulations in batch mode. For simulations run in batch mode, no further HPC configuration is needed; it is handled automatically by JARVICE, the software powering the Nimbix Cloud.

If your workflow does not permit batch operations, you can configure HPC for the CFX Solver from the graphical interface. We have already configured your user's environment with a list of hosts, but you will have to manually add those hosts to your HPC configuration.

The steps to manually configure HPC for your run definition are as follows (a scripted batch equivalent is sketched after the list):

  1. Open File->Define Run.
  2. Select Run Mode: Intel MPI Distributed Parallel.
  3. Clear the existing host(s) from the host list.
  4. Press the Insert Host button, which will display a list of available hosts for your HPC environment.
  5. Select all of the hosts listed.
  6. Update the number of partitions to 16 to use one core per partition.
  7. Configure the working directory to be in a project directory stored in your /data or /home/nimbix/data directory. This storage space is shared across all nodes.
  8. Start Run to begin your HPC analysis.
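
If batch operation does become possible, the same distributed setup can be expressed on the command line. The sketch below is illustrative only: the hostnames, the run.def file, and the project path under /data are hypothetical, and you should verify the flags against cfx5solve -help for your release.

    # Hedged sketch of an equivalent distributed batch run via cfx5solve.
    # Hostnames, definition file, and project path are hypothetical.
    import subprocess

    hosts = ["node0", "node1"]   # hypothetical hostnames for this job
    cores_per_host = 16          # one core per partition, as in step 6

    # "host*N" requests N partitions per host, e.g. "node0*16,node1*16".
    par_dist = ",".join(f"{h}*{cores_per_host}" for h in hosts)

    subprocess.run(
        [
            "cfx5solve",
            "-def", "/data/my_project/run.def",   # working files on shared /data
            "-par-dist", par_dist,
            "-start-method", "Intel MPI Distributed Parallel",
        ],
        check=True,
    )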

Fluent

Screenshot of Fluent HPC in the Nimbix Cloud
Fluent is automatically configured in the Nimbix Cloud to use the number of nodes you select in the Nimbix Task Builder, in both graphical and non-graphical modes. It also uses the low-latency InfiniBand interconnect by default. Simply run your simulation as usual to take advantage of reconfigurable HPC in the cloud.
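
Because the parallel setup is applied for you, a non-graphical run needs nothing beyond Fluent's usual batch options. A minimal sketch, assuming a 3-D double-precision case, 32 cores, and a journal file named run.jou (all hypothetical):

    # Minimal batch-mode Fluent launch sketch. The solver mode (3ddp),
    # core count, and journal file name are illustrative assumptions;
    # node selection and the InfiniBand interconnect are already
    # handled by JARVICE.
    import subprocess

    ncores = 32  # hypothetical: cores across the nodes you selected

    subprocess.run(
        ["fluent", "3ddp",
         "-g",              # run without the GUI
         f"-t{ncores}",     # number of parallel solver processes
         "-i", "run.jou"],  # journal file driving the simulation
        check=True,
    )
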
Application Guides

For more information on tuning your applications for HPC use (memory consumption, cores, tasks, etc.), refer to the ANSYS Customer Portal.


If you have any questions, please contact our sales team, or email our support team for technical assistance. Our team will assist with standard and customized workflows to better serve your simulation needs.
