Parallel Computing: How to Strike a Balance Between Speed, Cost, and Model Limitations


December 2, 2021

I am a FEM analyst, and while my solvers run in a parallelized environment, I am always searching for ways to optimize cost vs. timelines without compromising the quality of the results. I have 10+ years of experience designing and optimizing consumer and industrial products using multiphysics simulations and transitioned to running simulations on a cloud-based platform about five years ago due to increased project complexity. This article explores several widely used methods to balance computational modeling cost, speed, and accuracy using parallel computing.

Parallel computing enables data scientists and analysts to develop and solve complex models faster and with higher accuracy than ever before, accelerating innovation, shortening the product development cycle, and reducing total development costs.

Complex models and massive datasets come at a cost, specifically computation costs. It takes active project and resource management to ensure that the computational costs, model complexity, and desired results are balanced.

Below are a few methods that analysts use to maximize the efficiency of the cloud using parallel computing.

Prioritize cost vs. speed based on your project needs

Every project is unique, and every client, whether internal or external, has different priorities. Some analysts value cost over speed, while others are willing to pay a premium for faster results. Establishing your own or your client's needs before you commit to a project will help you select the right modeling approach. Most cloud providers charge based on computation time and resource use: the more time you spend on the cloud and the more processing units (CPUs or GPUs) you use, the higher the cost.
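As a rough illustration of this pricing model, a back-of-the-envelope estimate might look like the sketch below. The rates and the function name are hypothetical placeholders, not any provider's actual pricing:

```python
def estimate_cloud_cost(wall_hours, n_cpus, n_gpus=0,
                        cpu_rate=0.05, gpu_rate=1.50):
    """Rough cloud cost estimate: billed time x per-unit hourly rates.

    The hourly rates are illustrative placeholders, not real pricing.
    """
    return wall_hours * (n_cpus * cpu_rate + n_gpus * gpu_rate)

# Compare a 12-hour CPU-only run against a shorter GPU-assisted run:
cpu_only = estimate_cloud_cost(12, 64)        # 12 * 64 * 0.05 = 38.4
gpu_assisted = estimate_cloud_cost(3, 64, 4)  # 3 * (3.2 + 6.0) = 27.6
```

Under these assumed rates, the faster GPU run is also the cheaper one, which is exactly the kind of tradeoff worth establishing before committing to a project.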

Get to know your hardware

It pays to know the pros and cons of the hardware your computations will run on. Cloud computing offers a range of hardware options, including CPUs, GPUs, dedicated storage, and standard vault storage, to name a few. A model that demands raw computational throughput may run more efficiently on GPUs than on CPUs, whereas large models primarily require substantial storage space and RAM.

Choosing and using the right resources for a project will help you balance the parallel computing costs with the project time. 

Your familiarity with the cloud hardware and the software used to run the cloud, part of the comprehensive cloud architecture, is essential. Parallel computing requires a constant exchange of data packets between compute nodes. If the data is sliced into too many small chunks, the communication overhead can outweigh the benefits of parallelization.
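One way to reason about when parallelization stops paying off is a simple speedup model. The sketch below extends Amdahl's law with a linear per-worker communication penalty; the overhead term is an illustrative assumption, not a measured value:

```python
def speedup(parallel_fraction, n_workers, comm_overhead=0.0):
    """Amdahl's-law speedup with a simple linear communication penalty.

    parallel_fraction: share of the work that can run in parallel (0..1)
    comm_overhead: extra time per worker, as a fraction of the serial
                   runtime (a hypothetical model of inter-node traffic)
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers
                  + comm_overhead * n_workers)

# With a small per-worker overhead, adding cores eventually hurts:
for n in (2, 8, 16, 32):
    print(n, round(speedup(0.95, n, comm_overhead=0.005), 2))
# speedup peaks around 16 workers and then declines as overhead dominates
```

Under this model, 32 workers are slower than 16, which is the "too many buckets" effect in miniature.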

Master the software you use

Not all software is created equal. It is no secret that some of the less demanding tasks in finite element software, such as pre-processing and meshing, are not parallelized. The reason is that, just as FEA analysts must manage their resources, so must software companies. Most of an FEM software developer's effort goes into parallelizing solvers and customizing process workflow integration, and less into the pre- and post-processing tasks.

A strategy widely employed by FEA analysts is to run pre-processing and meshing tasks, which have low graphics requirements, as serial processes, and then switch to parallel computing when the solver is involved. Figure 1 below shows the node utilization of the Fluent mesher (left) vs. the Fluent solver (right) in a classic Icepak setting.

Figure 1: CPU utilization during meshing (left) and solving (right)

Use the power of parallel computing when it is needed and leave the menial tasks to local processing

Pre- and post-processing, especially pre-processing, require fewer cloud resources than the solver. Most finite element analysts will therefore prepare models locally and run them on the cloud. This saves time and money and optimizes license utilization.

Use batch processing whenever possible

Most modeling software today supports batch processing. Running in batch mode skips the graphical user interface and therefore saves the resources needed to run the GUI. A highly optimized cloud infrastructure also shuts the solver off automatically at the end of a batch run. In the long run, this can reduce computation costs by avoiding idle cores.
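As a sketch of what a batch launch looks like, the helper below builds a no-GUI command line for a solver. The executable name and flags here are hypothetical stand-ins; each package (Fluent, Abaqus, etc.) has its own batch syntax, so check your solver's documentation:

```python
def batch_command(solver, input_file, n_procs):
    """Build a no-GUI batch invocation (flag names are hypothetical)."""
    return [solver, "-batch", "-nogui",
            "-np", str(n_procs), "-input", input_file]

cmd = batch_command("mysolver", "led_bulb.cas", 64)
# The list can then be handed to the scheduler or run directly, e.g.:
# subprocess.run(cmd, check=True)
```

Because no GUI session is held open, the job can terminate, and stop billing, the moment the solver finishes.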

The velocity plot of natural convection in an LED bulb, CFD analysis in Icepak

Know before you solve

Estimate model complexity and assign adequate resources based on your estimate. 

The number of cores and the hardware used to run this simulation in batch mode were optimized for a large model (more than 30 million nodes) with slow, under-relaxed convergence.
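A sizing estimate like that can be encoded as a quick sketch. The per-node memory figure and the nodes-per-core rule below are illustrative assumptions, not vendor guidance; always check your solver's documentation before provisioning:

```python
def size_job(n_nodes_millions, kb_per_node=1.0, cores_max=128,
             nodes_per_core_millions=0.25):
    """Very rough resource estimate for an implicit FEM/CFD run.

    kb_per_node and nodes_per_core_millions are illustrative rules
    of thumb, not vendor recommendations.
    """
    ram_gb = n_nodes_millions * 1e6 * kb_per_node / 1e6  # KB -> GB
    cores = min(cores_max,
                max(1, round(n_nodes_millions / nodes_per_core_millions)))
    return {"ram_gb": ram_gb, "cores": cores}

# The 30-million-node model from the text, under these assumptions:
print(size_job(30))  # {'ram_gb': 30.0, 'cores': 120}
```

Even a crude estimate like this, refined against past runs, beats guessing a core count after the job is already queued.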

An essential criterion for selecting a cloud service provider should be the choice of solvers that the cloud provider offers. The Nimbix Cloud provides access to various solvers, including Star-CCM+, Ansys, Abaqus, COMSOL, Keysight, M-Star, OpenCAD, OpenFOAM, and TensorFlow, to name a few. Having access to this wide range of commercial and open-source software in the HyperHub Application Marketplace is why I use the Nimbix Cloud.

Develop the right metrics and track them

As with any project or process, having the right metrics in place, such as CPU utilization, license utilization, and dead-time, may mean the difference between effective and ineffective cloud utilization. Identifying trends, such as periods of high usage or spikes in license or cloud costs, allows the analyst and project manager to optimize process flow and reduce costs.
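As an illustration, utilization and dead-time metrics can be derived from simple per-job records. The record format and field names below are hypothetical; real schedulers expose similar numbers in their accounting logs:

```python
def utilization_metrics(jobs):
    """Aggregate core-hour utilization from per-job records.

    Each job is a dict with hypothetical fields:
      wall_h - billed wall-clock hours
      busy_h - hours the cores were actually solving
      cores  - cores reserved for the job
    """
    reserved = sum(j["wall_h"] * j["cores"] for j in jobs)
    used = sum(j["busy_h"] * j["cores"] for j in jobs)
    return {
        "core_hours_reserved": reserved,
        "utilization": used / reserved,
        "dead_time_core_h": reserved - used,
    }

jobs = [
    {"wall_h": 10, "busy_h": 9, "cores": 64},
    {"wall_h": 4, "busy_h": 1, "cores": 128},  # mostly idle: a red flag
]
m = utilization_metrics(jobs)  # ~61% utilization, 448 dead core-hours
```

Tracked over weeks, a metric like this makes it easy to spot which workflows are burning reserved cores without solving anything.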

Focus on delivering the right results the right way without compromise. It is tempting to try to please the customer, whether in terms of time or cost, but as an FEA analyst, you should focus on the accuracy of your results. 

There are many levers that data analysts can pull to balance cloud computing costs while delivering the right results promptly. Knowing the tradeoff between price and performance will help you use the many benefits parallel processing offers while keeping the computational costs to a minimum. 
