Nimbix supports a variety of interconnect options, ranging from our standard 1Gb/s Ethernet to 56Gb/s FDR InfiniBand. As with most things, one size does not fit all. Not all applications need or can benefit from InfiniBand, but many see a noticeable performance benefit, especially HPC applications that leverage parallel processing: climate research, molecular modeling, physical simulation, cryptanalysis, geophysical research, automotive and aerospace design, financial modeling, data mining, and more.

Our InfiniBand deployment delivers low latency, high bandwidth, and high message rates, along with Remote Direct Memory Access (RDMA) and transport and communications offloads that keep CPU overhead extremely low. It takes advantage of the world’s fastest interconnect, supporting up to 56Gb/s with application latency as low as 1 microsecond.

So what kind of difference can InfiniBand make? Latency and bandwidth are the two most common performance parameters used when comparing interconnects. Mellanox Technologies (a leading supplier of end-to-end InfiniBand and Ethernet interconnect solutions) has facilitated testing of its InfiniBand solutions through the Compute Cluster Center operated by the HPC Advisory Council.
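The figures cited below come from standardized point-to-point microbenchmarks (the Ohio State University suite is the usual tool). For readers who want to measure their own interconnect, a minimal MPI ping-pong sketch along the same lines might look like the following; the message sizes, iteration counts, and build/run commands are illustrative assumptions, not the exact methodology behind the numbers quoted here.

```c
/* pingpong.c - minimal MPI ping-pong sketch for measuring point-to-point
 * latency and bandwidth between two ranks.  Illustrative only; the cited
 * results come from the OSU micro-benchmarks, not from this code.
 * Build:  mpicc -O2 pingpong.c -o pingpong
 * Run:    mpirun -np 2 ./pingpong
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    const int lat_iters = 1000;      /* round trips for the latency test    */
    const int bw_iters  = 100;       /* one-way transfers for bandwidth     */
    const int small     = 8;         /* 8-byte message for latency          */
    const int large     = 1 << 22;   /* 4 MB message for bandwidth          */
    char *buf = malloc(large);

    /* Latency: half of the average round-trip time of a small message. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < lat_iters; i++) {
        if (rank == 0) {
            MPI_Send(buf, small, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, small, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, small, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, small, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double lat_us = (MPI_Wtime() - t0) / (2.0 * lat_iters) * 1e6;

    /* Bandwidth: time to stream large messages one way. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (int i = 0; i < bw_iters; i++) {
        if (rank == 0)
            MPI_Send(buf, large, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf, large, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }
    double bw_gbs = (double)large * bw_iters / (MPI_Wtime() - t0) / 1e9;

    if (rank == 0)
        printf("latency ~ %.2f us, bandwidth ~ %.2f GB/s\n", lat_us, bw_gbs);

    free(buf);
    MPI_Finalize();
    return 0;
}
```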

Here is a benchmark from Mellanox’s website that helps highlight the differences:


Mellanox 56Gb/s FDR IB: 6.8 GB/s bandwidth, 137 million msg/sec message rate
Intel 40Gb/s QDR IB: 3.2 GB/s bandwidth, 30 million msg/sec message rate
Intel 10GbE NetEffect NE020: 1.1 GB/s bandwidth, 1.1 million msg/sec message rate

Source: Mellanox Technologies testing; Ohio State University; Intel websites

InfiniBand availability in the cloud opens up new application opportunities for customers. A great real-world example is our recent participation in the Ubercloud HPC Experiment, where Nimbix worked with Simpson Strong-Tie, Simulia Dassault Systems, DataSwing Corporation, NICE Software and Beyond CAE to leverage our InfiniBand-enabled cloud infrastructure on NACC to accelerate heavy-duty ABAQUS structural analysis:

The job: Cast-in-Place Mechanical Anchor Concrete Anchorage Pullout Capacity Analysis
Materials: Steel & Concrete
Procedure: 3D Nonlinear Contact, Fracture & Damage Analysis
Number of Elements: 1,626,338
Number of DOF: 1,937,301
Run time:
Single 12-core system = 29 hours 03 minutes 41 seconds
InfiniBand-enabled 72-core parallel computing cluster = 05 hours 30 minutes 00 seconds
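
Back of the envelope, that works out to roughly a 5.3x speedup on 6x the cores, or about 88% parallel efficiency. A quick sketch of the arithmetic, using the times quoted above and the standard speedup and efficiency definitions:

```c
/* speedup.c - parallel speedup and efficiency for the ABAQUS run above,
 * relative to the single 12-core system.  Times are taken from the figures
 * quoted in this post; efficiency = speedup / core ratio is the standard
 * definition, not a Nimbix-specific metric.
 */
#include <stdio.h>

int main(void)
{
    double t_single  = 29 * 3600 + 3 * 60 + 41;   /* 29h 03m 41s on 12 cores */
    double t_cluster =  5 * 3600 + 30 * 60;       /* 05h 30m 00s on 72 cores */

    double speedup    = t_single / t_cluster;     /* ~5.3x                   */
    double core_ratio = 72.0 / 12.0;              /* 6x the cores            */
    double efficiency = speedup / core_ratio;     /* ~0.88                   */

    printf("speedup: %.2fx, parallel efficiency: %.0f%%\n",
           speedup, efficiency * 100.0);
    return 0;
}
```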

Certainly the performance increase is driven by the ability to bring more processing cores to bear on the problem, but the compute nodes must also be connected by the low-latency switched fabric that InfiniBand provides. Without it, this kind of application cannot be successfully scaled to take advantage of the additional compute horsepower. The availability of InfiniBand in a high performance compute cloud therefore opens up new application options for end users who wish to leverage available cloud resources.
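One way to see why is a toy strong-scaling model: hold the total compute fixed, split it across N cores, and add a communication term dominated by message latency. Every number in the sketch below is an illustrative assumption (the message counts, a notional 50-microsecond Ethernet/TCP latency versus the roughly 1-microsecond InfiniBand latency mentioned above); it is not a model of the ABAQUS job itself.

```c
/* scaling_model.c - toy model of how interconnect latency limits strong
 * scaling.  All inputs are illustrative assumptions, not measured values:
 * a fixed amount of compute split across N cores plus a communication
 * phase whose cost does not shrink as cores are added.
 */
#include <stdio.h>

int main(void)
{
    const double compute_s      = 3600.0;  /* assumed serial compute time (s) */
    const double msgs_per_step  = 1000.0;  /* assumed messages per iteration  */
    const double steps          = 1000.0;  /* assumed iterations              */

    const double lat_ib_s  = 1e-6;   /* ~1 us InfiniBand latency              */
    const double lat_eth_s = 50e-6;  /* assumed ~50 us Ethernet/TCP latency   */

    const double comm_ib  = steps * msgs_per_step * lat_ib_s;
    const double comm_eth = steps * msgs_per_step * lat_eth_s;

    printf("%6s %12s %12s\n", "cores", "IB speedup", "Eth speedup");
    for (int n = 12; n <= 96; n += 12) {
        double t_ib  = compute_s / n + comm_ib;   /* compute shrinks,        */
        double t_eth = compute_s / n + comm_eth;  /* latency cost does not   */
        printf("%6d %12.1f %12.1f\n", n, compute_s / t_ib, compute_s / t_eth);
    }
    return 0;
}
```

Under these assumptions the low-latency fabric keeps scaling nearly linearly as cores are added, while the higher-latency path flattens out quickly, which is the behavior described above.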

In summary, not every application needs InfiniBand, but, as with most things in this world, options are important. That is why Nimbix offers a range of interconnect options for both our NACC cluster and the custom turn-key clusters we deploy for our clients.