Thursday, August 8, 2013
Pushing the limits of simulation with MPI computing
While there are many ways to improve the performance of a workstation, there are always limits on what a single computer can do. When faced with a problem that would be impractical or impossible to solve on one computer, MPI computing offers a way forward.
MPI, which stands for Message Passing Interface, is a system that allows a computer cluster to act like a single supercomputer. In a simulation with MPI computing, the model is broken down into multiple simulation domains, and each computer in the cluster is sent one of these parts to solve. Unlike distributed computing, these calculations are not independent – after all, electromagnetic waves can pass freely from one domain to another. Domain decomposition is possible because the nodes of the cluster regularly exchange field data at the boundaries between simulation domains, so that the total field across the device can be calculated.
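The boundary exchange described above can be sketched in a few lines. The example below is a hypothetical illustration, not CST code: a 1D field is split into two subdomains, and before each update step the domains exchange their boundary ("ghost") cells – the step that, in a real MPI run, would be a message passed between cluster nodes. The payoff is that the decomposed update reproduces the single-domain result exactly.

```python
def step(field):
    """One explicit update on a 1D field with fixed end values
    (a stand-in for a real field-solver timestep)."""
    new = field[:]
    for i in range(1, len(field) - 1):
        new[i] = field[i] + 0.25 * (field[i - 1] - 2 * field[i] + field[i + 1])
    return new

def step_decomposed(left, right):
    """The same update with the field split into two subdomains.
    Each subdomain receives one ghost cell from its neighbour --
    the 'field data exchanged at the boundary' in an MPI simulation.
    Here both domains live in one process for clarity; in MPI each
    would run on its own node and the copies would be messages."""
    left_ext = left + [right[0]]      # ghost cell from the right domain
    right_ext = [left[-1]] + right    # ghost cell from the left domain
    # Each domain now updates independently using its ghost cell,
    # then drops the ghost cell before reassembly.
    return step(left_ext)[:-1], step(right_ext)[1:]

# The decomposed result matches the monolithic one exactly.
field = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0, 0.0]
new_left, new_right = step_decomposed(field[:3], field[3:])
assert new_left + new_right == step(field)
```

Without the ghost-cell exchange, the cells on either side of the split would be updated with stale or missing neighbour data, and the subdomain results would drift apart – which is why MPI nodes must communicate every step rather than solving their parts in isolation.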
In CST STUDIO SUITE®, MPI computing is currently supported by the transient solver, frequency domain solver, integral equation solver and wakefield solver. While these solvers are powerful, the memory and performance of a single computer limit the size of the models that can be simulated. With MPI computing, the model is broken down into smaller domains, removing these restrictions – if the cluster is large enough, models of arbitrary size can be simulated. When using the transient solver, MPI computing can even be combined with GPU computing; a cluster with multiple GPU cards overcomes the usual memory limits of a single GPU.
Customers sometimes ask whether they can use MPI computing on their ad-hoc clusters, using workstations or small enterprise servers connected by standard Ethernet. Unfortunately, this is not really practical – to use MPI computing effectively, it needs to be run on a dedicated supercomputer-type cluster with homogeneous nodes and a high-speed interconnect such as InfiniBand®. The nodes have to exchange very large amounts of data with each other constantly, and a slow connection will completely negate the benefits of MPI computing. An alternative for these users is cloud computing.
The final blog post in this series will explore cloud computing for HPC, and show how it makes the power of MPI computing available to users who don’t have the resources for a dedicated cluster.