Compiling and running MPI programs

We only support openMPI, currently the best MPI implementation. The widely used MPICH2 implementation is difficult to maintain, and Debian/Ubuntu packages are not available for it. Development of lamMPI has stopped in favour of openMPI.

Compiling

To compile MPI programs, we recommend using the wrapper commands mpicc, mpif77, and mpif90. By default these wrappers invoke the GNU compilers gcc and gfortran. To use the wrappers with the Intel compilers, you have to define some environment variables.

Bash users can add the following lines to their .bashrc:

export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
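
To check which compiler a wrapper will actually invoke, the openMPI wrappers accept the --showme option, which prints the underlying compiler command line without running it:

mpicc --showme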

Especially if you use icc, it is better to use the recent Intel Compiler 11.0.
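
As a quick test of the toolchain, the following minimal MPI program (a sketch; the file name hello.c and the process count below are just examples) can be compiled and started with the wrappers:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* rank of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}

Compile and run it with:

mpicc hello.c -o hello
mpirun -np 4 ./hello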

Infiniband

The latest nodes of the DFG and 'quantum' clusters have an Infiniband network. It is accessible through the queues 'dfg-ib' and 'quantum' and is used by default with our openMPI installation. Infiniband provides high bandwidth with low latency: it transports up to 20 Gbit/s with a latency of about 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.
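
To see or override which transport openMPI selects, the MCA parameter 'btl' can be set on the mpirun command line. The component names below ('openib' for Infiniband, 'tcp' for Ethernet) are what openMPI versions of this era usually provide; check the output of ompi_info for the components your installation actually ships:

mpirun --mca btl openib,self -np 8 ./hello   # force Infiniband
mpirun --mca btl tcp,self -np 8 ./hello      # force Gigabit Ethernet

The 'self' component is required in both cases so that a process can send messages to itself.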