Compiling and running MPI programs

We only support OpenMPI, which is currently the best MPI implementation. The widely used MPICH2 implementation is difficult to maintain, and Debian/Ubuntu packages are not available. Development of the LAM/MPI implementation has stopped in favour of OpenMPI.
  
== Compiling ==
  
To compile MPI programs, it is recommended to use the wrapper commands mpicc, mpif77, and mpif90. By default these wrappers invoke the GNU compilers gcc and gfortran.
 
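As a quick illustration of the wrappers, a typical compile-and-run cycle looks like the sketch below; the file names hello.c and hello.f90, the output names, and the process count are only placeholders.

<pre>
# compile a C source file with the MPI wrapper
mpicc -o hello hello.c

# Fortran sources are compiled the same way
mpif90 -o hello_f hello.f90

# start the program with 4 MPI processes
mpirun -np 4 ./hello
</pre>
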
To use the wrappers with the Intel compilers instead, you have to define a few environment variables. Bash users can add the following lines to their .bashrc:
<pre>
export OMPI_FC='ifort'
export OMPI_F77='ifort'
export OMPI_CC='icc'
</pre>
 
Especially if you use icc, it is best to use the recent [[Intel_Compiler_Temp|Intel Compiler 11.0]].
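
If you are unsure which backend compiler a wrapper actually calls after setting these variables, the OpenMPI wrappers can print the command line they would execute instead of compiling anything:

<pre>
# show the full underlying compiler command
mpicc --showme

# show only the compile or link part
mpicc --showme:compile
mpicc --showme:link
</pre>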
 
== Infiniband ==
 
The 'itp'-, 'itp-big'-, 'dfg-xeon'-, 'iboga'-, 'dreama'- and 'barcelona'-nodes have an Infiniband network, which is used by default with our OpenMPI installation. Infiniband provides high bandwidth with low latency: it transports 20 Gbit/s at a latency of about 4 µs. In comparison, normal Gigabit Ethernet provides 1 Gbit/s with at least 30 µs latency. Latency is the time a data packet needs to travel from its source to its target, and it is the main drawback of using Gigabit Ethernet for MPI communication.
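
Normally no extra options are needed, since Infiniband is picked automatically. If you want to be sure that a job does not silently fall back to Ethernet, the transport can be requested explicitly; a sketch using OpenMPI's MCA parameters (process count and program name are placeholders):

<pre>
# allow only the Infiniband (openib) and process-loopback (self) transports;
# the job then aborts instead of falling back to Ethernet
mpirun --mca btl openib,self -np 16 ./hello
</pre>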
