MPI Tutorial

This is a very short MPI tutorial. It is a slightly modified version of an earlier tutorial and is strongly based on the Argonne National Laboratory MPI Tutorial. This short tutorial is not intended to give complete coverage of MPI; the idea is just to cover MPI's basic concepts.

For some good further examples see here.


What's MPI?

A standard for writing message-passing parallel programs. MPI is designed for writing data-parallel applications, i.e. applications whose tasks all run the same code but process different data.

Most MPI implementations are provided by MPP vendors. These implementations take special advantage of the hardware characteristics to boost performance. There is a free (and popular) implementation called MPICH. See also the Open MPI implementation, which is the one used in lectures.


Writing MPI programs

#include "mpi.h"
#include <stdio.h>

int main( argc, argv )
int argc;
char **argv;
{
MPI_Init( &argc, &argv );
printf( "Hello world\n" );
MPI_Finalize();
return 0;
}

Compiling and linking

For simple programs, compiler commands can be used directly. For large projects, it is best to use a standard Makefile.

The MPICH implementation provides the commands mpicc and mpif77 as well as Makefile examples in /usr/local/mpi/examples/Makefile.in

The commands

mpicc -o first first.c
mpif77 -o firstf firstf.f

may be used to build simple programs when using MPICH.
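
As a rough sketch of such a Makefile (the targets and layout are illustrative assumptions; recipe lines must be indented with a tab):

CC = mpicc
FC = mpif77

all: first firstf

first: first.c
	$(CC) -o first first.c

firstf: firstf.f
	$(FC) -o firstf firstf.f

clean:
	rm -f first firstf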


Running MPI programs

mpirun -np 2 hello

mpirun is not part of the standard, but some version of it is common with several MPI implementations.

In the MPICH implementation of MPI, mpirun by default reads a file called util/machines/machines.<arch> that describes which processors can be used (the default can be overridden with the -machinefile option). Tasks are started using a command defined by the user (rsh by default).
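
For example (the host names and the file name machines.sample are placeholders), a machine file simply lists the hosts to use, one per line, and can be passed to mpirun explicitly:

node01
node02
node03
node04

mpirun -np 4 -machinefile machines.sample hello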

The option -t shows the commands that mpirun would execute; you can use this to find out how mpirun starts programs on your system. The option -help shows all options to mpirun.

 


Finding out about the environment

Two of the first questions asked in a parallel program are: How many processes are there? and Who am I?

How many is answered with MPI_Comm_size and who am I is answered with MPI_Comm_rank. The rank is a number between zero and size-1.

MPI_Comm_size( MPI_COMM_WORLD, &numprocs ); 
MPI_Comm_rank( MPI_COMM_WORLD, &myid );

MPI_COMM_WORLD is a communicator. Communicators are used to separate the communication of different modules of the application. Communicators are essential for writing reusable libraries.
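
Putting these calls together, a minimal complete program (a sketch using the variable names from the snippet above) looks like this:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int numprocs, myid;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &numprocs );   /* how many processes are there? */
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );       /* who am I? */
    printf( "I am process %d of %d\n", myid, numprocs );
    MPI_Finalize();
    return 0;
}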


Point-to-Point Messages

The basic (blocking) send is:

MPI_Send( start, count, datatype, dest, tag, comm ) 
and the receive:
MPI_Recv( start, count, datatype, source, tag, comm, status ) 

The source, tag, and count of the message actually received can be retrieved from status.

datatype allows for the description of arbitrary data structures. Predefined datatypes are provided for the native C types (MPI_INT, MPI_LONG, MPI_DOUBLE, etc.).

There are many flavors of send and receive in MPI. Their slightly different semantics allow for performance optimizations that take advantage of special features of the execution platform.

Using MPI_Send and MPI_Recv gives the programmer an abstraction level very similar to TCP sockets.
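
As an illustrative sketch (the tag, the message value, and the choice of ranks are arbitrary; run it with at least two processes), process 0 sends one integer to process 1:

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int myid, value;
    MPI_Status status;

    MPI_Init( &argc, &argv );
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );

    if (myid == 0) {
        value = 42;                /* arbitrary payload */
        MPI_Send( &value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD );
    } else if (myid == 1) {
        MPI_Recv( &value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status );
        printf( "Process 1 received %d from process 0\n", value );
    }

    MPI_Finalize();
    return 0;
}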


Six Function MPI

MPI is very simple. These six functions allow you to write many programs:

MPI_Init
MPI_Finalize
MPI_Comm_size
MPI_Comm_rank
MPI_Send
MPI_Recv

Collective Communication

Unlike sockets, MPI also provides primitives for collective communication. Collective communication involves many tasks of the application. These primitives make very good sense for parallel computation, but not for other forms of distributed computation (client-server, for example).

Two simple collective operations:

MPI_Bcast spreads data from the root task to all tasks in the communicator comm.

MPI_Bcast( start, count, datatype, root, comm ) 

MPI_Reduce combines data from all processes in the communicator (using operation), and returns the result to the task root.

MPI_Reduce( start, result, count, datatype, operation, root, comm ) 
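
A minimal sketch that uses both operations (the broadcast value and the choice of summing the ranks are arbitrary):

#include "mpi.h"
#include <stdio.h>

int main( int argc, char **argv )
{
    int myid, numprocs, n, sum;

    MPI_Init( &argc, &argv );
    MPI_Comm_size( MPI_COMM_WORLD, &numprocs );
    MPI_Comm_rank( MPI_COMM_WORLD, &myid );

    if (myid == 0)
        n = 100;                                      /* value chosen by the root */
    MPI_Bcast( &n, 1, MPI_INT, 0, MPI_COMM_WORLD );   /* now every task has n */

    /* every task contributes its rank; the root receives the total */
    MPI_Reduce( &myid, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD );
    if (myid == 0)
        printf( "n = %d, sum of ranks = %d\n", n, sum );

    MPI_Finalize();
    return 0;
}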

Eight Function MPI

MPI is very simple. These eight functions allow you to write many programs:

MPI_Init
MPI_Finalize
MPI_Comm_size
MPI_Comm_rank
MPI_Send
MPI_Recv
MPI_Bcast
MPI_Reduce


C example: PI

#include "mpi.h" 
#include <math.h>
int main(argc,argv) 
int argc;
char *argv[];
{

int done = 0, n, myid, numprocs, i, rc;
double PI25DT = 3.141592653589793238462643; double mypi, pi, h, sum, x, a;
MPI_Init(&argc,&argv); 
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
while (!done){ 
if (myid == 0) {
printf("Enter the number of intervals: (0 quits) ");
scanf("%d",&n);
}
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
if (n == 0) break;

h = 1.0 / (double) n;
sum = 0.0;
for (i = myid + 1; i <= n; i += numprocs) {
x = h * ((double)i - 0.5);
sum += 4.0 / (1.0 + x*x);
}
mypi = h * sum;
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (myid == 0) 
printf("pi is approximately %.16f, Error is %.16f\n", pi, fabs(pi - PI25DT)); }

MPI_Finalize();

}
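
Assuming the program is saved as cpi.c (the file name is an assumption), it can be built and run with the commands shown earlier:

mpicc -o cpi cpi.c
mpirun -np 4 cpi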

Another Example

jacobi.c