When an MPI process starts, it receives a rank, or node number,
that identifies it uniquely. Most MPI functions that communicate
data among processes also require a communicator argument. The
MPI communicator specifies the group of processes inside which a
communication occurs. Initially there is a single communicator,
MPI_COMM_WORLD, which contains all of the processes involved in
the computation. The MPI_COMM_WORLD communicator is sufficient
for the examples presented in this document.
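As a minimal sketch (not part of the original examples, and
assuming a standard MPI installation compiled with mpicc), a C
program typically initializes MPI, queries its rank and the size
of MPI_COMM_WORLD, and shuts MPI down before exiting:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, size;

    /* Initialize the MPI environment before any other MPI call. */
    MPI_Init(&argc, &argv);

    /* Each process learns its rank and the total process count
       within the MPI_COMM_WORLD communicator. */
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Process %d of %d started\n", myid, size);

    /* Clean up the MPI environment before exiting. */
    MPI_Finalize();
    return 0;
}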
MPI supports both point-to-point and collective communication
operations. The most commonly used point-to-point operations are
MPI_Send() and MPI_Recv(), which send data to a single
destination process and receive data from a single source
process, respectively. The following Fortran code segment
illustrates how two processes can communicate with point-to-point
operations.
include 'mpif.h'
integer myid, ierr, dest, src, sendDataInt, newInt
integer tagValue, status(MPI_STATUS_SIZE)

tagValue = 99
sendDataInt = 42   ! sample value to send
call MPI_Comm_rank(MPI_COMM_WORLD, myid, ierr)
if (myid .eq. 0) then
   ! Process 0 sends one integer to process 1.
   dest = 1
   call MPI_Send(sendDataInt, 1, MPI_INTEGER, dest, tagValue, &
                 MPI_COMM_WORLD, ierr)
else if (myid .eq. 1) then
   ! Process 1 receives the integer from process 0.
   src = 0
   call MPI_Recv(newInt, 1, MPI_INTEGER, src, tagValue, &
                 MPI_COMM_WORLD, status, ierr)
endif
This code segment instructs process 0 to send an integer,
sendDataInt, to process 1. Process 1 receives the integer
in the variable newInt. Each process first determines its own
rank, myid, by calling MPI_Comm_rank().
The MPI_Send() function takes the following arguments: the
initial address of the send buffer (sendDataInt), the number of
elements to send (1), the datatype of each element
(MPI_INTEGER), the rank of the destination process
(dest), a message tag to distinguish this type of message
(tagValue, here 99), a communicator that
specifies a group of processes inside which the communication occurs
(MPI_COMM_WORLD), and a parameter to pass back the return code
of the MPI_Send() call (ierr).
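For reference, the corresponding C binding takes the same
arguments except for ierr; in C, the return code is the
function's return value:

int MPI_Send(void *buf, int count, MPI_Datatype datatype,
             int dest, int tag, MPI_Comm comm);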
The MPI_Recv() function takes similar arguments: the initial
address of the receive buffer (newInt), the number of elements
to receive (1), the datatype of each element
(MPI_INTEGER), the rank of the sending process (src), a
tag value to identify the type of message desired (tagValue,
here 99), a communicator to specify a group of processes
inside which the communication occurs (MPI_COMM_WORLD), a
parameter to pass back additional information about the message
(status), and a parameter to pass back the return code of the
MPI_Recv() call (ierr).
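The same exchange can also be written in C. The following sketch
is an illustrative translation of the Fortran segment above (not
from the original text); it adds the MPI_Init() and
MPI_Finalize() calls that a complete program needs, and prints
the received value:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int myid, dest, src;
    int sendDataInt = 42;   /* sample value to send */
    int newInt = 0;
    int tagValue = 99;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        /* Process 0 sends one integer to process 1. */
        dest = 1;
        MPI_Send(&sendDataInt, 1, MPI_INT, dest, tagValue,
                 MPI_COMM_WORLD);
    } else if (myid == 1) {
        /* Process 1 receives the integer from process 0. */
        src = 0;
        MPI_Recv(&newInt, 1, MPI_INT, src, tagValue,
                 MPI_COMM_WORLD, &status);
        printf("Process 1 received %d\n", newInt);
    }

    MPI_Finalize();
    return 0;
}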
Collective operations are used for actions such as broadcasting
data between nodes, synchronizing nodes, performing global
reductions (e.g. max and sum), and scattering/gathering data
among nodes. The following C code shows how a set of processes
can perform a scatter operation. The scatter operation takes an
array of a given datatype and distributes its elements across a
group of processes by sending each process in the group a subset
of the array. The subsets received by each process are equally
sized and disjoint from each other.
#include <math.h>
#include <mpi.h>

#define MAX_NUM_PROCESSES 128  /* assumed upper bound on processes */

int i, myid, size, root;
double myDouble, theDoubles[MAX_NUM_PROCESSES];

MPI_Comm_rank(MPI_COMM_WORLD, &myid);
MPI_Comm_size(MPI_COMM_WORLD, &size);
root = 0;
if (myid == root) {
    /* Only the root fills the array to be scattered. */
    for (i = 0; i < size; i++) theDoubles[i] = sqrt((double) i);
}
/* Every process, including the root, receives one element. */
MPI_Scatter(theDoubles, 1, MPI_DOUBLE, &myDouble, 1,
            MPI_DOUBLE, root, MPI_COMM_WORLD);
In this example, all nodes call MPI_Comm_rank() and
MPI_Comm_size() to determine their process rank and the size of
the MPI_COMM_WORLD communication group. Next, process 0 is
designated as the root node. The root node fills in the values of array
theDoubles[]. Finally, all processes call
MPI_Scatter(). The call to MPI_Scatter() takes eight
arguments: the address of the send buffer (theDoubles), the
number of elements sent to each process (1), the datatype of
the send buffer elements (MPI_DOUBLE), the address of each
process's receive buffer (&myDouble), the number of elements that
the receive buffer contains (1), the datatype of the receive buffer
elements (MPI_DOUBLE), the rank of the root process
(root, here 0), and the communicator
(MPI_COMM_WORLD). In this example, when the MPI_Scatter() call
completes, the myDouble variable for process i will be
set to theDoubles[i], for each i in the range [0,
size-1].
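A global reduction, mentioned above alongside scatter/gather,
combines one value from every process with an operation such as a
sum or a maximum. The following segment (not from the original
text) continues the scatter example, reusing myDouble, myid, and
root, and assumes <stdio.h> is included; it sums the scattered
values back onto the root with MPI_Reduce():

double globalSum;

/* Combine each process's myDouble into a single sum that is
   delivered only to the root process. */
MPI_Reduce(&myDouble, &globalSum, 1, MPI_DOUBLE, MPI_SUM,
           root, MPI_COMM_WORLD);

if (myid == root)
    printf("sum of scattered values = %f\n", globalSum);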