
SYNTAX
C Syntax
#include <mpi.h>
int MPI_Probe(int source, int tag, MPI_Comm comm, MPI_Status *status)
Fortran Syntax
USE MPI
! or the older form: INCLUDE 'mpif.h'
MPI_PROBE(SOURCE, TAG, COMM, STATUS, IERROR)
INTEGER SOURCE, TAG, COMM, STATUS(MPI_STATUS_SIZE), IERROR
Fortran 2008 Syntax
USE mpi_f08
MPI_Probe(source, tag, comm, status, ierror)
INTEGER, INTENT(IN) :: source, tag
TYPE(MPI_Comm), INTENT(IN) :: comm
TYPE(MPI_Status) :: status
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
INPUT PARAMETERS
• source: Source rank or MPI_ANY_SOURCE (integer).
• tag: Tag value or MPI_ANY_TAG (integer).
• comm: Communicator (handle).
OUTPUT PARAMETERS
• status: Status object (status).
• ierror: Fortran only: Error status (integer).
DESCRIPTION
The MPI_Probe and MPI_Iprobe operations allow checking of incoming messages, without actual receipt of
them. The user can then decide how to receive them, based on the information returned by the probe in the
status variable. For example, the user may allocate memory for the receive buffer, according to the
length of the probed message.
MPI_Probe behaves like MPI_Iprobe except that it is a blocking call that returns only after a matching
message has been found.
If your application does not need to examine the status field, you can save resources by using the
predefined constant MPI_STATUS_IGNORE as a special value for the status argument.
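As an illustration of the buffer-sizing pattern described above, the following minimal C sketch probes for a message, sizes the receive buffer from the returned status with MPI_Get_count, and only then receives. The function and its parameters (src, tag, comm) are hypothetical, not part of this page:

#include <mpi.h>
#include <stdlib.h>

/* Receive an MPI_CHAR message of unknown length. src, tag, and comm
   are hypothetical parameters supplied by the caller. */
void recv_unknown_length(int src, int tag, MPI_Comm comm)
{
    MPI_Status status;
    int count;

    /* Block until a matching message is available, without receiving it. */
    MPI_Probe(src, tag, comm, &status);

    /* Size the buffer from the probed message. */
    MPI_Get_count(&status, MPI_CHAR, &count);
    char *buf = malloc(count);

    /* Receive exactly the message that was probed, using the source
       and tag reported in the status. */
    MPI_Recv(buf, count, MPI_CHAR, status.MPI_SOURCE, status.MPI_TAG,
             comm, MPI_STATUS_IGNORE);
    /* ... use buf ... */
    free(buf);
}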
The semantics of MPI_Probe and MPI_Iprobe guarantee progress: If a call to MPI_Probe has been issued by a
process, and a send that matches the probe has been initiated by some process, then the call to MPI_Probe
will return, unless the message is received by another concurrent receive operation (that is executed by
another thread at the probing process). Similarly, if a process busy waits with MPI_Iprobe and a matching
message has been issued, then the call to MPI_Iprobe will eventually return flag = true unless the
message is received by another concurrent receive operation.
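The busy-wait case might look like the following C sketch, which overlaps local work with polling; do_local_work is a hypothetical application function:

int flag = 0;
MPI_Status status;

/* Poll for a message while doing local work. By the progress guarantee
   above, flag eventually becomes true once a matching send has been
   initiated (unless another thread receives the message first). */
while (!flag) {
    MPI_Iprobe(MPI_ANY_SOURCE, MPI_ANY_TAG, comm, &flag, &status);
    if (!flag)
        do_local_work(); /* hypothetical */
}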
Example 1: Use blocking probe to wait for an incoming message.
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank == 0) THEN
    CALL MPI_SEND(i, 1, MPI_INTEGER, 2, 0, comm, ierr)
ELSE IF (rank == 1) THEN
    CALL MPI_SEND(x, 1, MPI_REAL, 2, 0, comm, ierr)
ELSE ! rank == 2
    DO i = 1, 2
        CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
        IF (status(MPI_SOURCE) == 0) THEN
            CALL MPI_RECV(i, 1, MPI_INTEGER, 0, 0, comm, status, ierr)
        ELSE
            CALL MPI_RECV(x, 1, MPI_REAL, 1, 0, comm, status, ierr)
        END IF
    END DO
END IF
Each message is received with the right type.
Example 2: A program similar to the previous example, but with a problem.
CALL MPI_COMM_RANK(comm, rank, ierr)
IF (rank == 0) THEN
    CALL MPI_SEND(i, 1, MPI_INTEGER, 2, 0, comm, ierr)
ELSE IF (rank == 1) THEN
    CALL MPI_SEND(x, 1, MPI_REAL, 2, 0, comm, ierr)
ELSE ! rank == 2
    DO i = 1, 2
        CALL MPI_PROBE(MPI_ANY_SOURCE, 0, comm, status, ierr)
        IF (status(MPI_SOURCE) == 0) THEN
            CALL MPI_RECV(i, 1, MPI_INTEGER, MPI_ANY_SOURCE, 0, comm, status, ierr)
        ELSE
            CALL MPI_RECV(x, 1, MPI_REAL, MPI_ANY_SOURCE, 0, comm, status, ierr)
        END IF
    END DO
END IF
Example 2 differs from Example 1 only in that MPI_ANY_SOURCE is used as the source argument in the two receive calls. The program is now incorrect: the receive operation may receive a message that is distinct from the message probed by the preceding call to MPI_Probe.
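A common fix, shown here as a hedged C sketch rather than as part of this page's examples, is to receive from the specific source reported by the probe instead of MPI_ANY_SOURCE (sufficient in single-threaded programs):

MPI_Status status;
MPI_Probe(MPI_ANY_SOURCE, 0, comm, &status);

/* Receiving from status.MPI_SOURCE pins the receive to the message
   that was actually probed, restoring the behavior of Example 1. */
if (status.MPI_SOURCE == 0)
    MPI_Recv(&i, 1, MPI_INT, status.MPI_SOURCE, 0, comm, MPI_STATUS_IGNORE);
else
    MPI_Recv(&x, 1, MPI_FLOAT, status.MPI_SOURCE, 0, comm, MPI_STATUS_IGNORE);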
ERRORS
Almost all MPI routines return an error value; C routines return it as the function's return value, and Fortran routines return it in the last argument.
Before the error value is returned, the current MPI error handler associated with the communication
object (e.g., communicator, window, file) is called. If no communication object is associated with the
MPI call, then the call is considered attached to MPI_COMM_SELF and will call the associated MPI error
handler. When MPI_COMM_SELF is not initialized (i.e., before MPI_Init/MPI_Init_thread, after
MPI_Finalize, or when using the Sessions Model exclusively) the error raises the initial error handler.
The initial error handler can be changed by calling MPI_Comm_set_errhandler on MPI_COMM_SELF when using
the World model, or the mpi_initial_errhandler CLI argument to mpiexec or info key to MPI_Comm_spawn/MPI_Comm_spawn_multiple. If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error handler is called for all
other MPI functions.
Open MPI includes three predefined error handlers that can be used:
• MPI_ERRORS_ARE_FATAL Causes the program to abort all connected MPI processes.
• MPI_ERRORS_ABORT An error handler that can be invoked on a communicator, window, file, or session. When called on a communicator, it acts as if MPI_Abort was called on that communicator. If called on a window or file, it acts as if MPI_Abort was called on a communicator containing the group of processes in the corresponding window or file. If called on a session, it aborts only the local process.
• MPI_ERRORS_RETURN Returns an error code to the application.
MPI applications can also implement their own error handlers (a sketch follows the list below) by calling:
• MPI_Comm_create_errhandler then MPI_Comm_set_errhandler
• MPI_File_create_errhandler then MPI_File_set_errhandler
• MPI_Session_create_errhandler then MPI_Session_set_errhandler or at MPI_Session_init
• MPI_Win_create_errhandler then MPI_Win_set_errhandler
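As a hedged illustration of the first pair of calls, a custom communicator error handler might be created and installed as follows; the handler body and the install_handler wrapper are assumptions for this sketch, not part of this page:

#include <mpi.h>
#include <stdio.h>

/* Hypothetical handler: report the error, then return so the error
   code propagates back to the caller. */
static void my_errhandler(MPI_Comm *comm, int *errcode, ...)
{
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(*errcode, msg, &len);
    fprintf(stderr, "MPI error on communicator: %s\n", msg);
}

/* Call after MPI_Init. */
static void install_handler(void)
{
    MPI_Errhandler eh;
    MPI_Comm_create_errhandler(my_errhandler, &eh);
    MPI_Comm_set_errhandler(MPI_COMM_WORLD, eh);
    MPI_Errhandler_free(&eh); /* handler stays attached to the communicator */
}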
Note that MPI does not guarantee that an MPI program can continue past an error.
See the MPI man page for a full list of MPI error codes.
See the Error Handling section of the MPI-3.1 standard for more information.
Note that per the “Return Status” section in the “Point-to-Point Communication” chapter in the MPI
Standard, MPI errors on messages queried by MPI_Probe do not set the status.MPI_ERROR field in the
returned status. The error code is always passed to the back-end error handler and may be passed back to
the caller through the return value of MPI_Probe if the back-end error handler returns it. The
pre-defined MPI error handler MPI_ERRORS_RETURN exhibits this behavior, for example.
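For example, with MPI_ERRORS_RETURN installed on the communicator, a caller can test the return value of MPI_Probe directly. A minimal C sketch, assuming source, tag, and comm are defined by the caller:

MPI_Comm_set_errhandler(comm, MPI_ERRORS_RETURN);

MPI_Status status;
int rc = MPI_Probe(source, tag, comm, &status);
if (rc != MPI_SUCCESS) {
    /* Per the note above, status.MPI_ERROR is not set by MPI_Probe;
       the error code is the return value itself. */
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(rc, msg, &len);
    fprintf(stderr, "MPI_Probe failed: %s\n", msg);
}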
SEE ALSO:
• MPI_Iprobe
• MPI_Cancel
COPYRIGHT
2003-2025, The Open MPI Community