Provided by: openmpi-doc_4.1.6-7ubuntu2_all

NAME

       MPI_Startall - Starts a collection of requests.

SYNTAX

C Syntax

       #include <mpi.h>
       int MPI_Startall(int count, MPI_Request array_of_requests[])

Fortran Syntax

       USE MPI
       ! or the older form: INCLUDE 'mpif.h'
       MPI_STARTALL(COUNT, ARRAY_OF_REQUESTS, IERROR)
            INTEGER   COUNT, ARRAY_OF_REQUESTS(*), IERROR

Fortran 2008 Syntax

       USE mpi_f08
       MPI_Startall(count, array_of_requests, ierror)
            INTEGER, INTENT(IN) :: count
            TYPE(MPI_Request), INTENT(INOUT) :: array_of_requests(count)
            INTEGER, OPTIONAL, INTENT(OUT) :: ierror

C++ Syntax

       #include <mpi.h>
       static void Prequest::Startall(int count, Prequest array_of_requests[])

INPUT PARAMETER

       count     List length (integer).

INPUT/OUTPUT PARAMETER

       array_of_requests
                 Array of requests (array of handles).

OUTPUT PARAMETER

       IERROR    Fortran only: Error status (integer).

DESCRIPTION

       Starts  all  communications associated with requests in array_of_requests. A call to  MPI_Startall(count,
       array_of_requests) has the same effect as calls to MPI_Start(&array_of_requests[i]), executed for i =  0,
       ..., count-1, in some arbitrary order.
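
       For example, the following two fragments are equivalent (a minimal sketch; reqs is  assumed  to  name  an
       array holding count persistent requests created earlier):

            MPI_Startall(count, reqs);

            for (int i = 0; i < count; i++)
                MPI_Start(&reqs[i]);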

       A  communication  started  with  a  call to MPI_Start or MPI_Startall is completed by a call to MPI_Wait,
       MPI_Test,  or  one  of  the  derived  functions  MPI_Waitany,  MPI_Testany,   MPI_Waitall,   MPI_Testall,
       MPI_Waitsome,  MPI_Testsome  (these  are  described  in  Section  3.7.5  of the MPI-1 Standard, "Multiple
       Completions"). The request becomes inactive after successful completion by such a call.  The  request  is
       not deallocated, and it can be activated anew by another MPI_Start or MPI_Startall call.
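
       For example (a sketch; req is assumed to be a persistent request created earlier):

            MPI_Start(&req);            /* request becomes active             */
            MPI_Wait(&req, &status);    /* completes the communication; req   */
                                        /* is now inactive, not deallocated   */
            MPI_Start(&req);            /* the same request can be reused     */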

       A  persistent  request  is  deallocated  by  a  call to MPI_Request_free (see Section 3.7.3  of the MPI-1
       Standard, "Communication Completion").

       The call to MPI_Request_free can occur at any point in the  program  after  the  persistent  request  was
       created. However, the request will be deallocated only after it becomes inactive. Active receive requests
       should  not  be  freed. Otherwise, it will not be possible to check that the receive has completed. It is
       preferable, in general, to free requests when they are inactive. If  this  rule  is  followed,  then  the
       persistent communication request functions will be invoked in a sequence of the form,

           Create (Start Complete)* Free

       where  *  indicates  zero  or  more  repetitions.  If  the  same  communication object is used in several
       concurrent threads, it is the user's responsibility to coordinate calls so that the correct  sequence  is
       obeyed.
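
       The following C program sketches this sequence for two ranks that exchange a buffer repeatedly (the  buf-
       fer size, tag, and iteration count are illustrative, and error checking is omitted):

            #include <mpi.h>

            int main(int argc, char *argv[])
            {
                int rank, peer, sendbuf[100], recvbuf[100];
                MPI_Request reqs[2];
                MPI_Status  stats[2];

                MPI_Init(&argc, &argv);
                MPI_Comm_rank(MPI_COMM_WORLD, &rank);
                peer = 1 - rank;               /* assumes exactly two ranks */

                /* Create: set up the persistent requests once */
                MPI_Send_init(sendbuf, 100, MPI_INT, peer, 0,
                              MPI_COMM_WORLD, &reqs[0]);
                MPI_Recv_init(recvbuf, 100, MPI_INT, peer, 0,
                              MPI_COMM_WORLD, &reqs[1]);

                /* (Start Complete)*: reuse the same requests each time */
                for (int iter = 0; iter < 10; iter++) {
                    MPI_Startall(2, reqs);
                    MPI_Waitall(2, reqs, stats);
                }

                /* Free: the requests are inactive here, so freeing is safe */
                MPI_Request_free(&reqs[0]);
                MPI_Request_free(&reqs[1]);

                MPI_Finalize();
                return 0;
            }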

       A  send  operation  initiated  with  MPI_Start can be matched with any receive operation and, likewise, a
       receive operation initiated with MPI_Start can receive messages generated by any send operation.

ERRORS

       Almost all MPI routines return an error value; C routines as  the  value  of  the  function  and  Fortran
       routines in the last argument. C++ functions do not return errors. If the default error handler is set to
       MPI::ERRORS_THROW_EXCEPTIONS,  then  on  error  the  C++  exception  mechanism  will  be used to throw an
       MPI::Exception object.

       Before the error value is returned, the current MPI error handler  is  called.  By  default,  this  error
       handler  aborts  the  MPI  job,  except  for  I/O  function errors. The error handler may be changed with
       MPI_Comm_set_errhandler; the predefined error handler MPI_ERRORS_RETURN may be used to cause error values
       to be returned. Note that MPI does not guarantee that an MPI program can continue past an error.

SEE ALSO

       MPI_Bsend_init
       MPI_Rsend_init
       MPI_Send_init
       MPI_Ssend_init
       MPI_Recv_init
       MPI_Start
       MPI_Request_free

4.1.6                                             Sep 30, 2023                                   MPI_Startall(3)