Tom 

You are depending on luck. The MPI Standard allows the implementation to 
assume that send and recv buffers are distinct unless MPI_IN_PLACE is 
used.  Any MPI implementation may have more than one algorithm for a given 
MPI collective communication operation, and the policy for switching 
algorithms is not documented.

It is entirely possible that something like going from 32 to 64 processes 
or changing interconnects will cause a different algorithm to be used. 
Applying a patch could also cause the algorithm to be changed.

In theory one algorithm could let you get away with the violation while 
another trips on it, and a change you do not even realize you made could 
cause bad answers to show up. Perhaps some algorithm uses space in the 
receive buffer as scratch.

Standards compliant code is safer.

                      Dick 


Dick Treumann  -  MPI Team 
IBM Systems & Technology Group
Dept X2ZA / MS P963 -- 2455 South Road -- Poughkeepsie, NY 12601
Tele (845) 433-7846         Fax (845) 433-8363




From:
Tom Rosmond <rosm...@reachone.com>
To:
us...@open-mpi.org
Date:
09/16/2010 12:05 PM
Subject:
[OMPI users] send and receive buffer the same  on root
Sent by:
users-boun...@open-mpi.org



I am working with a Fortran 90 code with many MPI calls like this:

call mpi_gatherv(x, nsize(rank+1), mpi_real, &
     x, nsize, nstep, mpi_real, root, mpi_comm_world, mstat)

'x' is allocated on root to be large enough to hold the results of the
gather, other arrays and parameters are defined correctly, and the code
runs as it should.  However, I am concerned that having the same send
and receive buffer on root is a violation of the MPI standard.  Am I
correct?  I am aware of the MPI_IN_PLACE feature that can be used in
this situation, by defining it as the send buffer at root. 
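For concreteness, here is roughly what I understand that variant to look
like, reusing the names from the call above (an untested sketch, not code
from our application):

if (rank == root) then
   ! Root passes MPI_IN_PLACE as the send buffer; the send count and type
   ! are then ignored, and the root's own contribution is assumed to
   ! already sit in its place in x.
   call mpi_gatherv(MPI_IN_PLACE, 0, mpi_real, &
        x, nsize, nstep, mpi_real, root, mpi_comm_world, mstat)
else
   ! Non-root ranks call as before; the receive arguments are not
   ! significant on these ranks.
   call mpi_gatherv(x, nsize(rank+1), mpi_real, &
        x, nsize, nstep, mpi_real, root, mpi_comm_world, mstat)
end if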

The fact that the code as written seems to work on most systems we run on
(some with Open MPI, some with proprietary MPIs) indicates that in spite
of the standard, implementations allow it.  Is this correct, or are we
just lucky?

T. Rosmond


