Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread Tom Rosmond
The programmer responsible for this code has conceded the point, and we will be replacing all offending examples with the MPI_IN_PLACE solution. Thanks for the input. T. Rosmond

Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread Tim Prince
On 9/16/2010 9:58 AM, David Zhang wrote: It's compiler specific, I think. I've done this with OpenMPI with no problem; however, on another cluster with ifort I've gotten error messages about not using MPI_IN_PLACE. So I think if it compiles, it should work fine.

Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread Richard Treumann
Tom, you are depending on luck. The MPI Standard allows the implementation to assume that send and receive buffers are distinct unless MPI_IN_PLACE is used. Any MPI implementation may have more than one algorithm for a given MPI collective communication operation, and the policy for switching between them is implementation dependent, so a call that happens to work today may fail under a different algorithm.
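The mechanism the standard provides for this situation is MPI_IN_PLACE. As an illustrative sketch (not code from this thread; the names 'a', 'n', and 'ierr' are hypothetical), an in-place reduction reads each rank's contribution from, and writes the result back to, the same buffer, so no aliasing of distinct send and receive arguments is involved:

```fortran
! Hypothetical sketch of an in-place collective (assumes 'use mpi'
! or include 'mpif.h' in scope; 'a', 'n', 'ierr' are illustrative).
! Each rank's contribution is read from 'a' and the global sum is
! written back into 'a' on all ranks.
call mpi_allreduce(MPI_IN_PLACE, a, n, MPI_REAL, MPI_SUM, &
                   MPI_COMM_WORLD, ierr)
```

Because MPI_IN_PLACE tells the library explicitly that input and output share storage, the implementation is free to choose any internal algorithm without the aliasing assumption being violated.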

Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread Jeff Squyres
The description for MPI_GATHERV says (from http://www.mpi-forum.org/docs/mpi22-report/node95.htm#Node95): "The specification of counts, types, and displacements should not cause any location on the root to be written more than once. Such a call is erroneous." The "in place" option is specified by passing MPI_IN_PLACE as the value of sendbuf at the root; in that case, sendcount and sendtype are ignored, and the root's contribution to the gathered vector is assumed to be already in the correct place in the receive buffer.

Re: [OMPI users] send and receive buffer the same on root

2010-09-16 Thread David Zhang
It's compiler specific, I think. I've done this with OpenMPI with no problem; however, on another cluster with ifort I've gotten error messages about not using MPI_IN_PLACE. So I think if it compiles, it should work fine. On Thu, Sep 16, 2010 at 10:01 AM, Tom Rosmond wrote:

[OMPI users] send and receive buffer the same on root

2010-09-16 Thread Tom Rosmond
I am working with a Fortran 90 code that has many MPI calls like this:

call mpi_gatherv(x, nsize(rank+1), mpi_real, x, nsize, nstep, mpi_real, root, mpi_comm_world, mstat)

'x' is allocated on root to be large enough to hold the results of the gather, and the other arrays and parameters are defined correctly. Note that 'x' appears as both the send and the receive buffer on root; is that legal?
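As later posts in this thread conclude, the portable fix is for the root to pass MPI_IN_PLACE as its send buffer while the other ranks' calls stay unchanged. A sketch reusing the names from the post above ('x', 'nsize', 'nstep', 'rank', 'root', 'mstat' are taken from it; the surrounding declarations are assumed):

```fortran
if (rank == root) then
   ! Root's contribution is assumed to already sit in its slot of 'x';
   ! with MPI_IN_PLACE the sendcount and sendtype arguments are ignored.
   call mpi_gatherv(MPI_IN_PLACE, 0, mpi_real, x, nsize, nstep, &
                    mpi_real, root, mpi_comm_world, mstat)
else
   ! On non-root ranks the receive arguments are ignored, so passing
   ! 'x' there is harmless.
   call mpi_gatherv(x, nsize(rank+1), mpi_real, x, nsize, nstep, &
                    mpi_real, root, mpi_comm_world, mstat)
end if
```

This keeps the single-buffer layout the original code relied on, but states the aliasing explicitly, so any algorithm the MPI implementation selects remains correct.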