On May 24, 2012, at 11:22 , Jeff Squyres wrote:

> On May 24, 2012, at 11:10 AM, Lisandro Dalcin wrote:
> 
>>> So I checked them all, and I found SCATTERV, GATHERV, and REDUCE_SCATTER 
>>> all had the issue.  Now fixed on the trunk, and will be in 1.6.1.
>> 
>> Please be careful with REDUCE_SCATTER[_BLOCK] . My understanding of
>> the MPI standard is that the length of the recvcounts array is the
>> local group size
>> (http://www.mpi-forum.org/docs/mpi22-report/node113.htm#Node113)
> 
> 
> I read that this morning and it made my head hurt.
> 
> I read it to be: reduce the data in the local group, scatter the results to 
> the remote group.
> 
> As such, the reduce COUNT is sum(recvcounts), and is used for the reduction 
> in the local group.  Then use recvcounts to scatter it to the remote group.
> 
> …right?

Right, you reduce locally but you scatter remotely. As such, the size of the 
recvcounts array is the remote group size. Since in the local group you do a 
reduce (where every process contributes the same amount of data), you only need 
a total count, which in this case is the sum of all recvcounts. This requirement 
is enforced by the fact that the input buffer has size sum(recvcounts), which 
only makes sense if you know the remote group's receive counts.

I don't see much difference from the other collectives. The generic behavior is 
that you apply the operation on the local group, but the result is moved into 
the remote group.
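
To make the sizes concrete, here is a minimal sketch (not from the thread, and
untested) following the reading above: recvcounts holds one entry per
remote-group process, the send buffer holds sum(recvcounts) elements for the
local reduction, and the reduced vector is scattered to the remote group. How
the intercommunicator was created is assumed, and the uniform count of 2 per
slot is arbitrary; Lisandro's reading above would size recvcounts by the local
group instead.

#include <mpi.h>
#include <stdlib.h>

void reduce_scatter_sketch(MPI_Comm intercomm)
{
    int remote_size;
    MPI_Comm_remote_size(intercomm, &remote_size);

    /* One receive count per process in the REMOTE group (reading above). */
    int *recvcounts = malloc(remote_size * sizeof(int));
    int total = 0;
    for (int i = 0; i < remote_size; i++) {
        recvcounts[i] = 2;          /* same count for every remote process */
        total += recvcounts[i];
    }

    /* The reduction in the local group runs over sum(recvcounts) elements. */
    int *sendbuf = calloc(total, sizeof(int));

    /* Each remote process receives its recvcounts[i] elements; with a
     * uniform count, this process likewise expects 2 elements back from
     * the remote group's reduction. */
    int *recvbuf = calloc(2, sizeof(int));

    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_INT,
                       MPI_SUM, intercomm);

    free(recvbuf);
    free(sendbuf);
    free(recvcounts);
}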

  george.



> 
> -- 
> Jeff Squyres
> jsquy...@cisco.com
> For corporate legal information go to: 
> http://www.cisco.com/web/about/doing_business/legal/cri/
> 

