On May 24, 2012, at 23:48, Dave Goodell wrote:

> On May 24, 2012, at 10:34 PM CDT, George Bosilca wrote:
> 
>> On May 24, 2012, at 23:18, Dave Goodell <good...@mcs.anl.gov> wrote:
>> 
>>> So I take back my prior "right".  Upon further inspection of the text and 
>>> the MPICH2 code I believe it to be true that the number of elements in 
>>> the recvcounts array must be equal to the size of the LOCAL group.
>> 
>> This is quite illogical, but it would not be the first time the standard is 
>> found lacking. So, if I understand you correctly, in the case of an 
>> intercommunicator a process doesn't know how much data it has to reduce, at 
>> least not until it receives the array of recvcounts from the remote group. 
>> Weird!
> 
> No, it knows because of the restriction that $\sum_{i=0}^{n-1} recvcounts[i]$ yields 
> the same sum in each group.

I should have read the entire paragraph of the standard … including the 
rationale. Indeed, the rationale describes exactly what you mentioned.

Apparently Figure 12 at the following [MPI Forum blessed] link is supposed 
to clarify any potential misunderstanding regarding reduce_scatter. Count 
how many elements are on each side of the intercommunicator ;)
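
To make the counting concrete, here is a minimal sketch, assuming MPI_COMM_WORLD 
is split into two equal halves bridged by MPI_Intercomm_create; the choice of 2 
elements per receiver and all buffer contents are arbitrary picks for the example:

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int wrank, wsize;
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);

    /* Split the world into two halves and bridge them with an
     * intercommunicator (assumes wsize is even, so both groups have
     * the same size). */
    int color = (wrank < wsize / 2) ? 0 : 1;
    MPI_Comm intracomm, intercomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, wrank, &intracomm);
    int remote_leader = (color == 0) ? wsize / 2 : 0;
    MPI_Intercomm_create(intracomm, 0, MPI_COMM_WORLD, remote_leader,
                         42, &intercomm);

    /* On an intercommunicator, MPI_Comm_size/rank report the LOCAL group. */
    int lrank, lsize;
    MPI_Comm_rank(intercomm, &lrank);
    MPI_Comm_size(intercomm, &lsize);

    /* One recvcounts entry per LOCAL process.  With 2 elements per
     * process and equal group sizes, sum(recvcounts) is the same on
     * both sides, so every process knows that sendbuf holds exactly
     * `total` elements: the data the REMOTE group will reduce. */
    int *recvcounts = malloc(lsize * sizeof(int));
    for (int i = 0; i < lsize; i++)
        recvcounts[i] = 2;
    int total = 2 * lsize;

    int *sendbuf = malloc(total * sizeof(int));
    for (int i = 0; i < total; i++)
        sendbuf[i] = wrank;
    int *recvbuf = malloc(recvcounts[lrank] * sizeof(int));

    /* Each process receives recvcounts[lrank] elements of the reduction
     * of the remote group's send buffers. */
    MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, MPI_INT, MPI_SUM,
                       intercomm);

    free(sendbuf); free(recvbuf); free(recvcounts);
    MPI_Comm_free(&intercomm);
    MPI_Comm_free(&intracomm);
    MPI_Finalize();
    return 0;
}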

  george.

> The way it's implemented in MPICH2, and the way that makes this make a lot 
> more sense to me, is that you first do intercommunicator reductions to 
> temporary buffers on rank 0 in each group.  Then rank 0 scatters within the 
> local group.  The way I had been thinking about it was to do a local 
> reduction followed by an intercomm scatter, but that isn't what the standard 
> is saying, AFAICS.
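
A rough sketch of that two-phase scheme follows; it is not the actual MPICH2 
code, and the `is_left_group` flag (e.g. the `color` from the sketch above) is 
only an assumed way to issue the two matching intercommunicator reduces in the 
same order on both sides:

#include <mpi.h>
#include <stdlib.h>

/* Rough sketch of the two-phase scheme quoted above; not the actual
 * MPICH2 implementation.  `is_left_group` is an assumed caller-provided
 * flag that tells the two groups apart. */
static int reduce_scatter_inter_sketch(const int *sendbuf, int *recvbuf,
                                       const int recvcounts[], int total,
                                       MPI_Op op, int is_left_group,
                                       MPI_Comm intercomm, MPI_Comm localcomm)
{
    int lrank, lsize;
    MPI_Comm_rank(localcomm, &lrank);
    MPI_Comm_size(localcomm, &lsize);

    int *tmp = (lrank == 0) ? malloc(total * sizeof(int)) : NULL;

    /* Phase 1: two intercommunicator reduces.  In each one, rank 0 of
     * one group is the root (MPI_ROOT there, MPI_PROC_NULL on the rest
     * of that side) and receives the reduction of the OTHER group's
     * send buffers into a temporary buffer. */
    if (is_left_group) {
        MPI_Reduce(NULL, tmp, total, MPI_INT, op,
                   (lrank == 0) ? MPI_ROOT : MPI_PROC_NULL, intercomm);
        MPI_Reduce(sendbuf, NULL, total, MPI_INT, op, 0, intercomm);
    } else {
        MPI_Reduce(sendbuf, NULL, total, MPI_INT, op, 0, intercomm);
        MPI_Reduce(NULL, tmp, total, MPI_INT, op,
                   (lrank == 0) ? MPI_ROOT : MPI_PROC_NULL, intercomm);
    }

    /* Phase 2: rank 0 scatters the reduced data within the local group
     * according to recvcounts. */
    int *displs = malloc(lsize * sizeof(int));
    displs[0] = 0;
    for (int i = 1; i < lsize; i++)
        displs[i] = displs[i - 1] + recvcounts[i - 1];

    MPI_Scatterv(tmp, recvcounts, displs, MPI_INT,
                 recvbuf, recvcounts[lrank], MPI_INT, 0, localcomm);

    free(displs);
    free(tmp);
    return MPI_SUCCESS;
}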

