I think Peter's point is that if
- the window uses contiguous memory
*and*
- all tasks know how much memory was allocated by all other tasks in the
window
then it could/should be possible to get rid of MPI_Win_shared_query.

That is likely true if no task allocates zero bytes.
However, if a task allocates zero bytes, MPI_Win_allocate_shared can return a
null pointer, which makes using MPI_Win_shared_query mandatory.

In his example, task 0 allocates zero bytes, so he was expecting the
pointer returned on task 0 to point to the memory allocated by task 1.

If "may enable" should be read as "does enable", then returning a null
pointer can be seen as a bug.
If "may enable" can be read as "does not always enable", then returning a
null pointer is compliant with the standard.

I am clearly not good at reading/interpreting the standard, so using
MPI_Win_shared_query is my recommended way to get it to work.
(feel free to call it "bulletproof", "overkill", or even "right")
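
For reference, here is a minimal C sketch of that recommendation (my own example, not code from this thread; the per-rank size of 10 doubles is arbitrary): rank 0 contributes zero bytes, yet every rank can still obtain valid pointers to the other segments via MPI_Win_shared_query instead of relying on contiguity.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    MPI_Comm shmcomm;
    MPI_Win win;
    double *mybase;

    MPI_Init(&argc, &argv);
    /* shared windows require a communicator whose ranks share memory */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &shmcomm);
    MPI_Comm_rank(shmcomm, &rank);
    MPI_Comm_size(shmcomm, &nprocs);

    /* rank 0 contributes 0 bytes, every other rank 10 doubles */
    MPI_Aint mysize = (rank == 0) ? 0 : 10 * (MPI_Aint)sizeof(double);
    MPI_Win_allocate_shared(mysize, sizeof(double), MPI_INFO_NULL,
                            shmcomm, &mybase, &win);

    /* query each rank's segment; size == 0 marks an empty contribution */
    for (int r = 0; r < nprocs; r++) {
        MPI_Aint size;
        int disp_unit;
        double *base;
        MPI_Win_shared_query(win, r, &size, &disp_unit, &base);
        printf("rank %d sees rank %d: size %ld base %p\n",
               rank, r, (long)size, (void *)base);
    }

    /* rank = MPI_PROC_NULL returns the segment of the lowest rank that
     * specified size > 0, i.e. the start of the contiguous slab */
    MPI_Aint size0;
    int disp0;
    double *slab;
    MPI_Win_shared_query(win, MPI_PROC_NULL, &size0, &disp0, &slab);

    MPI_Win_free(&win);
    MPI_Comm_free(&shmcomm);
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and run with e.g. mpirun -n 4; the rank with a zero-byte segment still gets usable pointers from the queries, and the MPI_PROC_NULL query gives the start of the whole slab.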

Cheers,

Gilles

On Thursday, February 11, 2016, Jeff Hammond <jeff.scie...@gmail.com> wrote:

>
>
> On Wed, Feb 10, 2016 at 8:44 AM, Peter Wind <peter.w...@met.no> wrote:
>
>> I agree that in practice the best approach would be to use
>> Win_shared_query.
>>
>> Still I am confused by this part in the documentation:
>> "The allocated memory is contiguous across process ranks unless the info
>> key *alloc_shared_noncontig* is specified. Contiguous across process
>> ranks means that the first address in the memory segment of process i is
>> consecutive with the last address in the memory segment of process i - 1.
>> This may enable the user to calculate remote address offsets with local
>> information only."
>>
>> Isn't this an encouragement to use the pointer of Win_allocate_shared
>> directly?
>>
>>
> No, it is not.  Win_allocate_shared only gives you the pointer to the
> portion of the allocation that is owned by the calling process.  If you
> want to access the whole slab, call Win_shared_query(..,rank=0,..) and use
> the resulting baseptr.
>
> I attempted to modify your code to be more correct, but I don't know
> enough Fortran to get it right.  If you can parse C examples, I'll provide
> some of those.
>
> Jeff
>
>
>> Peter
>>
>> ------------------------------
>>
>> I don't know about bulletproof, but Win_shared_query is the *only* valid
>> way to get the addresses of memory in other processes associated with a
>> window.
>>
>> The default for Win_allocate_shared is contiguous memory, but it can and
>> likely will be mapped differently into each process, in which case only
>> relative offsets are transferable.
>>
>> Jeff
>>
>> On Wed, Feb 10, 2016 at 4:19 AM, Gilles Gouaillardet <
>> gilles.gouaillar...@gmail.com> wrote:
>>
>>> Peter,
>>>
>>> The bulletproof way is to use MPI_Win_shared_query after
>>> MPI_Win_allocate_shared.
>>> I do not know if current behavior is a bug or a feature...
>>>
>>> Cheers,
>>>
>>> Gilles
>>>
>>>
>>> On Wednesday, February 10, 2016, Peter Wind <peter.w...@met.no> wrote:
>>>
>>>> Hi,
>>>>
>>>> Under Fortran, MPI_Win_allocate_shared is called with a window size of
>>>> zero for some processes.
>>>> The output pointer is then not valid for these processes (null pointer).
>>>> Did I understand this wrongly? Shouldn't the memory be contiguous, so
>>>> that for a zero-sized window, the pointer points to the start of the
>>>> segment of the next rank?
>>>> The documentation explicitly specifies "size = 0 is valid".
>>>>
>>>> Attached is a small code where rank 0 allocates a window of size zero.
>>>> All the other ranks get valid pointers, except rank 0.
>>>>
>>>> Best regards,
>>>> Peter
>>>> _______________________________________________
>>>> users mailing list
>>>> us...@open-mpi.org
>>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>>> Link to this post:
>>>> http://www.open-mpi.org/community/lists/users/2016/02/28485.php
>>>>
>>>
>>> _______________________________________________
>>> users mailing list
>>> us...@open-mpi.org
>>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>>> Link to this post:
>>> http://www.open-mpi.org/community/lists/users/2016/02/28493.php
>>>
>>
>>
>>
>> --
>> Jeff Hammond
>> jeff.scie...@gmail.com
>> http://jeffhammond.github.io/
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/02/28496.php
>>
>>
>>
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2016/02/28497.php
>>
>
>
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>
