I would add that the present situation is bound to cause problems for some
users.

It is natural to divide an array into segments, each process working on its own
segment but also needing to read the adjacent ones.
MPI_Win_allocate_shared seems to be designed for exactly this.
It will work fine as long as no segment has size zero, and it can also be
expected that most testing is done with all segments larger than zero.
The documentation's statement that "size = 0 is valid" would also make people
confident that behaviour is consistent in that special case.
Then, long down the road of a particular code's development, some special case
will use a segment of size zero, and it will be hard to trace the resulting
error back to the MPI library.

Peter 


----- Original Message -----



Yes, that is what I meant. 

Enclosed is a C example.
The point is that the code is logically sensible for task 0, but because it
asks for a segment of size = 0, it only gets back a null pointer, which cannot
be used to access the shared parts.
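
(Peter's attachment is not reproduced in this archive; the following is a
minimal sketch of the failure he describes, not his actual code. Sizes and
names are illustrative.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    int *base = NULL;
    MPI_Comm node;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    /* shared windows require a communicator whose ranks share memory */
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);
    MPI_Comm_rank(node, &rank);

    /* task 0 asks for zero bytes, every other task for one int */
    MPI_Aint size = (rank == 0) ? 0 : (MPI_Aint)sizeof(int);
    MPI_Win_allocate_shared(size, sizeof(int), MPI_INFO_NULL,
                            node, &base, &win);

    /* One might expect task 0's base to point at task 1's segment,
     * since the allocation is contiguous by default; instead, base
     * may be a null pointer on task 0. */
    printf("rank %d: base = %p\n", rank, (void *)base);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}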

Peter 

----- Original Message -----

I think Peter's point is that if
- the window uses contiguous memory
*and*
- all tasks know how much memory was allocated by all the other tasks in the
window,
then it could/should be possible to get rid of MPI_Win_shared_query.

That is likely true if no task allocates zero bytes.
But if a task allocates zero bytes, MPI_Win_allocate_shared can return a null
pointer, which makes MPI_Win_shared_query mandatory.

In his example, task 0 allocates zero bytes, so he was expecting the pointer
returned on task 0 to point to the memory allocated by task 1.

If "may enable" should be read as "does enable", then returning a null pointer
can be seen as a bug.
If "may enable" can be read as "does not always enable", then returning a null
pointer is compliant with the standard.

I am clearly not good at reading/interpreting the standard, so using
MPI_Win_shared_query is my recommended way to get it to work.
(Feel free to call it "bulletproof", "overkill", or even "right".)
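
A minimal sketch of that recommendation (an illustration, not code from the
thread): passing MPI_PROC_NULL as the rank makes MPI_Win_shared_query return
the segment of the lowest rank that allocated a nonzero size, so even a task
with a zero-size segment gets a usable pointer to the start of the slab.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, disp_unit;
    MPI_Aint size;
    int *base = NULL, *slab = NULL;
    MPI_Comm node;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node);
    MPI_Comm_rank(node, &rank);

    MPI_Win_allocate_shared((rank == 0) ? 0 : (MPI_Aint)sizeof(int),
                            sizeof(int), MPI_INFO_NULL, node, &base, &win);

    /* MPI_PROC_NULL selects the lowest rank with size > 0, so this
     * yields a valid pointer even on the rank that allocated nothing */
    MPI_Win_shared_query(win, MPI_PROC_NULL, &size, &disp_unit, &slab);
    printf("rank %d: slab = %p\n", rank, (void *)slab);

    MPI_Win_free(&win);
    MPI_Comm_free(&node);
    MPI_Finalize();
    return 0;
}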

Cheers, 

Gilles 

On Thursday, February 11, 2016, Jeff Hammond < jeff.scie...@gmail.com > wrote: 




On Wed, Feb 10, 2016 at 8:44 AM, Peter Wind < peter.w...@met.no > wrote: 


I agree that in practice the best approach is to use Win_shared_query.

Still, I am confused by this part of the documentation:
"The allocated memory is contiguous across process ranks unless the info key 
alloc_shared_noncontig is specified. Contiguous across process ranks means that 
the first address in the memory segment of process i is consecutive with the 
last address in the memory segment of process i - 1. This may enable the user 
to calculate remote address offsets with local information only." 

Isn't this an encouragement to use the pointer from Win_allocate_shared directly?





No, it is not. Win_allocate_shared only gives you the pointer to the portion of 
the allocation that is owned by the calling process. If you want to access the 
whole slab, call Win_shared_query(..,rank=0,..) and use the resulting baseptr. 

I attempted to modify your code to be more correct, but I don't know enough 
Fortran to get it right. If you can parse C examples, I'll provide some of 
those. 
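
A possible C sketch of that pattern, pending Jeff's examples (the helper name
segment_base is ours, not from the thread):

#include <mpi.h>

/* Hypothetical helper: return the base address of `rank`'s segment in
 * a window created by MPI_Win_allocate_shared. */
static void *segment_base(MPI_Win win, int rank)
{
    MPI_Aint size;
    int disp_unit;
    void *base = NULL;

    /* the only portable way to address another process's segment */
    MPI_Win_shared_query(win, rank, &size, &disp_unit, &base);
    return base;
}

With the default contiguous layout, segment_base(win, 0) gives the start of
the whole slab; if rank 0 allocated zero bytes, querying MPI_PROC_NULL instead
returns the segment of the lowest rank with a nonzero size.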

Jeff 



Peter 




I don't know about bulletproof, but Win_shared_query is the *only* valid way to
get the addresses of memory in other processes associated with a window.

The default for Win_allocate_shared is contiguous memory, but the allocation
can, and likely will, be mapped differently into each process, in which case
only relative offsets are transferable.

Jeff 

On Wed, Feb 10, 2016 at 4:19 AM, Gilles Gouaillardet < 
gilles.gouaillar...@gmail.com > wrote: 

Peter, 

The bulletproof way is to use MPI_Win_shared_query after 
MPI_Win_allocate_shared. 
I do not know if the current behavior is a bug or a feature...

Cheers, 

Gilles 


On Wednesday, February 10, 2016, Peter Wind < peter.w...@met.no > wrote: 

Hi, 

In Fortran, MPI_Win_allocate_shared is called with a window size of zero for
some processes.
The output pointer is then not valid for these processes (null pointer).
Did I understand this wrongly? Shouldn't the pointers be contiguous, so that
for a zero-sized window the pointer points to the start of the next rank's
segment?
The documentation explicitly specifies "size = 0 is valid".

Attached is a small code where rank 0 allocates a window of size zero. All the
other ranks get valid pointers, except rank 0.

Best regards, 
Peter 




-- 
Jeff Hammond 
jeff.scie...@gmail.com 
http://jeffhammond.github.io/ 








-- 
Jeff Hammond 
jeff.scie...@gmail.com 
http://jeffhammond.github.io/ 







