No clarification necessary. The standard is not a user guide. The semantics
are clear from what is defined. Users who don't like the interface can write
a library that does what they want.
Jeff
On Thursday, February 11, 2016, Nathan Hjelm wrote:
>
> I should also say that I think this
Indeed, I ran with MPICH. But I like Open MPI's choice better here, which is
why I said that I would explicitly set the pointer to NULL when the size is
zero.
Jeff
On Thursday, February 11, 2016, Nathan Hjelm wrote:
>
> Jeff probably ran with MPICH. Open MPI's are consistent with
I should also say that I think this is something that may be worth
clarifying in the standard. Either semantic is fine with me but there is
no reason to change the behavior if it does not violate the standard.
-Nathan
On Thu, Feb 11, 2016 at 01:35:28PM -0700, Nathan Hjelm wrote:
>
> Jeff
Jeff probably ran with MPICH. Open MPI's results are consistent with our
choice of definition for size=0:
query: me=1, them=0, size=0, disp=1, base=0x0
query: me=1, them=1, size=4, disp=1, base=0x1097e30f8
query: me=1, them=2, size=4, disp=1, base=0x1097e30fc
query: me=1, them=3, size=4, disp=1,
You may be right semantically. But the sentence "the first address in the
memory segment of process i is consecutive with the last address in the memory
segment of process i - 1" is not easy to interpret correctly for a zero-size
segment.
There may be good reasons not to allocate the
Thanks Jeff, that was an interesting result. The pointers are well defined
here, even for the zero-size segment.
However I can't reproduce your output. I still get null pointers (output
below).
(I tried both the 1.8.5 and 1.10.2 versions.)
What could be the difference?
Peter
mpirun -np 4 a.out
See attached. Output below. Note that the base you get for ranks 0 and 1
is the same, so you need to use the fact that size=0 at rank 0 to know not
to dereference that pointer expecting to write into rank 0's memory,
since you would actually write into rank 1's.
I would probably add "if (size==0)
On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm wrote:
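The guard Jeff suggests might look like the following sketch (my own
illustration, not Jeff's attachment; it assumes an MPI-3 installation and is
launched with mpirun, and all variable names are made up):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, nranks;
    double *base;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    /* rank 0 asks for zero bytes, the others for one double each */
    MPI_Aint mysize = (rank == 0) ? 0 : (MPI_Aint)sizeof(double);
    MPI_Win_allocate_shared(mysize, sizeof(double), MPI_INFO_NULL,
                            MPI_COMM_WORLD, &base, &win);

    for (int r = 0; r < nranks; r++) {
        MPI_Aint size;
        int disp_unit;
        double *ptr;
        MPI_Win_shared_query(win, r, &size, &disp_unit, &ptr);
        if (size == 0)   /* the guard: an empty segment's pointer */
            continue;    /* must never be dereferenced            */
        printf("me=%d them=%d size=%ld base=%p\n",
               rank, r, (long)size, (void *)ptr);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Whether the queried base for a size=0 rank is NULL or aliases the next
nonzero segment is exactly the implementation difference discussed in this
thread, so the size check is the only portable test.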
I would add that the present situation is bound to give problems for some
users.
It is natural to divide an array in segments, each process treating its own
segment, but needing to read adjacent segments too.
MPI_Win_allocate_shared seems to be designed for this.
This will work fine as long
Yes, that is what I meant.
Enclosed is a C example.
The point is that the code would logically make sense for task 0, but since it
asks for a segment of size=0, it only gets a null pointer, which cannot be used
to access the shared parts.
Peter
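Peter's attachment is not reproduced here, but the use case he describes
(each process owning one segment of a shared array and reading its
neighbours') might look roughly like this sketch, assuming every rank
allocates a nonzero segment (the case that works; names are illustrative):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    const int nper = 4;               /* elements per rank */
    int rank, nranks;
    int *mine, *left;
    MPI_Aint lsize;
    int ldisp;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    MPI_Win_allocate_shared(nper * sizeof(int), sizeof(int),
                            MPI_INFO_NULL, MPI_COMM_WORLD, &mine, &win);
    for (int i = 0; i < nper; i++)
        mine[i] = rank * nper + i;    /* fill my own segment */
    MPI_Barrier(MPI_COMM_WORLD);      /* wait for neighbours' writes */

    if (rank > 0) {
        /* query rather than assume mine - nper: the window may be
           mapped at different addresses in each process */
        MPI_Win_shared_query(win, rank - 1, &lsize, &ldisp, &left);
        printf("rank %d reads neighbour's last element: %d\n",
               rank, left[nper - 1]);
    }

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

The failure mode Peter reports appears when one rank passes size=0 above and
then receives a null pointer it cannot use to reach the shared parts.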
- Original Message -
> I think
I think Peter's point is that if
- the window uses contiguous memory
*and*
- all tasks know how much memory was allocated by all other tasks in the
window
then it could/should be possible to get rid of MPI_Win_shared_query.
That is likely true if no task allocates zero bytes.
now, if a task
On Wed, Feb 10, 2016 at 8:44 AM, Peter Wind wrote:
> I agree that in practice the best practice would be to use
> Win_shared_query.
>
> Still I am confused by this part in the documentation:
> "The allocated memory is contiguous across process ranks unless the info
> key
I agree that in practice the best approach would be to use Win_shared_query.
Still I am confused by this part in the documentation:
"The allocated memory is contiguous across process ranks unless the info key
alloc_shared_noncontig is specified. Contiguous across process ranks means that
the
I don't know about bulletproof, but Win_shared_query is the *only* valid
way to get the addresses of memory in other processes associated with a
window.
The default for Win_allocate_shared is contiguous memory, but it can and
likely will be mapped differently into each process, in which case only
Peter,
The bulletproof way is to use MPI_Win_shared_query after
MPI_Win_allocate_shared.
I do not know if the current behavior is a bug or a feature...
Cheers,
Gilles
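The bulletproof pattern Gilles describes might be sketched as below
(my illustration, assuming an MPI-3 library; per MPI-3, passing
MPI_PROC_NULL as the rank returns the base of the lowest rank that
specified size > 0, so even zero-size ranks get a usable pointer):

```c
#include <mpi.h>

/* Recover the start of the shared window, skipping zero-size ranks. */
void get_window_start(MPI_Win win, double **start) {
    MPI_Aint size;
    int disp_unit;
    /* MPI_PROC_NULL: first rank that allocated size > 0 */
    MPI_Win_shared_query(win, MPI_PROC_NULL, &size, &disp_unit, start);
}
```

This is why querying, rather than relying on the pointer returned by
MPI_Win_allocate_shared itself, works regardless of which choice the
implementation makes for size=0.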
On Wednesday, February 10, 2016, Peter Wind wrote:
> Hi,
>
> Under fortran, MPI_Win_allocate_shared is called
Sorry for that, here is the attachment!
Peter
- Original Message -
> Peter --
>
> Somewhere along the way, your attachment got lost. Could you re-send?
>
> Thanks.
>
>
> > On Feb 10, 2016, at 5:56 AM, Peter Wind wrote:
> >
> > Hi,
> >
> > Under fortran,
Peter --
Somewhere along the way, your attachment got lost. Could you re-send?
Thanks.
> On Feb 10, 2016, at 5:56 AM, Peter Wind wrote:
>
> Hi,
>
> Under fortran, MPI_Win_allocate_shared is called with a window size of zero
> for some processes.
> The output pointer is
Hi,
Under Fortran, MPI_Win_allocate_shared is called with a window size of zero for
some processes.
The output pointer is then not valid for these processes (null pointer).
Did I understand this wrongly? Shouldn't the pointers be contiguous, so that
for a zero-sized window, the pointer should