See attached; output below. Note that the base you get for ranks 0 and 1
is the same, so you need to use the fact that size=0 at rank 0 to know not
to dereference that pointer expecting to write into rank 0's memory, since
you would actually be writing into rank 1's.

I would probably add "if (size==0) base=NULL;" for good measure.
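A minimal sketch of that guard, assuming win is the shared window and rank
is the rank being queried (variable names are illustrative, not taken from
the attached program):

    MPI_Aint size;
    int disp;
    void *base;
    MPI_Win_shared_query(win, rank, &size, &disp, &base);
    if (size == 0) {
        /* this rank contributed no memory; here its base aliases the next
           non-empty segment, so don't write through it */
        base = NULL;
    }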

Jeff

$ mpirun -n 4 ./a.out

query: me=0, them=0, size=0, disp=1, base=0x10bd64000
query: me=0, them=1, size=4, disp=1, base=0x10bd64000
query: me=0, them=2, size=4, disp=1, base=0x10bd64004
query: me=0, them=3, size=4, disp=1, base=0x10bd64008
query: me=0, them=PROC_NULL, size=4, disp=1, base=0x10bd64000
query: me=1, them=0, size=0, disp=1, base=0x102d3b000
query: me=1, them=1, size=4, disp=1, base=0x102d3b000
query: me=1, them=2, size=4, disp=1, base=0x102d3b004
query: me=1, them=3, size=4, disp=1, base=0x102d3b008
query: me=1, them=PROC_NULL, size=4, disp=1, base=0x102d3b000
query: me=2, them=0, size=0, disp=1, base=0x10aac1000
query: me=2, them=1, size=4, disp=1, base=0x10aac1000
query: me=2, them=2, size=4, disp=1, base=0x10aac1004
query: me=2, them=3, size=4, disp=1, base=0x10aac1008
query: me=2, them=PROC_NULL, size=4, disp=1, base=0x10aac1000
query: me=3, them=0, size=0, disp=1, base=0x100fa2000
query: me=3, them=1, size=4, disp=1, base=0x100fa2000
query: me=3, them=2, size=4, disp=1, base=0x100fa2004
query: me=3, them=3, size=4, disp=1, base=0x100fa2008
query: me=3, them=PROC_NULL, size=4, disp=1, base=0x100fa2000

On Thu, Feb 11, 2016 at 8:55 AM, Jeff Hammond <jeff.scie...@gmail.com>
wrote:

>
>
> On Thu, Feb 11, 2016 at 8:46 AM, Nathan Hjelm <hje...@lanl.gov> wrote:
> >
> >
> > On Thu, Feb 11, 2016 at 02:17:40PM +0000, Peter Wind wrote:
> > >    I would add that the present situation is bound to give problems
> > >    for some users.
> > >    It is natural to divide an array into segments, each process
> > >    treating its own segment, but needing to read adjacent segments
> > >    too. MPI_Win_allocate_shared seems to be designed for this.
> > >    This will work fine as long as no segment has size zero. It can
> > >    also be expected that most testing would be done with all segments
> > >    larger than zero.
> > >    Adding to the document that "size = 0 is valid" would also make
> > >    people confident that it will be consistent for that special case
> > >    too.
> >
> > Nope, that statement says it's OK for a rank to specify that the local
> > shared memory segment is 0 bytes. Nothing more. The standard
> > unfortunately does not define what pointer value is returned for a rank
> > that specifies size = 0. Not sure if the RMA working group intentionally
> > left that undefined... Anyway, Open MPI does not appear to be out of
> > compliance with the standard here.
> >
>
> MPI_Alloc_mem doesn't say what happens if you pass size=0 either.  The RMA
> working group intentionally tries to maintain consistency with the rest of
> the MPI standard whenever possible, so we did not create a new semantic
> here.
>
> MPI_Win_shared_query text includes this:
>
> "If all processes in the group attached to the window specified size = 0,
> then the call returns size = 0 and a baseptr as if MPI_ALLOC_MEM was called
> with size = 0."
>
> >
> > To be safe you should use MPI_Win_shared_query as suggested. You can
> > pass MPI_PROC_NULL as the rank to get the pointer for the first non-zero
> > sized segment in the shared memory window.
>
> Indeed!  I forgot about that.  MPI_Win_shared_query solves this problem
> for the user brilliantly.
>
> Jeff
>
> --
> Jeff Hammond
> jeff.scie...@gmail.com
> http://jeffhammond.github.io/
>



-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
#include <mpi.h>
#include <stdio.h>

/* Test a zero-size shared-memory segment.
   Run on at least 3 processes, e.g.:
   mpirun -np 4 ./a.out */

int main(int argc, char** argv)
{
    MPI_Init(NULL, NULL);

    int wsize, wrank;
    MPI_Comm_size(MPI_COMM_WORLD, &wsize);
    MPI_Comm_rank(MPI_COMM_WORLD, &wrank);

    MPI_Comm ncomm = MPI_COMM_NULL;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0, MPI_INFO_NULL, &ncomm);

    /* Rank 0 contributes a zero-size segment; every other rank contributes one int. */
    MPI_Aint size = (wrank==0) ? 0 : sizeof(int);
    MPI_Win win = MPI_WIN_NULL;
    int * ptr = NULL;
    /* Allocate the shared window on the shared-memory communicator, not MPI_COMM_WORLD. */
    MPI_Win_allocate_shared(size, 1, MPI_INFO_NULL, ncomm, &ptr, &win);

    int nsize, nrank;
    MPI_Comm_size(ncomm, &nsize);
    MPI_Comm_rank(ncomm, &nrank);

    /* Query each rank's segment: size, displacement unit, and base address in this process. */
    for (int r=0; r<nsize; r++) {
        MPI_Aint qsize = 0;
        int qdisp = 0;
        void * qbase = NULL;
        MPI_Win_shared_query(win, r, &qsize, &qdisp, &qbase);
        printf("query: me=%d, them=%d, size=%zu, disp=%d, base=%p\n", nrank, r, qsize, qdisp, qbase);
    }
    fflush(stdout);
    MPI_Barrier(MPI_COMM_WORLD);
    /* With MPI_PROC_NULL, the query returns the first non-zero-sized segment in the window. */
    {
        MPI_Aint qsize = 0;
        int qdisp = 0;
        void * qbase = NULL;
        MPI_Win_shared_query(win, MPI_PROC_NULL, &qsize, &qdisp, &qbase);
        printf("query: me=%d, them=PROC_NULL, size=%zu, disp=%d, base=%p\n", nrank, qsize, qdisp, qbase);
    }
    fflush(stdout);

    MPI_Win_free(&win);

    MPI_Comm_free(&ncomm);

    MPI_Finalize();
}
