PR #3500 (https://github.com/open-mpi/ompi/pull/3500) should fix the
problem. It is not optimal, but it is simple and works in all cases.
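Until the PR lands, one possible user-level workaround is to emulate the
replace with a separate temporary receive buffer, so that MPI_ANY_SOURCE
never hits the in-place code path. This is only a sketch of mine, not the
fix in the PR; the helper name is made up, and it assumes a datatype with
a zero lower bound (true for MPI_LONG_LONG and the other basic types):

```c
#include <mpi.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper (not part of Open MPI): do the "replace" manually.
 * Receive into a scratch buffer with plain MPI_Sendrecv, then copy the
 * result back over the caller's buffer. Because the send and receive
 * buffers are distinct, the MPI_ANY_SOURCE restriction does not apply. */
static int sendrecv_replace_workaround(void *buf, int count, MPI_Datatype dt,
                                       int dest, int sendtag,
                                       int source, int recvtag,
                                       MPI_Comm comm, MPI_Status *status)
{
    MPI_Aint lb, extent;
    MPI_Type_get_extent(dt, &lb, &extent);       /* bytes per element */
    void *tmp = malloc((size_t)count * extent);  /* scratch recv buffer */
    if (tmp == NULL)
        return MPI_ERR_NO_MEM;

    int rc = MPI_Sendrecv(buf, count, dt, dest, sendtag,
                          tmp, count, dt, source, recvtag, comm, status);
    if (rc == MPI_SUCCESS)
        memcpy(buf, tmp, (size_t)count * extent);  /* the "replace" step */
    free(tmp);
    return rc;
}
```

In the repro below, the MPI_Sendrecv_replace call could be swapped for this
helper with the same argument list.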

  George.


On Tue, May 9, 2017 at 2:39 PM, George Bosilca <bosi...@icl.utk.edu> wrote:

> Please go ahead and open an issue, I will attach the PR once I have the
> core ready. A little later today I think.
>
>   George.
>
>
> On May 9, 2017, at 14:32 , Dahai Guo <dahai....@gmail.com> wrote:
>
> Hi, George:
>
> Any progress on it? Should an issue be opened on GitHub, or have you
> already opened one?
>
> Dahai
>
> On Fri, May 5, 2017 at 1:27 PM, George Bosilca <bosi...@icl.utk.edu>
> wrote:
>
>> Indeed, our current implementation of MPI_Sendrecv_replace prohibits
>> the use of MPI_ANY_SOURCE. I will work on a patch later today.
>>
>>   George.
>>
>>
>> On Fri, May 5, 2017 at 11:49 AM, Dahai Guo <dahai....@gmail.com> wrote:
>>
>>> The following code causes a memory fault. An initial check suggests it
>>> is caused by *ompi_comm_peer_lookup* with MPI_ANY_SOURCE, which somehow
>>> corrupts the allocated temporary buffer used in MPI_Sendrecv_replace.
>>>
>>> any idea?
>>>
>>> Dahai
>>>
>>> #include <mpi.h>
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>>
>>> int main(int argc, char *argv[]) {
>>>
>>>    int          numtask, myrank;
>>>    int          count;
>>>
>>>    MPI_Status   status;
>>>
>>>    long long    host_buf[4];
>>>
>>>    int          malloc_size;
>>>    int          dest;
>>>
>>>     MPI_Init(&argc,&argv);
>>>
>>>     MPI_Comm_rank(MPI_COMM_WORLD, &myrank);
>>>     fprintf(stdout,"my rank is %d\n",myrank);
>>>
>>>     MPI_Comm_size(MPI_COMM_WORLD, &numtask);
>>>     fprintf(stdout,"Num Task is %d\n",numtask);
>>>
>>>     malloc_size=32;
>>>     count = malloc_size / sizeof(long long);
>>>     dest = (myrank+1)%2;
>>>     fprintf(stdout,"my dest is %d\n",dest);
>>>
>>>
>>>     host_buf[0] = 100 + myrank;
>>>     host_buf[1] = 200 + myrank;
>>>     host_buf[2] = 300 + myrank;
>>>     host_buf[3] = 400 + myrank;
>>>
>>>     fprintf(stdout,"BEFORE %lld %lld %lld %lld\n",
>>>             host_buf[0],host_buf[1],host_buf[2],host_buf[3]);
>>>     fflush(stdout);
>>>     fprintf(stdout,"Doing sendrecv_replace with host buffer\n");
>>>     fflush(stdout);
>>>
>>>     MPI_Sendrecv_replace(host_buf, count, MPI_LONG_LONG,
>>>                          dest, myrank,          /* dest, sendtag   */
>>>                          MPI_ANY_SOURCE, dest,  /* source, recvtag */
>>>                          MPI_COMM_WORLD, &status);
>>>
>>>     fprintf(stdout,"Back from doing sendrecv_replace with host buffer\n");
>>>     fprintf(stdout,"AFTER %lld %lld %lld %lld\n",
>>>             host_buf[0],host_buf[1],host_buf[2],host_buf[3]);
>>>     fflush(stdout);
>>>
>>>
>>>     MPI_Finalize();
>>>     exit(0);
>>> }
>>>
>>>
>>> _______________________________________________
>>> devel mailing list
>>> devel@lists.open-mpi.org
>>> https://rfd.newmexicoconsortium.org/mailman/listinfo/devel
>>>
>>
>>
>>
>
>
>
>
