Re: [OMPI users] MPI_Ialltoallv

2018-07-09 Thread Stanfield, Clyde
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Gilles Gouaillardet Sent: Friday, July 06, 2018 11:16 AM To: Open MPI Users Subject: Re: [OMPI users] MPI_Ialltoallv Clyde, thanks

Re: [OMPI users] MPI_Ialltoallv

2018-07-06 Thread Stanfield, Clyde
via users Sent: Friday, July 06, 2018 10:57 AM To: Open MPI Users Cc: Nathan Hjelm Subject: Re: [OMPI users] MPI_Ialltoallv No, that's a bug. Please open an issue on GitHub and we will fix it shortly. Thanks for reporting this issue. -Nathan > On Jul 6, 2018, at 8:08 AM, Stanfield, Cl

Re: [OMPI users] MPI_Ialltoallv

2018-07-06 Thread Gilles Gouaillardet
Clyde, thanks for reporting the issue. Can you please give the attached patch a try? Cheers, Gilles FWIW, the nbc module was not initially specific to Open MPI, and hence used standard MPI subroutines. In this case, we can avoid the issue by calling internal Open MPI subroutines. This is an

Re: [OMPI users] MPI_Ialltoallv

2018-07-06 Thread Nathan Hjelm via users
No, that's a bug. Please open an issue on GitHub and we will fix it shortly. Thanks for reporting this issue. -Nathan > On Jul 6, 2018, at 8:08 AM, Stanfield, Clyde > wrote: > > We are using MPI_Ialltoallv for an image processing algorithm. When doing > this we pass in an MPI_Type_contiguous

[OMPI users] MPI_Ialltoallv

2018-07-06 Thread Stanfield, Clyde
We are using MPI_Ialltoallv for an image processing algorithm. When doing this we pass in an MPI_Type_contiguous with an MPI_Datatype of MPI_C_FLOAT_COMPLEX, which ends up being the size of multiple rows of the image (based on the number of nodes used for distribution). In addition, sendcounts,