Re: [OMPI users] Problems with GATHERV on one process
Excellent. Thanks.

-Ken

> -----Original Message-----
> From: users-boun...@open-mpi.org On Behalf Of Jeff Squyres
> Sent: Thursday, December 13, 2007 6:02 AM
> To: Open MPI Users
> Subject: Re: [OMPI users] Problems with GATHERV on one process
>
> Correct. Here's the original commit that fixed the problem:
>
> https://svn.open-mpi.org/trac/ompi/changeset/16360
>
> And the commit to the v1.2 branch:
>
> https://svn.open-mpi.org/trac/ompi/changeset/16519
>
> --
> Jeff Squyres
> Cisco Systems
Re: [OMPI users] Problems with GATHERV on one process
Thanks Tim. I've since noticed similar problems with MPI_Allgatherv and
MPI_Scatterv. I'm guessing they are all related. Do you happen to know
if those are being fixed as well?

-Ken

> -----Original Message-----
> From: users-boun...@open-mpi.org On Behalf Of Tim Mattox
> Sent: Tuesday, December 11, 2007 3:34 PM
> To: Open MPI Users
> Subject: Re: [OMPI users] Problems with GATHERV on one process
>
> Hello Ken,
> This is a known bug, which is fixed in the upcoming 1.2.5 release. We
> expect 1.2.5 to come out very soon. We should have a new release
> candidate for 1.2.5 posted by tomorrow.
>
> See these tickets about the bug if you care to look:
> https://svn.open-mpi.org/trac/ompi/ticket/1166
> https://svn.open-mpi.org/trac/ompi/ticket/1157
Re: [OMPI users] Problems with GATHERV on one process
Hello Ken,
This is a known bug, which is fixed in the upcoming 1.2.5 release. We
expect 1.2.5 to come out very soon. We should have a new release
candidate for 1.2.5 posted by tomorrow.

See these tickets about the bug if you care to look:
https://svn.open-mpi.org/trac/ompi/ticket/1166
https://svn.open-mpi.org/trac/ompi/ticket/1157

--
Tim Mattox, Ph.D. - http://homepage.mac.com/tmattox/
tmat...@gmail.com || timat...@open-mpi.org
[OMPI users] Problems with GATHERV on one process
I recently ran into a problem with GATHERV while running some randomized
tests on my MPI code. The problem seems to occur when running
MPI_Gatherv with a displacement on a communicator with a single process.
The code listed below exercises this errant behavior. I have tried it
on OpenMPI 1.1.2 and 1.2.4.

Granted, this is not a situation that one would normally run into in a
real application, but I just wanted to check to make sure I was not
doing anything wrong.

-Ken


#include <mpi.h>

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
  int rank;
  MPI_Comm smallComm;
  int senddata[4], recvdata[4], length, offset;

  MPI_Init(&argc, &argv);

  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Split up into communicators of size 1.
  MPI_Comm_split(MPI_COMM_WORLD, rank, 0, &smallComm);

  // Now try to do a gatherv.
  senddata[0] = 5; senddata[1] = 6; senddata[2] = 7; senddata[3] = 8;
  recvdata[0] = 0; recvdata[1] = 0; recvdata[2] = 0; recvdata[3] = 0;
  length = 3;
  offset = 1;
  MPI_Gatherv(senddata, length, MPI_INT,
              recvdata, &length, &offset, MPI_INT, 0, smallComm);
  if (senddata[0] != recvdata[offset])
    {
    printf("%d: %d != %d?\n", rank, senddata[0], recvdata[offset]);
    }
  else
    {
    printf("%d: Everything OK.\n", rank);
    }

  MPI_Finalize();
  return 0;
}

Kenneth Moreland
Sandia National Laboratories
email: kmo...@sandia.gov
phone: (505) 844-8919
fax:   (505) 845-0833