Re: globalValue in parallel
Looking into the rest of the FiPy source, we're already calling allgather(sendobj) in several places, and rarely calling allgather(sendobj, recvobj). To preserve the existing function calls (all of which are lower-case) and mess with the code the least, removing the recvobj argument appears to be the right call after all. Working on the PR.

Trevor

From: fipy-boun...@nist.gov on behalf of Guyer, Jonathan E. Dr. (Fed)
Sent: Wednesday, April 27, 2016 4:39:05 PM
To: FIPY
Subject: Re: globalValue in parallel

It sounds like you're volunteering to put together the pull request with appropriate tests
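A minimal sketch of the change Trevor describes, assuming a recent mpi4py that has dropped the recvobj keyword; the class layout and constructor below are illustrative only, not the actual FiPy patch to fipy/tools/comms/mpi4pyCommWrapper.py:

    from mpi4py import MPI

    class Mpi4pyCommWrapper(object):
        """Illustrative wrapper around an mpi4py communicator."""

        def __init__(self, comm=MPI.COMM_WORLD):
            self.mpi4py_comm = comm

        def allgather(self, sendobj):
            # mpi4py's lower-case allgather pickles arbitrary Python objects
            # and returns a list with one entry per rank, so no receive
            # buffer (recvobj) argument is needed.
            return self.mpi4py_comm.allgather(sendobj)

Existing call sites such as self.mesh.communicator.allgather(globalIDs) keep working unchanged, since they only ever pass the send object.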
Re: globalValue in parallel
It sounds like you're volunteering to put together the pull request with appropriate tests

> On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) wrote:
>
> The mpi4py commit mentions that the receive object is no longer needed for the lower-case form of the commands. Browsing the full source shows that the upper-case commands retain both the send and receive objects. To avoid deviating too far from the MPI standard, I'd like to suggest changing the case (Allgather instead of allgather), rather than dropping buffers, in our mpi4pyCommWrapper.py.
>
> Trevor
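A hedged sketch of the kind of smoke test such a pull request might include; the test name and the expected length are assumptions, while Grid2D, CellVariable, and globalValue are taken from the thread below:

    import fipy as fp

    def test_globalValue_length():
        nx, ny = 10, 20
        m = fp.Grid2D(nx=nx, ny=ny)
        p = fp.CellVariable(mesh=m, value=1.)
        # globalValue gathers the full field onto every rank through the
        # communicator's allgather, so its length should equal the global
        # cell count no matter how many processes are running.
        assert len(p.globalValue) == nx * ny

    if __name__ == "__main__":
        test_globalValue_length()

Run under mpirun with several processes, this exercises exactly the code path that fails in the traceback further down the thread.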
Re: globalValue in parallel
The mpi4py commit mentions that the receive object is no longer needed for the lower-case form of the commands. Browsing the full source shows that the upper-case commands retain both the send and receive objects. To avoid deviating too far from the MPI standard, I'd like to suggest changing the case (Allgather instead of allgather), rather than dropping buffers, in our mpi4pyCommWrapper.py.

Trevor

From: fipy-boun...@nist.gov on behalf of Guyer, Jonathan E. Dr. (Fed)
Sent: Wednesday, April 27, 2016 3:53:39 PM
To: FIPY
Subject: Re: globalValue in parallel

It looks like 'recvobj' was removed from mpi4py about two years ago:

https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602

but I'm not sure when it made it into the released version.

It looks like you can safely edit fipy/tools/comms/mpi4pyCommWrapper.py to remove the 'recvobj' argument.

We'll do some tests and push a fix as soon as possible. Thanks for alerting us to the issue.

Filed as https://github.com/usnistgov/fipy/issues/491
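For context, a minimal sketch of the distinction being drawn here between mpi4py's pickle-based lower-case collectives and its MPI-standard, buffer-based upper-case collectives; the array contents are arbitrary and this is not FiPy code:

    import numpy
    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Lower-case form: pickles arbitrary Python objects and returns a list
    # with one entry per rank; recent mpi4py takes no receive argument.
    local = numpy.arange(3) + 10 * comm.rank
    gathered = comm.allgather(local)

    # Upper-case form: MPI-standard signature with explicit send and
    # receive buffers, which is what an Allgather-based wrapper would use.
    recvbuf = numpy.empty(3 * comm.size, dtype=local.dtype)
    comm.Allgather(local, recvbuf)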
Re: globalValue in parallel
It looks like 'recvobj' was removed from mpi4py about two years ago:

https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602

but I'm not sure when it made it into the released version.

It looks like you can safely edit fipy/tools/comms/mpi4pyCommWrapper.py to remove the 'recvobj' argument.

We'll do some tests and push a fix as soon as possible. Thanks for alerting us to the issue.

Filed as https://github.com/usnistgov/fipy/issues/491

> On Apr 27, 2016, at 2:23 PM, Kris Kuhlman wrote:
>
> I built the trilinos-capable version of fipy. It seems to work for serial (even for a non-trivial case), but I am getting errors with more than one processor with a simple call to globalValue(), which I was trying to use to make a plot by gathering the results to procID==0.
>
> I used the latest git version of mpi4py and trilinos. Am I doing something wrong (is there a different preferred way to gather things to a single processor to save or make plots?) or do I need to use a specific version of these packages and rebuild? It seems the function is expecting something with a different interface or call structure.
>
> Kris
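Since it is unclear above which mpi4py release first dropped the keyword, a quick illustrative probe (not part of FiPy) for checking which API the installed mpi4py exposes:

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    try:
        # Older mpi4py accepted a recvobj keyword on the pickle-based
        # collectives; newer releases reject it, which is exactly the
        # TypeError shown in the traceback below.
        comm.allgather(comm.rank, recvobj=None)
        print("old mpi4py API: recvobj still accepted")
    except TypeError:
        print("new mpi4py API: recvobj has been removed")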
Re: understanding convection terms
On Tue, Apr 26, 2016 at 10:57 AM, Kris Kuhlman wrote:
> Daniel,
>
> Thank you. I am a bit surprised that the CentralDifference basically matches the hybrid method, and is more accurate than upwind.

Remember that central difference is second-order accurate; the other schemes are first order. Of course, it's unstable when the Peclet number is above 2, which is why we use the other schemes. Basically, the schemes (hybrid, exponential, etc.) are ways to swap between central difference and upwind based on the local Peclet number.

--
Daniel Wheeler

_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
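A small sketch, loosely following the convection examples in the FiPy documentation, of how the schemes mentioned above can be swapped on a steady 1-D convection-diffusion problem; the grid size, diffusivity, and velocity here are arbitrary choices for illustration:

    import fipy as fp

    nx, dx = 100, 0.01
    D, u = 1.0, 10.0            # cell Peclet number = u * dx / D = 0.1 here
    m = fp.Grid1D(nx=nx, dx=dx)

    schemes = {
        "central difference": fp.CentralDifferenceConvectionTerm,
        "upwind": fp.UpwindConvectionTerm,
        "hybrid": fp.HybridConvectionTerm,
        "exponential": fp.ExponentialConvectionTerm,
    }

    for name, ConvectionTerm in schemes.items():
        var = fp.CellVariable(mesh=m, value=0.)
        var.constrain(0., m.facesLeft)
        var.constrain(1., m.facesRight)
        # The scheme only changes how the convective flux is interpolated
        # to the cell faces; the equation itself stays the same.
        eq = fp.DiffusionTerm(coeff=D) + ConvectionTerm(coeff=(u,))
        eq.solve(var=var)
        print("%s: %s" % (name, var.value[:5]))

At this cell Peclet number all four schemes should agree closely; raising u (or dx) past Peclet = 2 is where central difference becomes unstable and the blended schemes fall back toward upwind.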
globalValue in parallel
I built the trilinos-capable version of fipy. It seems to work for serial (even for a non-trivial case), but I am getting errors with more than one processor with a simple call to globalValue(), which I was trying to use to make a plot by gathering the results to procID==0.

I used the latest git version of mpi4py and trilinos. Am I doing something wrong (is there a different preferred way to gather things to a single processor to save or make plots?) or do I need to use a specific version of these packages and rebuild? It seems the function is expecting something with a different interface or call structure.

Kris

python test.py
hello from 0 out of 1 [array of ones, one value per cell]

`--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 1 python test.py
hello from 0 out of 1 [array of ones, one value per cell]
--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 2 python test.py
hello from 1 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    print 'hello from',fp.tools.parallel.procID,'out of',fp.tools.parallel.Nproc,p.globalValue
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py", line 163, in globalValue
    self.mesh._globalNonOverlappingCellIDs)
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py", line 171, in _getGlobalValue
    globalIDs = numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py", line 75, in allgather
    return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
hello from 0 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    print 'hello from',fp.tools.parallel.procID,'out of',fp.tools.parallel.Nproc,p.globalValue
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py", line 163, in globalValue
    self.mesh._globalNonOverlappingCellIDs)
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py", line 171, in _getGlobalValue
    globalIDs = numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py", line 75, in allgather
    return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
-------------------------------------------------------
Primary job terminated normally, but 1 process returned a non-zero exit code.. Per user-direction, the job has been aborted.
-------------------------------------------------------
--------------------------------------------------------------------------
mpirun detected that one or more processes exited with non-zero status, thus causing the job to be terminated. The first process to do so was:

  Process name: [[1719,1],1]
  Exit code:    1
--------------------------------------------------------------------------

import fipy as fp
m = fp.Grid2D(nx=10,ny=20)
p = fp.CellVariable(
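The script is cut off in the archive; a hypothetical completion, reconstructed from the traceback and the printed output above (the CellVariable arguments are an assumption), would be something like:

    import fipy as fp

    m = fp.Grid2D(nx=10, ny=20)
    # value=1. is assumed from the array of ones in the serial output
    p = fp.CellVariable(mesh=m, value=1.)

    # the print statement as it appears in the traceback (Python 2 syntax)
    print 'hello from', fp.tools.parallel.procID, 'out of', fp.tools.parallel.Nproc, p.globalValue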