Re: globalValue in parallel

2016-05-05 Thread Kris Kuhlman
Thanks for the clarification. I refactored the figure-generation part of
my script, circumventing the issue. It appears to be working in parallel
now.

Kris


Re: globalValue in parallel

2016-05-05 Thread Guyer, Jonathan E. Dr. (Fed)
I'm not altogether surprised by this. Probably the right answer is that we 
shouldn't even allow you to ask for the .globalValue of a FaceVariable. 
FaceVariables just aren't part of the map that Trilinos Epetra knows about, so 
there's really no sensible way to gather.

For visualization, I think you have a couple of choices. One is to gather the 
cell values and then interpolate or otherwise calculate the face values from 
those cell values; you could either load your data into a serial FiPy instance 
or use your visualization tool to do those calculations. The other option is to 
write the sub-meshes from each process into separate VTK files and use ParaView 
or VisIt to render the parallel files as a single visualization.
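
A rough sketch of those two routes (the mesh, field, and file names below are 
placeholders rather than anything from the original script, and plain NumPy 
files stand in for whatever format the visualization tool prefers):

import fipy as fp
import numpy as np

mesh = fp.Grid2D(nx=20, ny=20)
phi = fp.CellVariable(mesh=mesh, value=1.)
rank = fp.tools.parallel.procID

# Route 1: gather the cell values (a collective call, so every rank makes it);
# rank 0 can then interpolate face quantities or feed the data to a serial
# FiPy instance.
global_phi = phi.globalValue
if rank == 0:
    np.save("phi_global.npy", np.asarray(global_phi))

# Route 2: write this rank's sub-mesh and local values to its own file,
# then assemble the pieces in ParaView or VisIt.
np.savez("subdomain_%03d.npz" % rank,
         centers=np.asarray(mesh.cellCenters.value),
         phi=np.asarray(phi.value))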


Re: globalValue in parallel

2016-05-05 Thread Kris Kuhlman
I believe I made the change discussed above to my local version of fipy,
and I no longer get the error when calling the globalValue attribute of a
mesh.

I now notice that the .globalValue of a CellVariable seems to work as I
would expect, while for a FaceVariable the .globalValue still doesn't
give me the whole domain.

The attached script prints the .shape of .value and .globalValue on each
processor (see output below).
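
(A minimal reconstruction of that script, for reference; the 20x20 grid is
inferred from the 400-cell / 840-face shapes below, and the variable names
are guesses:)

import fipy as fp

m = fp.Grid2D(nx=20, ny=20)            # 400 cells, 840 faces
c = fp.CellVariable(mesh=m, value=1.)
f = fp.FaceVariable(mesh=m, value=1.)

pid = fp.tools.parallel.procID
print 'hello from', pid, 'out of', fp.tools.parallel.Nproc
print pid, 'cellVariable', c.value.shape, c.globalValue.shape
print pid, 'faceVariable', f.value.shape, f.globalValue.shape
print pid, 'center coordinates', m.cellCenters.value.shape, m.cellCenters.globalValue.shape
print pid, 'face coordinates', m.faceCenters.value.shape, m.faceCenters.globalValue.shape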

Is something wrong, or is there a different/better way to do this?

Thanks,

Kris

---

> mpirun -n 1 python test.py
hello from 0 out of 1
0 cellVariable (400,) (400,)

0 faceVariable (840,) (840,)

0 center coordinates (2, 400) (2, 400)

0 face coordinates (2, 840) (2, 840)

>mpirun -n 2 python test.py
hello from 0 out of 2
hello from 1 out of 2
0 cellVariable (240,) (400,)

1 cellVariable (240,) (400,)

0 faceVariable (512,) (512,)

1 faceVariable (512,) (512,)

1 center coordinates (2, 240) (2, 400)

0 center coordinates (2, 240) (2, 400)

0 face coordinates (2, 512) (2, 512)
1 face coordinates (2, 512) (2, 512)

On Fri, Apr 29, 2016 at 12:26 PM, Guyer, Jonathan E. Dr. (Fed) <
jonathan.gu...@nist.gov> wrote:

> Absolutely
>
> > On Apr 29, 2016, at 11:42 AM, Kris Kuhlman <kristopher.kuhl...@gmail.com>
> wrote:
> >
> > Thanks for figuring this out. Will the patched fipy version be available
> from the github repository?
> >
> > Kris

Re: globalValue in parallel

2016-04-29 Thread Kris Kuhlman
Thanks for figuring this out. Will the patched FiPy version be available
from the GitHub repository?

Kris


Re: globalValue in parallel

2016-04-27 Thread Keller, Trevor (Fed)
Looking into the rest of the FiPy source, we already call allgather(sendobj) in 
several places and only rarely call allgather(sendobj, recvobj). To preserve the 
existing (all lower-case) calls and disturb the code as little as possible, 
removing the recvobj argument appears to be the right fix after all.

Working on the PR.

Trevor
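
For concreteness, the change being discussed amounts to something like the 
following sketch of a pickle-based allgather wrapper (the class name here is 
illustrative rather than the exact FiPy source; the mpi4py_comm attribute and 
the allgather call mirror the traceback in the original report at the bottom 
of this thread):

from mpi4py import MPI

class CommWrapper(object):
    def __init__(self, comm=MPI.COMM_WORLD):
        self.mpi4py_comm = comm

    def allgather(self, sendobj):
        # newer mpi4py removed the 'recvobj' keyword from the lower-case,
        # pickle-based collectives, so only sendobj is forwarded
        return self.mpi4py_comm.allgather(sendobj)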



Re: globalValue in parallel

2016-04-27 Thread Guyer, Jonathan E. Dr. (Fed)
It sounds like you're volunteering to put together the pull request with 
appropriate tests.

> On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) <trevor.kel...@nist.gov> 
> wrote:
> 
> The mpi4py commit mentions that the receive object is no longer needed for 
> the lower-case form of the commands. Browsing the full source shows that the 
> upper-case commands retain both the send and receive objects. To avoid 
> deviating too far from the MPI standard, I'd like to suggest changing the 
> case (Allgather instead of allgather), rather than dropping buffers, in our 
> mpi4pyCommWrapper.py.
> 
> Trevor
> 
> 
> 
> From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Guyer, 
> Jonathan E. Dr. (Fed) <jonathan.gu...@nist.gov>
> Sent: Wednesday, April 27, 2016 3:53:39 PM
> To: FIPY
> Subject: Re: globalValue in parallel
> 
> It looks like 'recvobj' was removed from mpi4py about two years ago:
> 
> https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602
> 
> but I'm not sure when it made it into the released version.
> 
> 
> It looks like you can safely edit fipy/tools/comms/mpi4pyCommWrapper.py to 
> remove the 'recvobj' argument.
> 
> 
> We'll do some tests and push a fix as soon as possible. Thanks for alerting 
> us to the issue.
> 
> Filed as https://github.com/usnistgov/fipy/issues/491
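
As a standalone mpi4py comparison of the two forms discussed above (independent 
of FiPy; the three-element arrays are arbitrary):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
local = np.arange(3) + 3 * comm.rank

# lower-case, pickle-based: returns a list, no receive object needed
pieces = comm.allgather(local)

# upper-case, buffer-based: the caller supplies the receive buffer
recvbuf = np.empty(3 * comm.size, dtype=local.dtype)
comm.Allgather(local, recvbuf)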

globalValue in parallel

2016-04-27 Thread Kris Kuhlman
I built the Trilinos-capable version of FiPy. It seems to work in serial
(even for a non-trivial case), but I am getting errors on more than one
processor from a simple use of .globalValue, which I was trying to use to
make a plot by gathering the results to procID == 0.

I used the latest git versions of mpi4py and Trilinos. Am I doing something
wrong (is there a different, preferred way to gather things to a single
processor to save or make plots?), or do I need to use a specific version of
these packages and rebuild? It seems the function is expecting something
with a different interface or call structure.
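
(A minimal sketch of the gather-to-rank-0 plotting pattern being attempted;
matplotlib and the 10x20 grid are assumptions:)

import fipy as fp

mesh = fp.Grid2D(nx=10, ny=20)
phi = fp.CellVariable(mesh=mesh, value=1.)

vals = phi.globalValue                  # collective gather of the whole field
if fp.tools.parallel.procID == 0:
    import matplotlib
    matplotlib.use('Agg')               # render without a display
    import matplotlib.pyplot as plt
    plt.imshow(vals.reshape(20, 10))    # (ny, nx); x varies fastest on a Grid2D
    plt.savefig('phi.png')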

Kris

python test.py
hello from 0 out of 1 [ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.]

`--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 1 python
test.py
hello from 0 out of 1 [ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
  1.  1.]

--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 2 python test.py
hello from 1 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
print 'hello from',fp.tools.parallel.procID,'out
of',fp.tools.parallel.Nproc,p.globalValue
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py",
line 163, in globalValue
self.mesh._globalNonOverlappingCellIDs)
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py",
line 171, in _getGlobalValue
globalIDs =
numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py",
line 75, in allgather
return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather
(src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
hello from 0 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
print 'hello from',fp.tools.parallel.procID,'out
of',fp.tools.parallel.Nproc,p.globalValue
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py",
line 163, in globalValue
self.mesh._globalNonOverlappingCellIDs)
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py",
line 171, in _getGlobalValue
globalIDs =
numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File
"/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py",
line 75, in allgather
return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather
(src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
---
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
---
--
mpirun detected that one or more processes exited with non-zero status,
thus causing
the job to be terminated. The first process to do so was:

  Process name: [[1719,1],1]
  Exit code:1
--
import fipy as fp

m = fp.Grid2D(nx=10, ny=20)
p = fp.CellVariable(mesh=m, value=1.)  # value inferred from the array of 1. printed above; the archived message is truncated here

print 'hello from', fp.tools.parallel.procID, 'out of', fp.tools.parallel.Nproc, p.globalValue