Re: Newton iteration

2016-05-13 Thread Kris Kuhlman
Thank you for posting this.

I had initially figured I would approximate the variation in x with a
finite-difference approximation, which would be much less accurate but
simpler. Using analytical derivatives makes sense, though.
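
For comparison, a minimal sketch of the two options for a scalar residual
F(x) = 0 (F, dFdx, and newton_step are hypothetical stand-ins, not FiPy API):

def newton_step(F, x, dFdx=None, eps=1.0e-8):
    """One Newton update: analytical Jacobian if supplied, otherwise a
    forward-difference approximation (simpler to code, less accurate)."""
    if dFdx is not None:
        J = dFdx(x)                    # analytical derivative
    else:
        J = (F(x + eps) - F(x)) / eps  # finite-difference approximation
    return x - F(x) / J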

Kris

On Fri, May 13, 2016 at 12:12 PM, Guyer, Jonathan E. Dr. (Fed) <
jonathan.gu...@nist.gov> wrote:

> I have posted an implementation at
>
>   https://gist.github.com/guyer/f29c759fd7f0f01363b8483c7bc644cb
>
> I'm not sure the way that I determine the Jacobian expression is
> completely legitimate, but it seems to work. Please don't hesitate to ask
> any questions (or offer corrections!).
>
>
>
> > On May 11, 2016, at 4:57 PM, Guyer, Jonathan E. Dr. (Fed) <
> jonathan.gu...@nist.gov> wrote:
> >
> > I'm not sure I have anything posted publicly. I will put together a
> minimal example.
> >
> >> On May 11, 2016, at 12:42 PM, Daniel Wheeler 
> wrote:
> >>
> >> Hi Kris,
> >>
> >> FiPy doesn't have an automated way to do Newton iterations. You can
> >> always construct your own Newton iteration scheme using the terms and
> >> equations as you would ordinarily, but then you have to do the
> >> variational derivatives and the coupling by hand. This also assumes
> >> that you are familiar with the Newton method. You can query an
> >> equation for its residual, which then needs to be added to the Newton
> >> version of the equation. I think that means that each equation
> >> requires two implementations, the regular and the Newton.
> >>
> >> Regarding examples of using FiPy with Newton iterations, I don't
> >> believe that we have any examples in the source code although I do
> >> know that some people have used it in this way including Jon Guyer. He
> >> may have examples in Github somewhere that would help you get started,
> >> but I'll let him point you to them.
> >>
> >> Cheers,
> >>
> >> Daniel
> >>
> >> On Tue, May 10, 2016 at 9:31 AM, Kris Kuhlman
> >>  wrote:
> >>> I am interested in trying to use Newton iterations, rather than simple
> >>> fixed-point iterations, to speed up the convergence of the non-linear
> >>> iterations in my FiPy problem.
> >>>
> >>> I have found this mention of a term useful for Newton iterations:
> >>>
> >>>
> http://www.ctcms.nist.gov/fipy/fipy/generated/fipy.terms.html#module-fipy.terms.residualTerm
> >>>
> >>> and I see this mention of an example using Newton iterations:
> >>>
> >>> https://github.com/usnistgov/fipy/wiki/ScharfetterGummel
> >>>
> >>> but I don't see the actual code it is talking about. Is there an
> example
> >>> available somewhere?
> >>>
> >>> Kris
> >>>
> >>
> >>
> >>
> >> --
> >> Daniel Wheeler
> >
> >
>
>


Newton iteration

2016-05-10 Thread Kris Kuhlman
I am interested in trying to use Newton iterations, rather than simple
fixed-point iterations, to speed up the convergence of the non-linear
iterations in my FiPy problem.

I have found this mention of a term useful for Newton iterations:

http://www.ctcms.nist.gov/fipy/fipy/generated/fipy.terms.html#module-fipy.terms.residualTerm

and I see this mention of an example using Newton iterations:

https://github.com/usnistgov/fipy/wiki/ScharfetterGummel

but I don't see the actual code it is talking about. Is there an example
available somewhere?

Kris


Re: globalValue in parallel

2016-05-05 Thread Kris Kuhlman
Thanks for the clarification. I re-factored the figure-generation part of
my script, circumventing the issue. It appears to be working in parallel
now.

Kris
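
The refactor boils down to something like the sketch below, following Jon's
first suggestion in the quoted message (phi, the 10x20 grid, and the file
name stand in for my real problem):

# in the parallel run: every process gets the full-domain cell values
import numpy as np
import fipy as fp

global_phi = phi.globalValue
if fp.tools.parallel.procID == 0:
    np.save('phi.npy', global_phi)

# later, in a separate *serial* session: rebuild and interpolate to faces
import numpy as np
import fipy as fp

mesh = fp.Grid2D(nx=10, ny=20)
phi = fp.CellVariable(mesh=mesh, value=np.load('phi.npy'))
face_phi = phi.arithmeticFaceValue  # face values for plotting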

On Thu, May 5, 2016 at 11:49 AM, Guyer, Jonathan E. Dr. (Fed) <
jonathan.gu...@nist.gov> wrote:

> I'm not altogether surprised by this. Probably the right answer is that we
> shouldn't even allow you to ask for the .globalValue of a FaceVariable.
> FaceVariables just aren't part of the map that Trilinos Epetra knows about,
> so there's really no sensible way to gather.
>
> For visualization, I think you have a couple of choices. One is to gather
> the cell values and then interpolate or otherwise calculate the face values
> from those cell values; you could either load your data into a serial FiPy
> instance or use your visualization tool to do those calculations. The other
> option is to write the sub-meshes from each process into separate VTK
> files and use ParaView or VisIt to render the parallel files as a single
> visualization.
>
> > On May 5, 2016, at 10:34 AM, Kris Kuhlman 
> wrote:
> >
> > I believe I made the change discussed above to my local version of fipy,
> and I no longer get the error when calling the globalValue attribute of a
> mesh.
> >
> > I now notice that the .globalValue of a cellVariable seems to work as I
> > would imagine, while for faceVariables the .globalValue still doesn't
> > give me the whole domain.
> >
> > The attached script prints the .shape of .value and .globalValue on each
> processor (see output below).
> >
> > Is something wrong, or is there a different/better way to do this?
> >
> > Thanks,
> >
> > Kris
> >
> > ---
> >
> > > mpirun -n 1 python test.py
> > hello from 0 out of 1
> > 0 cellVariable (400,) (400,)
> >
> > 0 faceVariable (840,) (840,)
> >
> > 0 center coordinates (2, 400) (2, 400)
> >
> > 0 face coordinates (2, 840) (2, 840)
> >
> > >mpirun -n 2 python test.py
> > hello from 0 out of 2
> > hello from 1 out of 2
> > 0 cellVariable (240,) (400,)
> >
> > 1 cellVariable (240,) (400,)
> >
> > 0 faceVariable (512,) (512,)
> >
> > 1 faceVariable (512,) (512,)
> >
> > 1 center coordinates (2, 240) (2, 400)
> >
> > 0 center coordinates (2, 240) (2, 400)
> >
> > 0 face coordinates (2, 512) (2, 512)
> > 1 face coordinates (2, 512) (2, 512)
> >
> >
> >
> >
> >
> >
> > On Fri, Apr 29, 2016 at 12:26 PM, Guyer, Jonathan E. Dr. (Fed) <
> jonathan.gu...@nist.gov> wrote:
> > Absolutely
> >
> > > On Apr 29, 2016, at 11:42 AM, Kris Kuhlman <
> kristopher.kuhl...@gmail.com> wrote:
> > >
> > > Thanks for figuring this out. Will the patched fipy version be
> available from the github repository?
> > >
> > > Kris
> > >
> > > On Wed, Apr 27, 2016 at 3:17 PM, Keller, Trevor (Fed) <
> trevor.kel...@nist.gov> wrote:
> > > Looking into the rest of the FiPy source, we're already calling
> allgather(sendobj) in several places, and rarely calling allgather(sendobj,
> recvobj). To preserve the existing function calls (all of which are
> lower-case) and mess with the code the least, removing the recvobj argument
> appears to be the right call after all.
> > >
> > > Working on the PR.
> > >
> > > Trevor
> > >
> > > 
> > > From: fipy-boun...@nist.gov  on behalf of
> Guyer, Jonathan E. Dr. (Fed) 
> > > Sent: Wednesday, April 27, 2016 4:39:05 PM
> > > To: FIPY
> > > Subject: Re: globalValue in parallel
> > >
> > > It sounds like you're volunteering to put together the pull request
> with appropriate tests
> > >
> > > > On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) <
> trevor.kel...@nist.gov> wrote:
> > > >
> > > > The mpi4py commit mentions that the receive object is no longer
> needed for the lower-case form of the commands. Browsing the full source
> shows that the upper-case commands retain both the send and receive
> objects. To avoid deviating too far from the MPI standard, I'd like to
> suggest changing the case (Allgather instead of allgather), rather than
> dropping buffers, in our mpi4pyCommWrapper.py.
> > > >
> > > > Trevor
> > > >
> > > >
> > > > 
> > > > From: fipy-boun...@nist.gov  on behalf of
> Guyer, Jonathan E. Dr. (Fed) 
> > > > Sent: Wednes

Re: globalValue in parallel

2016-05-05 Thread Kris Kuhlman
I believe I made the change discussed above to my local version of fipy,
and I no longer get the error when calling the globalValue attribute of a
mesh.

I now notice that the .globalValue of a cellVariable seems to work as I
would imagine, while for faceVariables the .globalValue still doesn't
give me the whole domain.

The attached script prints the .shape of .value and .globalValue on each
processor (see output below).

Is something wrong, or is there a different/better way to do this?

Thanks,

Kris

---

> mpirun -n 1 python test.py
hello from 0 out of 1
0 cellVariable (400,) (400,)

0 faceVariable (840,) (840,)

0 center coordinates (2, 400) (2, 400)

0 face coordinates (2, 840) (2, 840)

>mpirun -n 2 python test.py
hello from 0 out of 2
hello from 1 out of 2
0 cellVariable (240,) (400,)

1 cellVariable (240,) (400,)

0 faceVariable (512,) (512,)

1 faceVariable (512,) (512,)

1 center coordinates (2, 240) (2, 400)

0 center coordinates (2, 240) (2, 400)

0 face coordinates (2, 512) (2, 512)
1 face coordinates (2, 512) (2, 512)






On Fri, Apr 29, 2016 at 12:26 PM, Guyer, Jonathan E. Dr. (Fed) <
jonathan.gu...@nist.gov> wrote:

> Absolutely
>
> > On Apr 29, 2016, at 11:42 AM, Kris Kuhlman 
> wrote:
> >
> > Thanks for figuring this out. Will the patched fipy version be available
> from the github repository?
> >
> > Kris
> >
> > On Wed, Apr 27, 2016 at 3:17 PM, Keller, Trevor (Fed) <
> trevor.kel...@nist.gov> wrote:
> > Looking into the rest of the FiPy source, we're already calling
> allgather(sendobj) in several places, and rarely calling allgather(sendobj,
> recvobj). To preserve the existing function calls (all of which are
> lower-case) and mess with the code the least, removing the recvobj argument
> appears to be the right call after all.
> >
> > Working on the PR.
> >
> > Trevor
> >
> > 
> > From: fipy-boun...@nist.gov  on behalf of Guyer,
> Jonathan E. Dr. (Fed) 
> > Sent: Wednesday, April 27, 2016 4:39:05 PM
> > To: FIPY
> > Subject: Re: globalValue in parallel
> >
> > It sounds like you're volunteering to put together the pull request with
> appropriate tests
> >
> > > On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) <
> trevor.kel...@nist.gov> wrote:
> > >
> > > The mpi4py commit mentions that the receive object is no longer needed
> for the lower-case form of the commands. Browsing the full source shows
> that the upper-case commands retain both the send and receive objects. To
> avoid deviating too far from the MPI standard, I'd like to suggest changing
> the case (Allgather instead of allgather), rather than dropping buffers, in
> our mpi4pyCommWrapper.py.
> > >
> > > Trevor
> > >
> > >
> > > 
> > > From: fipy-boun...@nist.gov  on behalf of
> Guyer, Jonathan E. Dr. (Fed) 
> > > Sent: Wednesday, April 27, 2016 3:53:39 PM
> > > To: FIPY
> > > Subject: Re: globalValue in parallel
> > >
> > > It looks like 'recvobj' was removed from mpi4py about two years ago:
> > >
> > >
> https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602
> > >
> > > but I'm not sure when it made it into the released version.
> > >
> > >
> > > It looks like you can safely edit
> fipy/tools/comms/mpi4pyCommWrapper.py to remove the 'recvobj' argument.
> > >
> > >
> > > We'll do some tests and push a fix as soon as possible. Thanks for
> alerting us to the issue.
> > >
> > > Filed as https://github.com/usnistgov/fipy/issues/491
> > >
> > >
> > >> On Apr 27, 2016, at 2:23 PM, Kris Kuhlman <
> kristopher.kuhl...@gmail.com> wrote:
> > >>
> > >> I built the trilinos-capable version of fipy. It seems to work for
> serial (even for a non-trivial case), but I am getting errors with more
> than one processor with a simple call to globalValue(), which I was trying
> to use to make a plot by gathering the results to procID==0
> > >>
> > >> I used the latest git version of mpi4py and trilinos. Am I doing
> something wrong (is there a different preferred way to gather things to a
> single processor to save or make plots?) or do I need to use a specific
> version of these packages and rebuild?  It seems the function is expecting
> something with a different interface or call structure.
> > >>
> > >> Kris
> > >>
> > >> python test.py
> > >> hello from 0 out of 1 [ 1.  1.  1.  ...  1.  1.]   (200 ones)

Re: globalValue in parallel

2016-04-29 Thread Kris Kuhlman
Thanks for figuring this out. Will the patched fipy version be available
from the github repository?

Kris

On Wed, Apr 27, 2016 at 3:17 PM, Keller, Trevor (Fed) <
trevor.kel...@nist.gov> wrote:

> Looking into the rest of the FiPy source, we're already calling
> allgather(sendobj) in several places, and rarely calling allgather(sendobj,
> recvobj). To preserve the existing function calls (all of which are
> lower-case) and mess with the code the least, removing the recvobj argument
> appears to be the right call after all.
>
> Working on the PR.
>
> Trevor
>
> 
> From: fipy-boun...@nist.gov  on behalf of Guyer,
> Jonathan E. Dr. (Fed) 
> Sent: Wednesday, April 27, 2016 4:39:05 PM
> To: FIPY
> Subject: Re: globalValue in parallel
>
> It sounds like you're volunteering to put together the pull request with
> appropriate tests
>
> > On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) <
> trevor.kel...@nist.gov> wrote:
> >
> > The mpi4py commit mentions that the receive object is no longer needed
> for the lower-case form of the commands. Browsing the full source shows
> that the upper-case commands retain both the send and receive objects. To
> avoid deviating too far from the MPI standard, I'd like to suggest changing
> the case (Allgather instead of allgather), rather than dropping buffers, in
> our mpi4pyCommWrapper.py.
> >
> > Trevor
> >
> >
> > 
> > From: fipy-boun...@nist.gov  on behalf of Guyer,
> Jonathan E. Dr. (Fed) 
> > Sent: Wednesday, April 27, 2016 3:53:39 PM
> > To: FIPY
> > Subject: Re: globalValue in parallel
> >
> > It looks like 'recvobj' was removed from mpi4py about two years ago:
> >
> >
> https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602
> >
> > but I'm not sure when it made it into the released version.
> >
> >
> > It looks like you can safely edit fipy/tools/comms/mpi4pyCommWrapper.py
> to remove the 'recvobj' argument.
> >
> >
> > We'll do some tests and push a fix as soon as possible. Thanks for
> alerting us to the issue.
> >
> > Filed as https://github.com/usnistgov/fipy/issues/491
> >
> >
> >> On Apr 27, 2016, at 2:23 PM, Kris Kuhlman 
> wrote:
> >>
> >> I built the trilinos-capable version of fipy. It seems to work for
> serial (even for a non-trivial case), but I am getting errors with more
> than one processor with a simple call to globalValue(), which I was trying
> to use to make a plot by gathering the results to procID==0
> >>
> >> I used the latest git version of mpi4py and trilinos. Am I doing
> something wrong (is there a different preferred way to gather things to a
> single processor to save or make plots?) or do I need to use a specific
> version of these packages and rebuild?  It seems the function is expecting
> something with a different interface or call structure.
> >>
> >> Kris
> >>
> >> python test.py
> >> hello from 0 out of 1 [ 1.  1.  1.  ...  1.  1.]   (200 ones)
> >>
> >> `--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 1 python test.py
> >> hello from 0 out of 1 [ 1.  1.  1.  ...  1.  1.]   (200 ones)

globalValue in parallel

2016-04-27 Thread Kris Kuhlman
I built the trilinos-capable version of fipy. It seems to work for serial
(even for a non-trivial case), but I am getting errors with more than one
processor with a simple call to globalValue(), which I was trying to use to
make a plot by gathering the results to procID==0.

I used the latest git version of mpi4py and trilinos. Am I doing something
wrong (is there a different preferred way to gather things to a single
processor to save or make plots?) or do I need to use a specific version of
these packages and rebuild?  It seems the function is expecting something
with a different interface or call structure.

Kris

python test.py
hello from 0 out of 1 [ 1.  1.  1.  ...  1.  1.]   (200 ones, one per cell of the 10x20 grid)

`--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 1 python test.py
hello from 0 out of 1 [ 1.  1.  1.  ...  1.  1.]   (200 ones again)

--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 2 python test.py
hello from 1 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    print 'hello from', fp.tools.parallel.procID, 'out of', fp.tools.parallel.Nproc, p.globalValue
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py", line 163, in globalValue
    self.mesh._globalNonOverlappingCellIDs)
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py", line 171, in _getGlobalValue
    globalIDs = numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py", line 75, in allgather
    return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
hello from 0 out of 2
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    print 'hello from', fp.tools.parallel.procID, 'out of', fp.tools.parallel.Nproc, p.globalValue
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/cellVariable.py", line 163, in globalValue
    self.mesh._globalNonOverlappingCellIDs)
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/variables/meshVariable.py", line 171, in _getGlobalValue
    globalIDs = numerix.concatenate(self.mesh.communicator.allgather(globalIDs))
  File "/home/klkuhlm/local/trilinos-fipy/anaconda/lib/python2.7/site-packages/fipy/tools/comms/mpi4pyCommWrapper.py", line 75, in allgather
    return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)
  File "MPI/Comm.pyx", line 1288, in mpi4py.MPI.Comm.allgather (src/mpi4py.MPI.c:109141)
TypeError: allgather() got an unexpected keyword argument 'recvobj'
---
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
---
--
mpirun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

  Process name: [[1719,1],1]
  Exit code: 1
--
import fipy as fp

m = fp.Grid2D(nx=10,ny=20)
p = fp.CellVariable(mesh=m, value=1.0)  # value inferred from the all-ones output above

print 'hello from', fp.tools.parallel.procID, 'out of', fp.tools.parallel.Nproc, p.globalValue
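
For reference, the fix discussed in this thread amounts to dropping the
'recvobj' keyword from the wrapper's pickle-based allgather; a sketch of the
method before and after (not the verbatim FiPy source):

# in fipy/tools/comms/mpi4pyCommWrapper.py (method of the comm wrapper class)

# before: newer mpi4py rejects the 'recvobj' keyword
def allgather(self, sendobj, recvobj=None):
    return self.mpi4py_comm.allgather(sendobj=sendobj, recvobj=recvobj)

# after: pass only the send object; the gathered list is returned
def allgather(self, sendobj):
    return self.mpi4py_comm.allgather(sendobj=sendobj)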

Re: understanding convection terms

2016-04-26 Thread Kris Kuhlman
Daniel,

Thank you. I am a bit surprised that the CentralDifference basically
matches the hybrid method, and is more accurate than upwind.

Kris
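
For readers of the archive, a minimal sketch of the problem with Daniel's
fix applied (the grid size and velocity here are assumptions; the real
values are in the gists linked below):

import fipy as fp

nx = 100
mesh = fp.Grid1D(nx=nx, dx=1.0 / nx)
u = fp.CellVariable(mesh=mesh, name="u")
u.constrain(0.0, mesh.facesLeft)
u.constrain(1.0, mesh.facesRight)

# steady convection-diffusion: u'' - v u' = 0
eqn = (fp.DiffusionTerm(coeff=1.0)
       - fp.CentralDifferenceConvectionTerm(coeff=(10.0,)) == 0)

# a direct solve avoids the divergence seen with the default iterative solver
eqn.solve(var=u, solver=fp.LinearLUSolver())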



On Tue, Apr 26, 2016 at 8:22 AM, Daniel Wheeler 
wrote:

> Hi Kris,
>
> Good to hear from you again and a very nicely coded example.
>
> I think it's a solver issue. I tried the LU solver and it gives
> perfect results. See
> https://gist.github.com/wd15/affe4d82cc2a189d894a7d774e4bc00b.
>
> This might suggest that we need to change the default solver when
> using the central difference scheme.
>
> Cheers,
>
> Daniel
>
> On Mon, Apr 25, 2016 at 11:54 AM, Kris Kuhlman
>  wrote:
> > I am trying to understand the convection terms available in FiPy through
> > a simple steady-state problem. I am surprised at the divergence of the
> > CentralDifferenceConvectionTerm; is this to be expected? As the
> > discretization in the mesh is made finer, the solution gets worse!?
> >
> > The problem is: \frac{\partial^2 u}{\partial x^2} - v \frac{\partial u}{\partial x} = 0
> >
> > The script at the link below compares the solution using different
> > ConvectionTerms, and plots the figures attached for different values of nx.
> > For larger values of v, the solution diverges even more and becomes
> > oscillatory.
> >
> > https://gist.github.com/klkuhlm/07f9eaf52b24e103f60ae213c0944c21
> >
> > Is this expected behavior?
> >
> > Kris
> >
> >
>
>
>
> --
> Daniel Wheeler


Re: Trouble with an Ubuntu Installation of FiPy

2015-02-06 Thread Kris Kuhlman
I use fipy on Ubuntu, but I have never tried to install the .deb version.

It sounds like you have all the prerequisites. Either use "pip" (which you
may have to install as "python-pip" via apt-get) or download the source and
install that directly: http://www.ctcms.nist.gov/fipy/INSTALLATION.html

--

It is not necessary to formally install *FiPy*, but if you wish to do so
and you are confident that all of the requisite packages have been
installed properly, you can install it by typing:

$ pip install fipy

or by unpacking the archive and typing:

$ python setup.py install

at the command line in the base *FiPy* directory.




On Thu, Feb 5, 2015 at 8:41 PM, Kyle Lawlor  wrote:

> Hi, again.
>
> A bit more information that may help diagnose the problem.
> I am using the Ubuntu OS(14.04) pre-installed python.
> I am able to import in my Python shell; numpy, scipy, matplotlib and
> pysparse.
> It seems for some reason the fipy debian file can't locate the packages.
> Currently the debian file is in my user home folder. (The standard
> directory that typically would show up opening a terminal and using ls).
> Could it be the location of this file?
>
> Cheers,
> Kyle
>
> On Thu, Feb 5, 2015 at 6:03 PM, Kyle Lawlor  wrote:
>
>> Hi all,
>>
>> I'm having trouble installing FiPy on my Ubuntu OS.
>> I've followed the installation from the website:
>>
>> $ VERSION=x.y-z  # choose the version you want
>> $ apt-get install gmsh libsuperlu3 python-central python-sparse
> >> $ curl -O http://www.ctcms.nist.gov/fipy/download/python-fipy_${VERSION}_all.deb
>> $ dpkg -i python-fipy_${VERSION}_all.deb
>>
>> There was an error in the apt-get line for python-central:
>>
>> "Package python-central is not available, but is referred to by another
>> package.
>> This may mean that the package is missing, has been obsoleted, or
>> is only available from another source
>>
>> E: Package 'python-central' has no installation candidate"
>>
>> Does this package 'python-central' not exist anymore? Is there another I
>> can download in its place?
>>
>> I went ahead with the curl and dpkg commands.
>> I got some more errors.
>>
>> "kblawlor@wizrd:~$ sudo dpkg -i python-fipy_${VERSION}_all.deb
>> Selecting previously unselected package python-fipy.
>> (Reading database ... 194884 files and directories currently installed.)
>> Preparing to unpack python-fipy_3.1-1_all.deb ...
>> Unpacking python-fipy (3.1-1) ...
>> dpkg: dependency problems prevent configuration of python-fipy:
>>  python-fipy depends on python-numpy; however:
>>   Package python-numpy is not installed.
>>  python-fipy depends on python-sparse (>= 1.1); however:
>>   Package python-sparse is not installed.
>>  python-fipy depends on python-matplotlib; however:
>>   Package python-matplotlib is not installed.
>>  python-fipy depends on python-scipy; however:
>>   Package python-scipy is not installed.
>>  python-fipy depends on gmsh; however:
>>   Package gmsh is not installed.
>>
>> dpkg: error processing package python-fipy (--install):
>>  dependency problems - leaving unconfigured
>> Errors were encountered while processing:
>>  python-fipy"
>>
>> These dependency problems are probably related to not having
>> 'python-central'.
>> Any pointers?
>>
>> Thanks,
>> Kyle
>>
>
>


Re: Parallelizing does not have any benefit, because Trilinos is slower than the standard solver (Simple Version)

2014-09-29 Thread Kris Kuhlman
There are a couple of recent IPython notebooks that demonstrate the parallel
capabilities (and issues) of FiPy. Maybe they are useful?

http://wd15.github.io/2014/02/20/parallel-fipy-in-ipython/
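
For anyone reproducing the timings quoted below, the solver suite is chosen
per run (flags per the FiPy usage docs; mesh20x20.py stands in for the
script being timed):

$ python mesh20x20.py --pysparse               # serial, standard solvers
$ python mesh20x20.py --trilinos               # serial, Trilinos solvers
$ mpirun -np 8 python mesh20x20.py --trilinos  # parallel, 8 processes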


On Mon, Sep 29, 2014 at 12:00 PM, Serbulent UNSAL 
wrote:

> Hi Again,
>
> To simplify the problem in my previous mail, I used the mesh20x20 example
> with a 400x400 mesh. The code is below, and the results show the same problem:
>
> http://pastebin.com/iUPSs6UJ
>
> With the standard solver:
> Total time for 100 steps: 00:00:41
>
> With Trilinos on a single core:
> Total time for 100 steps: 00:02:17
>
> With Trilinos on 8 cores:
> Total time for 100 steps: 00:01:19
>
> Still the same issue with Trilinos :(
>
> Thanks,
>
> Serbulent
>
>


Re: Unexpected, possibly wrong, result solving 1D unsteady heat equation with spatially-varying diffusion coefficient

2014-08-08 Thread Kris Kuhlman
Conor,

If you reduce the thermal conductivity in the insulation to about 0.1, the
FiPy solution looks about like the other model (the knee in T is at about
400 degrees C). Is there an issue with how you compute or specify this in
FiPy or the other model?

Kris
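
Concretely, the experiment amounts to one change near the top of the script
quoted at the end of this thread (a sketch; 0.1 is just the round number
mentioned above):

k_ins = 0.1                         # reduced from 0.212 W/(m*K)
D_ins = k_ins / (rho_ins * cp_ins)  # insulation diffusivity drops with it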


On Fri, Aug 8, 2014 at 9:35 AM, Conor Fleming 
wrote:

> Hi,
>
> I am using FiPy to determine the depth of heat penetration into concrete
> structures due to fire over a certain period of time. I am solving the
> unsteady heat equation on a 1D grid, and modelling various scenarios, e.g.
> time-dependent temperature boundary condition, temperature-dependent
> diffusion coefficient. For these cases, the model compares very well to
> results from other solvers (for example the commercial finite element
> solver TEMP/W). However I am having trouble modelling problems with a
> spatially-varying diffusion coefficient, in particular, a two layer model
> representing a concrete wall covered with a layer of insulating material.
>
> I have attached a number of images to clarify the issue. The first,
> 'FiPy_TEMPW_insulation_only.png' shows the temperature distribution in a
> single material (the insulation) with constant diffusivity, D_ins, when a
> constant temperature of 1200 C is applied at one boundary for 3 hours. The
> FiPy result agrees very well with the analytical solution
>
> > (1 - erf(x/(2*sqrt(D_ins*t))))*1200 + 20,
>
> taken from the examples.diffusion.mesh1D example (scaled and shifted
> appropriately) and with a numerical solution calculated using TEMP/W (using
> the same spatial and temporal discretisation, material properties and
> boundary conditions). The three results agree well, showing that the FiPy
> model is performing as expected.
>
> The second figure, 'FiPy_TEMPW_insulated_concrete.png', presents the
> temperature distribution through an insulated concrete wall (where for
> simplicity the concrete is also modelled with constant diffusivity, D_con)
> for the same surface temperature and period. There is now a considerable
> difference between the FiPy and TEMP/W predictions. The FiPy model predicts
> a lower temperature gradient in the insulation layer, which leads to higher
> temperatures throughout the domain.
>
> I am confident that the TEMP/W result is accurate, as it agrees perfectly
> with a simple explicit finite difference solution coded in FORTRAN. I have
> tried to identify any coding errors I have made in my FiPy script. I am
> aware that diffusion terms are solved at the grid faces, so when the
> diffusion coefficient is a function of a CellVariable, an appropriate
> interpolation scheme must be used to obtain sensible face values. However,
> in my case the diffusion coefficients, D_ins and D_conc, are created as
> FaceVariables and assigned constant values. I have also examined the
> effects of space and time discretisation, implicit and explicit
> DiffusionTerm, multiple sweeps per timestep, but these have made no
> significant difference.
>
> I would be very interested to hear anyone's opinion on what I might be
> doing wrong here. Also, does anyone think it is possible for FiPy to
> deliver an accurate result, and for the finite difference and finite volume
> solvers to be wrong? Below this email I have written out a minimal working
> example, which reproduces the 'FiPy' curve from figure
> 'FiPy_TEMPW_insulated_concrete.png'.
>
> Many thanks,
> Conor
>
> --
> Conor Fleming
> Research student, Civil Engineering Group
> Dept. of Engineering Science, University of Oxford
>
> ###
> # FiPy script to solve 1D heat equation for two-layer material
> #
> import fipy as fp
>
>
> nx = 45
> dx = 0.009 # grid spacing, m
> dt = 20. # timestep, s
>
>
> mesh = fp.Grid1D(nx=nx, dx=dx) # define 1D grid
>
>
> # define temperature variable, phi
> phi = fp.CellVariable(name="Fipy", mesh=mesh, value=20.)
>
>
> # insulation thermal properties
> thick_ins = 0.027 # insulation thickness
> k_ins = 0.212
> rho_ins = 900.
> cp_ins = 1000.
> D_ins = k_ins / (rho_ins * cp_ins) # insulation diffusivity
>
>
> # concrete thermal properties
> k_con = 1.5
> rho_con = 2300.
> cp_con = 1100.
> D_con = k_con / (rho_con * cp_con) # concrete diffusivity
>
>
> valueLeft = 1200. # set temperature at edge of domain
> phi.constrain(valueLeft, mesh.facesLeft) # apply boundary condition
>
>
> # create diffusion coefficient as a FaceVariable
> D = fp.FaceVariable(mesh=mesh, value=D_ins)
> X = mesh.faceCenters.value[0]
> D.setValue(D_con, where=X>thick_ins) # change diffusivity in concrete region
>
>
> # unsteady heat equation
> eqn = fp.TransientTerm() == fp.DiffusionTerm(coeff=D)
>
>
> # specify viewer
> viewer = fp.Viewer(vars=phi, datamin=0., datamax=1200.)
>
>
> # solve equation
> t = 0.
> while t < 10800: # simulate for 3 hours
>     t += dt # advance time
>     eqn.solve(var=phi, dt=dt) # solve equation
>     viewer.plot() # plot result

Re: Equation changing with time

2014-06-01 Thread Kris Kuhlman
Zebo,

I have solved this type of problem by specifying v in an if statement
within the time-marching loop.  I am not sure if there is a more elegant or
efficient way of doing it.

for step in range(steps):
    time += timeStep
    if time < 0.5:
        u.setValue((1.0,))
    elif time < 1.0:
        u.setValue((0.0,))
    else:
        u.setValue((-1.0,))
    eqn.solve(dt=timeStep)
    viewer.plot()


The whole script is at:

https://gist.github.com/klkuhlm/d97eb37da84ce7329de6
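
Factoring the velocity into a helper keeps the loop readable; a sketch with
the same logic (v_of_t is a hypothetical name):

def v_of_t(t):
    # piecewise velocity from the snippet above; substitute your own v(t)
    if t < 0.5:
        return (1.0,)
    elif t < 1.0:
        return (0.0,)
    else:
        return (-1.0,)

for step in range(steps):
    time += timeStep
    u.setValue(v_of_t(time))  # update the convection velocity in place
    eqn.solve(dt=timeStep)
    viewer.plot()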

Kris




On Sun, Jun 1, 2014 at 4:16 PM, Zebo LI  wrote:

> Hello,
> I am working on a diffusion equation, but the convection velocity will
> change with time: v=v(t). So is there any way to deal with a time-dependent
> convection term?
>
> best,
> Zebo
>


Re: parallel execution with nonUniformGrid3D

2014-02-27 Thread Kris Kuhlman
These IPython notebooks are great.  I had some setup issues getting started
on my computer (conflicting MPI versions), but that is solved.  There
appear to be some subtle differences between the version of pandas I have
and what you made the notebook with.  I can't plot the results (NaN values
in the dataframes result in some errors), but this is not an important
issue right now.

e.g., there is an issue with this line:
runtimes = runtimes.append({'N' : N, 'iterations' : iterations, 'mode' :
mode, 'suite' : suite, 'run time' : runtime})

pandas complains about needing "ignore_index=True" to do this operation
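
i.e., the call my pandas version wants is (a sketch of the one-argument change):

runtimes = runtimes.append({'N': N, 'iterations': iterations, 'mode': mode,
                            'suite': suite, 'run time': runtime},
                           ignore_index=True)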

I know this is not the place to ask about pandas, but I thought I would
point that out.

Thanks for providing these as an example.  I just need to spend some more
time trying to understand the IPython notebook and parallel-calculation
framework, to see how I can apply things from these notebooks to what I
want to do.  It is all pretty slick, but I am like a fish out of water when
I can't use my emacs keybindings :)

Kris




On Fri, Feb 21, 2014 at 11:40 AM, Daniel Wheeler
wrote:

> On Wed, Jan 22, 2014 at 2:16 PM, Kris Kuhlman
>  wrote:
> > I am trying to get a fipy problem to run faster in parallel.  I have
> > successfully installed version 3.1 of fipy and tested it with trilinos
> and
> > pysparse.  I have run the tests suggested at
>
> Hi Kris,
>
> I looked into using FiPy in parallel a little bit more, just with a
> very simple problem in 3D. The results are in two IPython notebooks.
> See
>
>
> http://nbviewer.ipython.org/github/wd15/fipy-efficiency/tree/master/notebooks/
>
> Parallel efficiency is anything between 0.25 and 0.6 for 48 processes.
>
>
> --
> Daniel Wheeler


Re: parallel execution with nonUniformGrid3D

2014-01-24 Thread Kris Kuhlman
The code I am referring to above is available at

https://gist.github.com/klkuhlm/8601770#file-3d-heat-conduction-py

There is a True/False flag near the top to switch between uniform and
non-uniform meshes for comparison. You also have to set noViewer to True for
parallel runs, since mayavi2 and MPI don't seem to play well together.

Sorry it is not reduced down to a simpler test case.

Kris


On Wed, Jan 22, 2014 at 12:16 PM, Kris Kuhlman  wrote:

> I am trying to get a fipy problem to run faster in parallel.  I have
> successfully installed version 3.1 of fipy and tested it with trilinos and
> pysparse.  I have run the tests suggested at
>
> http://www.ctcms.nist.gov/fipy/documentation/USAGE.html#parallel
>
> and everything appears to work (no errors and I am able to import the
> required libraries).
>
> My desired use case uses a nonuniform 3D grid (from
> fipy.meshes.nonUniformGrid3D import NonUniformGrid3D).  Running this in
> parallel, it takes about the same amount of time or longer.  If I switch
> the same case to a uniform 3D grid (Grid3D), parallel execution is faster
> than serial, as I expected (although not exactly optimal -- see attached
> plot).
>
> At the above link it states:
>
> *FiPy* can use *Trilinos* to solve equations in parallel. Most mesh
> classes in fipy.meshes can solve in parallel. This includes all
> "...Grid..." and "...Gmsh..." class meshes. Currently, the only remaining
> serial-only meshes are Tri2D and SkewedGrid2D.
>
> which made me think parallel should work with the NonUniformGrid3D.  Is
> there some other change that must be made to get speedups with nonuniform
> grids?
>
> I can provide the code I am working with, if this would help.
>
> Kris
>


parallel execution with nonUniformGrid3D

2014-01-22 Thread Kris Kuhlman
I am trying to get a fipy problem to run faster in parallel.  I have
successfully installed version 3.1 of fipy and tested it with trilinos and
pysparse.  I have run the tests suggested at

http://www.ctcms.nist.gov/fipy/documentation/USAGE.html#parallel

and everything appears to work (no errors and I am able to import the
required libraries).

My desired use case uses a nonuniform 3D grid (from
fipy.meshes.nonUniformGrid3D import NonUniformGrid3D).  Running this in
parallel, it takes about the same amount of time or longer.  If I switch
the same case to a uniform 3D grid (Grid3D), parallel execution is faster
than serial, as I expected (although not exactly optimal -- see attached
plot).

At the above link it states:

*FiPy* can use *Trilinos* to solve equations in parallel. Most mesh classes
in fipy.meshes can solve in parallel. This includes all "...Grid..." and
"...Gmsh..." class meshes. Currently, the only remaining serial-only meshes
are Tri2D and SkewedGrid2D.

which made me think parallel should work with the NonUniformGrid3D.  Is
there some other change that must be made to get speedups with nonuniform
grids?

I can provide the code I am working with, if this would help.

Kris
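
For reference, constructing the non-uniform mesh in question looks roughly
like this (a sketch; the spacings are made up, and nx is assumed to be
inferred from the length of dx, as with Grid1D):

import numpy as np
from fipy.meshes.nonUniformGrid3D import NonUniformGrid3D

# graded spacing in x: 10 fine cells, then 10 coarse ones
dx = np.concatenate([np.ones(10) * 0.5, np.ones(10) * 1.5])
mesh = NonUniformGrid3D(dx=dx, dy=1.0, dz=1.0, ny=20, nz=20)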


Re: piecewise functional parameter dependence

2012-10-12 Thread Kris Kuhlman
On Fri, Oct 12, 2012 at 9:03 AM, Jonathan Guyer  wrote:
> The script below illustrates the way I would do this.
>
> Note that it crashes once h == hs because C is then zero and the
> DiffusionTerm gets an infinite coefficient. Multiplying the equation
> through by C does not work for some reason; Wheeler might know why.

Thanks! That certainly is simpler. I had seen something like that
approach in the archives of the mailing list, but without the
conditions incorporated. I was confused by the rules or convention
that FiPy uses to do lazy evaluation of expressions - allowing
functional dependence. I guess I assumed it also handled functions
lazily.

I had started off with C multiplied through, and I was having problems
with the convection term.  I believe the remaining problems are
related to my expressions for the parameters, and not their
implementation.

Kris
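
A tiny demonstration of the lazy-evaluation point (a sketch): FiPy records
an expression graph built from Variables once and re-evaluates it on demand,
which is why a print inside a plain Python parameter function fires only at
setup.

import fipy as fp

mesh = fp.Grid1D(nx=3)
h = fp.CellVariable(mesh=mesh, value=-100.0)

expr = 2 * h + 1     # builds an expression graph; nothing is evaluated yet
print(expr.value)    # evaluates now: [-199. -199. -199.]
h.setValue(-1.0)
print(expr.value)    # re-evaluates automatically: [-1. -1. -1.]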


piecewise functional parameter dependence

2012-10-11 Thread Kris Kuhlman
FiPy list,

I am trying to simulate a non-linear problem with piecewise-defined
parameters, depending on the range of the dependent variable.

What is the best way to specify this?  In my example below, I used a
function and an intermediate CellVariable, using the "where" argument.
 I have also tried specifying the functions as one-liners using the
numerix.where(h>hc,, ) function, but this
produces an error because the function does not return the right type
of result.

I have not seen any examples with this type of behavior. Is there a
best way to do this? The way I am doing it seems sub-optimal.

I am also confused because putting a debugging print statement into
the functions that define the parameters (e.g., C(h)) only prints out
the message once when the problem is first set up. Are these functions
not getting called each time to re-compute the parameters as the main
variables are changing? Is FiPy stripping out or caching the print
statements?

I appreciate any help or pointers to applicable documentation and examples.

Kris

--

from fipy import *
import numpy as np

L = 10.0
nx = 75
dx = L/nx
timeStep = dx/10.0

alpha = 0.5
hs = -1.0/alpha
n = 0.2
expt = -(2 + 3*n)
Ks = 10.0
qs = 0.33
qr = 0.13

steps = 150
mesh = Grid1D(dx=dx, nx=nx)

h = CellVariable(name="$\psi$", mesh=mesh, value=-100.0, hasOld=True)
Se = CellVariable(name="$S_e$", mesh=mesh, value=0.0)

h.constrain(hs, where=mesh.facesLeft)
h.faceGrad.constrain((5*Ks,), where=mesh.facesRight)

def C(h):
r = CellVariable(mesh=mesh)
result = -(qs-qr)*n*(-alpha*h)**(-n)/h

r.setValue(result)
r.setValue(0.0, where=h>=hs)
return r

def K(h):
r = CellVariable(mesh=mesh)
result = Ks*(-alpha*h)**expt

r.setValue(result)
r.setValue(Ks, where=h>=hs)
return r

def fSe(h):
r = CellVariable(mesh=mesh)
result = qr + (qs-qr)*(-alpha*h)**(-n)

r.setValue(result)
r.setValue(qs, where=h>=hs)
return r


# Richards' equation (positive up/right)
Req = (TransientTerm(coeff=1.0) ==
       DiffusionTerm(coeff=K(h)/C(h)) +
       VanLeerConvectionTerm(coeff=[1.0]))

viewer = Viewer(vars=Se, datamin=qr, datamax=qs+0.02)
viewer.plot()

for step in range(steps):
h.updateOld()
Req.solve(dt=timeStep,var=h)

res = 100.0
while res > 1.0E-5:
res = Req.sweep(dt=timeStep,var=h)

Se.setValue(fSe(h))
viewer.plot()