Re: Problem in dump.write for vector variables

2019-12-19 Thread Keller, Trevor (Fed) via fipy
The error message indicates that `dump.read` got something unexpected
-- namely, a `value` field -- and threw.

Naïvely, dumping `phi.faceGrad.value` instead of `phi.faceGrad` appears to do the trick.
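
For the record, a minimal sketch of that workaround (a guess at idiomatic usage, not a tested recipe) writes the raw array held by the FaceVariable, so nothing FiPy-specific has to be unpickled on read:

from fipy import CellVariable, Grid3D
from fipy.tools import dump

mesh = Grid3D(nx=10, ny=10, nz=10)
phi = CellVariable(mesh=mesh, value=mesh.x * mesh.y)

# store the plain numpy array, not the FaceVariable object itself
dump.write({'E': phi.faceGrad.value}, filename='prova.gz', extension='.gz')
fitxer = dump.read('prova.gz')  # fitxer['E'] comes back as a bare array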

-- 
Trevor Keller, Ph.D.
Materials Science and Engineering Division
National Institute of Standards and Technology
100 Bureau Dr. MS 8550; Gaithersburg, MD 20899
Office: 223/A131 or (301) 975-2889

On Thu, Dec 19, 2019 at 04:26:07PM +0100, Marcel UJI wrote:
>I'm getting errors dumping vector variables in fipy. This minimal code
>shows the problem:
> 
>from fipy import *
> 
>mesh=Grid3D(nx=10,ny=10,nz=10)
> 
>phi=CellVariable(mesh=mesh,value=(mesh.x*mesh.y))
> 
>dump.write({'E' : (phi.faceGrad)},filename='prova.gz',extension='.gz')
> 
>fitxer=dump.read('prova.gz')
> 
>produces:
> 
>---
>
>TypeError Traceback (most recent call last)
> in ()
>> 1 fitxer=dump.read('prova.gz')
>/home/marcel/.local/lib/python2.7/site-packages/fipy/tools/dump.pyc in
>read(filename, fileobject, communicator, mesh_unmangle)
>123 unpickler.find_global = find_class
>124
>--> 125 return unpickler.load()
>126
>127 def _test():
>/usr/lib/python2.7/pickle.pyc in load(self)
>862 while 1:
>863 key = read(1)
>--> 864 dispatch[key](self)
>865 except _Stop, stopinst:
>866 return stopinst.value
>/usr/lib/python2.7/pickle.pyc in load_build(self)
>   1221 setstate = getattr(inst, "__setstate__", None)
>   1222 if setstate:
>-> 1223 setstate(state)
>   1224 return
>   1225 slotstate = None
>/home/marcel/.local/lib/python2.7/site-packages/fipy/variables/variable
>.pyc in __setstate__(self, dict)
>   1525 self._refcount = sys.getrefcount(self)
>   1526
>-> 1527 self.__init__(**dict)
>   1528
>   1529 def _test(self):
>TypeError: __init__() got an unexpected keyword argument 'value'
> 
>Any idea on what is wrong in this code??
> 
>Marcel
> --
> Dr. Marcel Aguilella-Arzo
> Professor Titular d'Universitat, Física Aplicada
> Coordinador de la Subcomissió d'Especialitat de Ciències Experimentals i Tecnologia
> del Màster Universitari en Professor d'Educació Secundària Obligatòria i Batxillerat,
> Formació Professional i Ensenyament d'Idiomes
> Departament de Física
> Escola Superior de Tecnologia i Ciències Experimentals
> Universitat Jaume I
> Av. Sos Baynat, s/n
> 12071 Castelló de la Plana (Spain)
> +34 964 728 046
> a...@fca.uji.es



___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


Re: FiPy on Linux

2018-07-20 Thread Keller, Trevor (Fed)
Hi Carsten,


Unfortunately, I can confirm that the Conda-based process does not currently work on Debian 9.


$ ./Miniconda3-latest-Linux-x86_64.sh

$ cat /absolute/path/to/miniconda/etc/profile.d/conda.sh >> ~/.bashrc

$ . ~/.bashrc

$ conda update conda

$ conda create --name fipy --channel guyer --channel conda-forge python=2.7 numpy scipy gmsh pysparse mpi4py matplotlib mayavi fipy trilinos weave

$ conda activate fipy

$ python

Python 2.7.15 | packaged by conda-forge | (default, May  8 2018, 14:46:53)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import fipy
Traceback (most recent call last):
  File "", line 1, in 
ImportError: No module named fipy
>>>


I can partially work around this problem by hybridizing the Conda and GitHub installation methods, i.e. let Conda install the prereqs, then install FiPy from source.

$ conda activate fipy

$ conda install -n fipy gmsh

$ git clone https://github.com/usnistgov/fipy

$ cd fipy

$ python setup.py install

$ python
Python 2.7.15 | packaged by conda-forge | (default, May  8 2018, 14:46:53)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-15)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from fipy import test
>>> test()
...
--
Ran 347 tests in 4.113s

OK
!!!
Skipped 105 doctest examples because `gmsh` cannot be found on the $PATH
Skipped 83 doctest examples because the `tvtk` package cannot be imported
!!!
>>>

The failure to recognize gmsh -- which is most definitely present on $PATH -- 
is vexing.
Thanks for bringing this to our attention.
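
For anyone else chasing this, a quick sanity check of what the interpreter itself resolves (just a generic PATH lookup, not FiPy's own gmsh detection logic):

from distutils.spawn import find_executable  # works on Python 2.7 and 3.x

print(find_executable("gmsh"))  # prints the resolved gmsh path, or None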


Trevor

From: fipy-boun...@nist.gov  on behalf of Carsten 
Langrock 
Sent: Thursday, July 19, 2018 9:20:54 PM
To: FIPY
Subject: FiPy on Linux

Hi,

Since the macOS install didn’t result in much, I tried it under Linux. No luck, 
it’s actually even worse. I cannot import fipy without generating errors. It 
appears to hang on

‘Could not import any solver package’

I saw that some reported the same problem but I didn’t see a solution. SciPy is 
installed, of course. This is following the recommended installation modality 
using miniconda.

Carsten

_
Dipl.-Phys. Carsten Langrock, Ph.D.

Senior Research Scientist
Edward L. Ginzton Laboratory, Rm. 202
Stanford University

348 Via Pueblo Mall
94305 Stanford, CA

Tel. (650) 723-0464
Fax (650) 723-2666

Ginzton Lab Shipping Address:
James and Anna Marie Spilker Engineering and Applied Sciences Building
04-040
348 Via Pueblo Mall
94305 Stanford, CA
_
___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


Re: Implementation of KKS-model in FiPy

2017-09-15 Thread Keller, Trevor (Fed)
Hi Anders,


Your interpretation is correct: when $\phi = 1$, $c = c_S$; and when $\phi = 
0$, $c = c_L$.

The lookup-table visualization does show $c_S=c_L=0$ when $\phi=0$ and 
$c_S=c_L=1$ when $\phi=1$, which is not correct. Looking at the linescan plots 
further down in the ipynb file, when $\phi=0$, $c\neq 0$ and when $\phi=1$, 
$c\neq 1$, so something inconsistent is going on -- hopefully in the 
visualization. I will take a closer look: thank you for bringing this 
discrepancy to my attention.


Trevor


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Anders 
Ericsson <anders.erics...@solid.lth.se>
Sent: Thursday, September 14, 2017 11:02:16 AM
To: FIPY
Subject: Re: Implementation of KKS-model in FiPy


Hi Trevor,


I have a question regarding the LookupTable that is being generated from the 
code in https://github.com/tkphd/KKS-binary-solidification.

The code is generated from the conditions that $c = h(\phi)c_S + (1 - 
h(\phi))c_L$ and $\frac{\partial f_L}{\partial c_L} = \frac{\partial 
f_S}{\partial c_S}$, which implies that $c = c_S$ when $\phi = 1$ and $c = c_L$ 
when $\phi = 0$.


But in the generated table it seems like $c_S = c_L = 0$ (blue contour) when 
$\phi = 0$, while $c$ varies between 0 to 1. The same thing happens on the 
opposite side of the plots, when $\phi = 1$ and $c_S = c_L = 1$ (red contour) 
while $c$ varies between 0 to 1.


Is this wrong or am I interpreting the table in a wrong way? I have attached an 
image of the plots for convenience.


Thank you and best regards,


Anders


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Anders 
Ericsson <anders.erics...@solid.lth.se>
Sent: Monday, September 11, 2017 11:08:57 AM
To: FIPY
Subject: Re: Implementation of KKS-model in FiPy


Hi Trevor and Daniel,


Thank you for your answers; it's very much appreciated.


I will have a deeper look at the references you sent me and get back to you in 
case I have any further questions.


Best regards,

Anders


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Keller, Trevor 
(Fed) <trevor.kel...@nist.gov>
Sent: Friday, September 8, 2017 9:09:35 PM
To: FIPY
Subject: Re: Implementation of KKS-model in FiPy


Hi Anders,


I have an implementation of the binary KKS solidification model written up in a 
repository, https://github.com/tkphd/KKS-binary-solidification. It's based on 
Phase-Field Methods in Materials Science and Engineering by Provatas and Elder, 
chapter 6.9 and appendix C.3. It's a good introduction to the numerical 
approach. A more involved KKS model (ternary alloy with three phases) is 
written up in another repository, 
https://github.com/usnistgov/phasefield-precipitate-aging, but the 
implementation does not involve FiPy.


As Daniel mentioned, the equations of motion $\frac{\partial c}{\partial t}$ 
and $\frac{\partial \phi_i}{\partial t}$ depend on the fictitious compositions 
$c_j$ in each phase, but since the $c_j$ are equilibrium values, their solution 
is time-independent. Therefore, the standard approach presented by Provatas and 
Elder is to use a Newton-Raphson solver that takes $c$ and $\phi_i$ as constant 
inputs, and iteratively solves for the $c_j$ that satisfy the constraints on 
$c$ and $\mu$. For relatively well-behaved systems, as in the binary KKS 
repository, you can build a lookup table before launching the simulation and 
simply interpolate between known values instead of actually solving the 
constrained equilibrium at each point during the simulation. For your model, 
such a lookup table would be a vector-valued 2D matrix with n+1 fictitious 
compositions per entry. Building it may take some time, but the runtime 
performance can be quite good with this approach.
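
To make the lookup-table idea concrete, here is a hedged sketch for the binary case, using made-up parabolic free energies and the usual interpolation polynomial (illustrative only, not the code from the linked repository):

import numpy as np
from scipy.optimize import fsolve

def h(phi):  # interpolation function h(phi)
    return phi**3 * (6.0*phi**2 - 15.0*phi + 10.0)

def dfS(cS):  # assumed parabolic free-energy slope, solid
    return 2.0 * (cS - 0.7)

def dfL(cL):  # assumed parabolic free-energy slope, liquid
    return 2.0 * (cL - 0.3)

def residuals(x, c, phi):
    cS, cL = x
    return [h(phi)*cS + (1.0 - h(phi))*cL - c,  # mass balance: c = h*cS + (1-h)*cL
            dfS(cS) - dfL(cL)]                  # equal diffusion potentials

def build_table(nphi=101, nc=101):
    table = np.zeros((nphi, nc, 2))  # (cS, cL) at each (phi, c) node
    for i, phi in enumerate(np.linspace(0.0, 1.0, nphi)):
        for j, c in enumerate(np.linspace(0.0, 1.0, nc)):
            table[i, j] = fsolve(residuals, x0=[c, c], args=(c, phi))
    return table

At runtime you would interpolate into the table rather than calling the nonlinear solver at every cell and time step.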

I hope that helps. Jon Guyer may be able to comment more clearly on the FiPy 
implementation details.

Best regards,
Trevor



Trevor Keller, Ph.D.
Materials Science and Engineering Division
National Institute of Standards and Technology
100 Bureau Dr. MS 8550; Gaithersburg, MD 20899
Office: 223/A131 or (301) 975-2889



From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Daniel Wheeler 
<daniel.wheel...@gmail.com>
Sent: Friday, September 8, 2017 11:37:20 AM
To: FIPY
Subject: Re: Implementation of KKS-model in FiPy

Hi Anders,

Just to simplify this, would the following help. Let's consider

\frac{\partial c_1}{\partial t} = F(c_1, c_2, c_3) \\
\frac{\partial c_2}{\partial t} = G(c_1, c_2, c_3) \\
c_3 = H(c_1, c_2)

Basically, you have two variables solved by PDEs and a third solved by
an equation that is independent of time. Would understanding that help
you with the larger problem?
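
In FiPy terms, a minimal sketch of that structure could look like the following, with F and G replaced by placeholder diffusion terms and H by a placeholder product (purely illustrative, not the equations of Anders' model):

from fipy import CellVariable, Grid1D, TransientTerm, DiffusionTerm

mesh = Grid1D(nx=100, dx=0.01)
c1 = CellVariable(name="c1", mesh=mesh, value=0.5, hasOld=True)
c2 = CellVariable(name="c2", mesh=mesh, value=0.5, hasOld=True)
c3 = CellVariable(name="c3", mesh=mesh, value=0.0)

# the two time-dependent equations (stand-ins for F and G)
eq1 = TransientTerm(var=c1) == DiffusionTerm(coeff=1.0, var=c1)
eq2 = TransientTerm(var=c2) == DiffusionTerm(coeff=0.5, var=c2)
eq = eq1 & eq2

dt = 1.0e-4
for step in range(10):
    c1.updateOld()
    c2.updateOld()
    eq.sweep(dt=dt)
    c3.setValue(c1.value * c2.value)  # c3 = H(c1, c2), updated algebraically, no PDE solve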

Cheers,

Daniel

On Thu, Sep 7, 2017 at 9:56 AM, Anders Ericsson
<anders.erics...@solid.lth.se> wrote:
> Dear all,
>
>
> I currently am working on a phase-f

Re: trilinos solver error

2016-12-02 Thread Keller, Trevor (Fed)
As far as I know, yes. There's also a good rundown of build options on SCOREC:

https://redmine.scorec.rpi.edu/projects/albany-rpi/wiki/Installing_Trilinos


Trevor


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Daniel Lewis 
<lewi...@rpi.edu>
Sent: Friday, December 2, 2016 12:52:47 PM
To: FIPY
Subject: Re: trilinos solver error

Trevor,

Are your previous build instructions still current?  Just wondering if I can 
reproduce either of these outcomes?

Dan

On Dec 2, 2016, at 11:39 AM, Keller, Trevor (Fed) <trevor.kel...@nist.gov> 
wrote:

> Hi Shaun,
>
> I am unable to reproduce this error in my environment, using FiPy version 
> 3.1.dev134+g64f7866 and PyTrilinos version 12.3 (Dev). Here's the output I 
> get:
>
> $ mpirun -np 1 python examples/cahnHilliard/mesh2DCoupled.py --trilinos
> .../lib/python2.7/site-packages/matplotlib/collections.py:590: FutureWarning: 
> elementwise comparison failed; returning scalar instead, but in the future 
> will perform elementwise comparison
>   if self._edgecolors == str('face'):
>
> Coupled equations. Press <return> to proceed...
> False
>
> Does the example successfully run for you without Trilinos? without MPI? This 
> looks like a build environment problem, but I'm not sure exactly where.
>
> Trevor
> From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Shaun Mucalo 
> <shaun.muc...@pg.canterbury.ac.nz>
> Sent: Thursday, December 1, 2016 7:37:34 PM
> To: FIPY
> Subject: Fwd: trilinos solver error
>
>
>
> Hello,
> I am trying to solve a problem with fipy using trilinos
> solvers. However, when I run a problem with a 2D mesh I get the
> following trilinos error:
>
>
> $ python fipy/examples/cahnHilliard/mesh2DCoupled.py --trilinos
> Error in dorgqr on 0 row (dims are 800, 1)
>
> Error in CoarsenMIS: dorgqr returned a non-zero
> n--
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --
>
>
> I get a similar error when running the doctests (output of "mpirun -np 1
> python setup.py test --trilinos" is attached). I have manually tested
> ~20 of the examples, it appears the 1D cases work fine and fail on the
> 2D cases, for any value of -np.
>
> Incidentally, when running the tests at -np > 1, they stall at
> "Doctest:fipy.terms.abstractDiffusionTerm._AbstractDiffusionTerm._buildMatrix...".
> I am not sure if this is related.
>
> Does anyone have any advice? Is this unique to my installation, or is it
> a known issue? Please let me know if there is any other information I
> can provide that could be useful.
>
> Regards,
> Shaun Mucalo


___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


Re: trilinos solver error

2016-12-02 Thread Keller, Trevor (Fed)
Hi Shaun,


I am unable to reproduce this error in my environment, using FiPy version 
3.1.dev134+g64f7866 and PyTrilinos version 12.3 (Dev). Here's the output I 
get:


$ mpirun -np 1 python examples/cahnHilliard/mesh2DCoupled.py --trilinos

.../lib/python2.7/site-packages/matplotlib/collections.py:590: FutureWarning: 
elementwise comparison failed; returning scalar instead, but in the future will 
perform elementwise comparison
  if self._edgecolors == str('face'):

Coupled equations. Press <return> to proceed...
False


Does the example successfully run for you without Trilinos? without MPI? This 
looks like a build environment problem, but I'm not sure exactly where.


Trevor


From: fipy-boun...@nist.gov  on behalf of Shaun Mucalo 

Sent: Thursday, December 1, 2016 7:37:34 PM
To: FIPY
Subject: Fwd: trilinos solver error



Hello,
I am trying to solve a problem with fipy using trilinos
solvers. However, when I run a problem with a 2D mesh I get the
following trilinos error:


$ python fipy/examples/cahnHilliard/mesh2DCoupled.py --trilinos
Error in dorgqr on 0 row (dims are 800, 1)

Error in CoarsenMIS: dorgqr returned a non-zero
n--
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--


I get a similar error when running the doctests (output of "mpirun -np 1
python setup.py test --trilinos" is attached). I have manually tested
~20 of the examples, it appears the 1D cases work fine and fail on the
2D cases, for any value of -np.

Incidentally, when running the tests at -np > 1, they stall at
"Doctest:fipy.terms.abstractDiffusionTerm._AbstractDiffusionTerm._buildMatrix...".
I am not sure if this is related.

Does anyone have any advice? Is this unique to my installation, or is it
a known issue? Please let me know if there is any other information I
can provide that could be useful.

Regards,
Shaun Mucalo
___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


Re: Memory Leaks with Trilinos

2016-10-12 Thread Keller, Trevor (Fed)
Hi Mike,


Thanks for testing all those combinations! Sorry none worked though. For me, 
the specific combination of Trilinos 11.10.2 with swig 2.0.8 closed the memory 
leak back in March. I'm surprised the latest versions (6 months later) still 
create a leak. It makes me suspect we're doing something wrong, perhaps 
neglecting some important module in Trilinos? I'll try building the latest 
release and, if the leak persists on my machine as well, try a bit harder to 
track it down. If nothing else, I'll file a bug report and try to get more 
developers' eyes on this.


My software environment is very similar to yours: Boost v1.55, OpenMPI 1.6.5, 
and gcc 4.9.2.


With regards,

Trevor



Trevor Keller, Ph.D.
Materials Science and Engineering Division
National Institute of Standards and Technology
100 Bureau Dr. MS 8550; Gaithersburg, MD 20899
Office: 223/A131 or (301) 975-2889



From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Michael Waters 
<waters.mik...@gmail.com>
Sent: Tuesday, October 11, 2016 4:55:51 PM
To: FIPY
Subject: Re: Memory Leaks with Trilinos


Hi Trevor,

My mystery deepens. Today, I've compiled and tried the following combinations 
of Trilinos/Swig:

12.8.1/3.0.2

11.14.3/3.0.2

11.12.1/3.0.2

11.10.1/2.0.8

11.2.5/2.0.8

All of these combinations leak memory at about the same rate. I am using Boost 
1.54, Openmpi 1.6.5, and GCC 4.8.4. Are you using similar?

Thanks,

-Mike

On 03/30/2016 05:06 PM, Keller, Trevor (Fed) wrote:

Mike & Jon,

Running 25 steps on Debian wheezy using PyTrilinos 11.10.2 and swig 2.0.8, 
monitoring with memory-profiler, I do not see a memory leak (see attachment): 
the simulation is fairly steady at 8GB RAM. The same code, but using PyTrilinos 
12.1.0, ramps up to 16GB in the same simulation time.

Later revisions of Trilinos 12 remain defiant (though compiling swig-3.0.8 has improved matters). If one of them compiles and works, I will let you know. If not, Mike, could you please test the next incremental version (11.10.2)?

Trevor



From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Michael Waters <waters.mik...@gmail.com>
Sent: Wednesday, March 30, 2016 4:36 PM
To: FIPY
Subject: Re: Memory Leaks with Trilinos

Hi Jon,

I just compiled an old version of swig (2.0.8) and compiled Trilinos
(11.10.1) against that. Sadly, I am still having the leak.

I am out of ideas for the day... and should be looking for a post doc
anyway.

Thanks,
-mike


On 3/30/16 3:32 PM, Guyer, Jonathan E. Dr. (Fed) wrote:


No worries. If building trilinos doesn't blindside you with something 
unexpected and unpleasant, you're not doing it right.

I have a conda recipe at 
https://github.com/guyer/conda-recipes/tree/trilinos_upgrade_11_10_2/trilinos 
that has worked for me to build 11.10.2 on both OS X and Docker (Debian?). I 
haven't tried to adjust it to 12.x, yet.


On Mar 30, 2016, at 2:42 PM, Michael Waters <waters.mik...@gmail.com> wrote:



Hi Jon,

I was just reviewing my version of Trilinos 11.10 and discovered that there is 
no way that I compiled it last night after exercising. It has unsatisfied 
dependencies on my machine. So I must apologize, I must have been more tired 
than I thought.

Sorry for the error!
-Mike Waters

On 3/30/16 11:52 AM, Guyer, Jonathan E. Dr. (Fed) wrote:


It looked to me like steps and accuracy were the way to do it, but my runs 
finish in one step, so I was confused. When I change to accuracy = 10.0**-6, it 
takes 15 steps, but still no leak (note, the hiccup in RSS and in ELAPSED time 
is because I put my laptop to sleep for awhile, but VSIZE is rock-steady).

The fact that things never (or slowly) converge for you and Trevor, in addition 
to the leak, makes me wonder if Trilinos seriously broke something between 11.x 
and 12.x. Trevor's been struggling to build 12.4. I'll try to find time to do 
the same.

In case it matters, I'm running on OS X. What's your system?

- Jon

On Mar 29, 2016, at 3:59 PM, Michael Waters <waters.mik...@gmail.com> wrote:



When I did my testing and made those graphs, I ran Trilinos in serial.
Syrupy didn't seem to track the other processes memory. I watched in
real time as the parallel version ate all my ram though.

To make the program run longer while not changing the memory:

steps = 100  # increase this, (limits the number of self-consistent
iterations)
accuracy = 10.0**-5 # make this number smaller, (relative energy
eigenvalue change for being considered converged )
initial_solver_iterations_per_step = 7 # reduce this to 1,  (number of
solver iterations per self-consistent iteration; too small and it's slow,
too high and the solutions are not stable)

I did those tests on a 

Re: Diffusion-Convection-Reactive Chemisty

2016-07-13 Thread Keller, Trevor (Fed)
Staring at the code, it looks like you're defining initial values (C_a_0, 
C_b_0, ..., V0) as CellVariables instead of scalars (or Variables), which is 
probably not what you meant. Substituting the following seems to work better:

#%% - INITIATION VALUES
C_a_0 = 0
C_b_0 = 0
C_c_0 = 0
C_d_0 = 0
V0 = 409.5183981003883 #V_flow/A_crossSection #m/hr

#%% - VARIABLES
C_a= CellVariable(name="Concentration of A", mesh=mesh, hasOld=True)
C_b= CellVariable(name="Concentration of B", mesh=mesh, hasOld=True)
C_c= CellVariable(name="Concentration of C", mesh=mesh, hasOld=True)
C_d= CellVariable(name="Concentration of D", mesh=mesh, hasOld=True)
V  = CellVariable(name="Velocity", mesh=mesh, hasOld=True)

C_a.setValue(C_a_0)
C_b.setValue(C_b_0)
C_c.setValue(C_c_0)
C_d.setValue(C_d_0)
V.setValue(V0)

But this still gives errors relating to division by zero and singular matrices. 
I think this is because the last term in each of your equations (factor*r_a) 
needs to be an appropriate SourceTerm, since
r_a = -k1 * C_a
and you're solving for C_a. I substituted

#%% - DEFINE EQUATIONS
EqC_a = 
(TransientTerm(var=C_a)==-ExponentialConvectionTerm(coeff=V_vec,var=C_a) + 
DiffusionTerm(var=C_b,coeff=Da) + ImplicitSourceTerm(var=C_a,coeff=1))
EqC_b = 
(TransientTerm(var=C_b)==-ExponentialConvectionTerm(coeff=V_vec,var=C_b) + 
DiffusionTerm(var=C_b,coeff=Da) + ImplicitSourceTerm(var=C_a,coeff=11))
EqC_c = 
(TransientTerm(var=C_c)==-ExponentialConvectionTerm(coeff=V_vec,var=C_c) + 
DiffusionTerm(var=C_c,coeff=Da) + ImplicitSourceTerm(var=C_a,coeff=-17))
EqC_d = 
(TransientTerm(var=C_d)==-ExponentialConvectionTerm(coeff=V_vec,var=C_d) + 
DiffusionTerm(var=C_d,coeff=Da) + ImplicitSourceTerm(var=C_a,coeff=-8))

Eq = EqC_a & EqC_b & EqC_c & EqC_d

This still returns an error,

/users/tnk10/Downloads/fipy/fipy/tools/numerix.py:966: RuntimeWarning: overflow 
encountered in square
  return sqrt(add.reduce(arr**2))
/users/tnk10/Downloads/fipy/fipy/solvers/pysparse/linearLUSolver.py:93: 
RuntimeWarning: overflow encountered in square
  error0 = numerix.sqrt(numerix.sum((L * x - b)**2))
/users/tnk10/Downloads/fipy/fipy/solvers/pysparse/linearLUSolver.py:98: 
RuntimeWarning: overflow encountered in square
  if (numerix.sqrt(numerix.sum(errorVector**2)) / error0)  <= self.tolerance:
/users/tnk10/Downloads/fipy/fipy/solvers/pysparse/linearLUSolver.py:98: 
RuntimeWarning: invalid value encountered in double_scalars
  if (numerix.sqrt(numerix.sum(errorVector**2)) / error0)  <= self.tolerance:
/users/tnk10/Downloads/fipy/fipy/variables/faceGradVariable.py:158: 
RuntimeWarning: overflow encountered in divide
  N = (N2 - numerix.take(self.var,id1, axis=-1)) / dAP
/users/tnk10/Downloads/fipy/fipy/variables/variable.py:1105: RuntimeWarning: 
invalid value encountered in multiply
  return self._BinaryOperatorVariable(lambda a,b: a*b, other)

Assuming (perhaps incorrectly) that component A should be diffusing, I changed 
the DiffusionTerm
EqC_a = 
(TransientTerm(var=C_a)==-ExponentialConvectionTerm(coeff=V_vec,var=C_a) + 
DiffusionTerm(var=C_a,coeff=Da) + ImplicitSourceTerm(var=C_a,coeff=1))

That seems to work. The modified script is attached.
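
As a general aside (a minimal, self-contained sketch, not taken from the attached script): a first-order consumption term r = -k1*C enters the equation for C as an implicit source, e.g.

from fipy import CellVariable, Grid1D, TransientTerm, DiffusionTerm, ImplicitSourceTerm

mesh = Grid1D(nx=50, dx=0.1)
C = CellVariable(name="C", mesh=mesh, value=1.0)
D, k1 = 1.0, 0.3  # assumed diffusivity and rate constant
# dC/dt = div(D grad C) - k1*C; the sink is linear in C, so it goes in implicitly
eq = TransientTerm(var=C) == DiffusionTerm(coeff=D, var=C) + ImplicitSourceTerm(coeff=-k1, var=C)
for step in range(10):
    eq.solve(dt=0.01)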

Trevor

____
From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Keller, Trevor 
(Fed) <trevor.kel...@nist.gov>
Sent: Wednesday, July 13, 2016 4:26:38 PM
To: FIPY
Subject: Re: Diffusion-Convection-Reactive Chemisty


Is the definition of C_a_BC correct? For a 1D grid, is the behavior of

C_a.constrain(C_a_BC, where=mesh.facesRight)

with a CellVariable instead of a scalar value

C_a_BC= C_a_0*(1-X)

meaningful?


Trevor


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Daniel 
DeSantis <desan...@gmail.com>
Sent: Wednesday, July 13, 2016 4:08:50 PM
To: FIPY
Subject: Re: Diffusion-Convection-Reactive Chemisty

I'm sorry. I was trying to fix the problem, and forgot to put a line back in, 
which was masked by me not clearing a variable value for V. My apologies. 
Please try this code instead. It has the traceback I mentioned before.



Traceback (most recent call last):

  File "", line 1, in 
runfile('C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models/ConversionModel.py', wdir='C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models')

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py",
 line 699, in runfile
execfile(filename, namespace)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py",
 line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)

  File "C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models/ConversionModel.py", line 110, in 
res = Eq.sweep(d

Re: Diffusion-Convection-Reactive Chemisty

2016-07-13 Thread Keller, Trevor (Fed)
Is the definition of C_a_BC correct? For a 1D grid, is the behavior of

C_a.constrain(C_a_BC, where=mesh.facesRight)

with a CellVariable instead of a scalar value

C_a_BC= C_a_0*(1-X)

meaningful?


Trevor


From: fipy-boun...@nist.gov  on behalf of Daniel 
DeSantis 
Sent: Wednesday, July 13, 2016 4:08:50 PM
To: FIPY
Subject: Re: Diffusion-Convection-Reactive Chemisty

I'm sorry. I was trying to fix the problem, and forgot to put a line back in, 
which was masked by me not clearing a variable value for V. My apologies. 
Please try this code instead. It has the traceback I mentioned before.



Traceback (most recent call last):

  File "", line 1, in 
runfile('C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models/ConversionModel.py', wdir='C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models')

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py",
 line 699, in runfile
execfile(filename, namespace)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py",
 line 74, in execfile
exec(compile(scripttext, filename, 'exec'), glob, loc)

  File "C:/Users/ddesantis/Dropbox/PythonScripts/CFD 
Models/ConversionModel.py", line 110, in 
res = Eq.sweep(dt=dt, solver=solver)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\term.py",
 line 236, in sweep
solver = self._prepareLinearSystem(var=var, solver=solver, 
boundaryConditions=boundaryConditions, dt=dt)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\term.py",
 line 170, in _prepareLinearSystem
buildExplicitIfOther=self._buildExplcitIfOther)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\coupledBinaryTerm.py",
 line 122, in _buildAndAddMatrices
buildExplicitIfOther=buildExplicitIfOther)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\binaryTerm.py",
 line 68, in _buildAndAddMatrices
buildExplicitIfOther=buildExplicitIfOther)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\binaryTerm.py",
 line 68, in _buildAndAddMatrices
buildExplicitIfOther=buildExplicitIfOther)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\binaryTerm.py",
 line 68, in _buildAndAddMatrices
buildExplicitIfOther=buildExplicitIfOther)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\unaryTerm.py",
 line 99, in _buildAndAddMatrices
diffusionGeomCoeff=diffusionGeomCoeff)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\terms\abstractConvectionTerm.py",
 line 211, in _buildMatrix
self.constraintB =  -((1 - alpha) * var.arithmeticFaceValue * 
constraintMask * exteriorCoeff).divergence * mesh.cellVolumes

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\variables\variable.py",
 line 1151, in __mul__
return self._BinaryOperatorVariable(lambda a,b: a*b, other)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\variables\variable.py",
 line 1116, in _BinaryOperatorVariable
if not v.unit.isDimensionless() or len(v.shape) > 3:

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\variables\variable.py",
 line 255, in _getUnit
return self._extractUnit(self.value)

  File 
"C:\Users\ddesantis\AppData\Local\Continuum\Anaconda2\lib\site-packages\fipy\variables\variable.py",
 line 561, in _getValue
value[..., mask] = numerix.array(constraint.value)[..., mask]

IndexError: index 100 is out of bounds for axis 0 with size 100

On Wed, Jul 13, 2016 at 3:59 PM, Daniel Wheeler 
> wrote:
Hi Daniel,

It is giving a different error for me

Traceback (most recent call last):
  File "ConversionModel.py", line 78, in 
V.constrain(V0,mesh.facesLeft)
NameError: name 'V' is not defined

Is this the correct script?



On Wed, Jul 13, 2016 at 3:27 PM, Daniel DeSantis 
> wrote:
> Hello,
>
> I am having some trouble getting a workable convection coefficient on a
> reactive chemistry model with a vector velocity. I have reviewed several
> examples from the FiPy mailing list and tried several of the variations
> listed there. Originally, I was receiving the error that a coefficient must
> be a vector. Now I seem to be getting an error that says the index is out of
> bounds. Specific traceback is shown below, along with the code attachment.
>
> Could anyone suggest what I am doing wrong and how to fix it?
>
> I'm relatively new to Python and FiPy, specifically, so please bear with 

Re: globalValue in parallel

2016-04-27 Thread Keller, Trevor (Fed)
Looking into the rest of the FiPy source, we're already calling 
allgather(sendobj) in several places, and rarely calling allgather(sendobj, 
recvobj). To preserve the existing function calls (all of which are lower-case) 
and mess with the code the least, removing the recvobj argument appears to be 
the right call after all.
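
For context, a hedged sketch of the kind of change under discussion (the class and attribute names here are assumptions for illustration, not the actual contents of fipy/tools/comms/mpi4pyCommWrapper.py):

class Mpi4pyCommWrapper(object):
    """Sketch only: thin wrapper around an mpi4py communicator."""

    def __init__(self, mpi4py_comm):
        self.mpi4py_comm = mpi4py_comm

    def allgather(self, sendobj):
        # newer mpi4py dropped recvobj from the lower-case (pickle-based)
        # collectives, so just forward the send object
        return self.mpi4py_comm.allgather(sendobj)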

Working on the PR.

Trevor


From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Guyer, 
Jonathan E. Dr. (Fed) <jonathan.gu...@nist.gov>
Sent: Wednesday, April 27, 2016 4:39:05 PM
To: FIPY
Subject: Re: globalValue in parallel

It sounds like you're volunteering to put together the pull request with 
appropriate tests

> On Apr 27, 2016, at 4:06 PM, Keller, Trevor (Fed) <trevor.kel...@nist.gov> 
> wrote:
>
> The mpi4py commit mentions that the receive object is no longer needed for 
> the lower-case form of the commands. Browsing the full source shows that the 
> upper-case commands retain both the send and receive objects. To avoid 
> deviating too far from the MPI standard, I'd like to suggest changing the 
> case (Allgather instead of allgather), rather than dropping buffers, in our 
> mpi4pyCommWrapper.py.
>
> Trevor
>
>
> 
> From: fipy-boun...@nist.gov <fipy-boun...@nist.gov> on behalf of Guyer, 
> Jonathan E. Dr. (Fed) <jonathan.gu...@nist.gov>
> Sent: Wednesday, April 27, 2016 3:53:39 PM
> To: FIPY
> Subject: Re: globalValue in parallel
>
> It looks like 'recvobj' was removed from mpi4py about two years ago:
>
> https://bitbucket.org/mpi4py/mpi4py/commits/3d8503a11d320dd1c3030ec0dbce95f63b0ba602
>
> but I'm not sure when it made it into the released version.
>
>
> It looks like you can safely edit fipy/tools/comms/mpi4pyCommWrapper.py to 
> remove the 'recvobj' argument.
>
>
> We'll do some tests and push a fix as soon as possible. Thanks for alerting 
> us to the issue.
>
> Filed as https://github.com/usnistgov/fipy/issues/491
>
>
>> On Apr 27, 2016, at 2:23 PM, Kris Kuhlman <kristopher.kuhl...@gmail.com> 
>> wrote:
>>
>> I built the trilinos-capable version of fipy. It seems to work for serial 
>> (even for a non-trivial case), but I am getting errors with more than one 
>> processor with a simple call to globalValue(), which I was trying to use to 
>> make a plot by gathering the results to procID==0
>>
>> I used the latest git version of mpi4py and trilinos. Am I doing something 
>> wrong (is there a different preferred way to gather things to a single 
>> processor to save or make plots?) or do I need to use a specific version of 
>> these packages and rebuild?  It seems the function is expecting something 
>> with a different interface or call structure.
>>
>> Kris
>>
>> python test.py
>> hello from 0 out of 1 [ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  
>> 1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.]
>>
>> `--> ~/local/trilinos-fipy/anaconda/bin/mpirun -np 1 python test.py
>> hello from 0 out of 1 [ 1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  
>> 1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.  1.
>>  1.  1.]
>>

Re: Memory Leaks with Trilinos

2016-03-30 Thread Keller, Trevor (Fed)
Thanks Jon, building swig v. 2.0.8 and using your cmake did the trick for 
11.10.2. Running the DFT program now.



Trevor



From: fipy-boun...@nist.gov  on behalf of Guyer, 
Jonathan E. Dr. (Fed) 
Sent: Wednesday, March 30, 2016 3:32 PM
To: FIPY
Subject: Re: Memory Leaks with Trilinos

No worries. If building trilinos doesn't blindside you with something 
unexpected and unpleasant, you're not doing it right.

I have a conda recipe at 
https://github.com/guyer/conda-recipes/tree/trilinos_upgrade_11_10_2/trilinos 
that has worked for me to build 11.10.2 on both OS X and Docker (Debian?). I 
haven't tried to adjust it to 12.x, yet.


On Mar 30, 2016, at 2:42 PM, Michael Waters  wrote:

> Hi Jon,
>
> I was just reviewing my version of Trilinos 11.10 and discovered that there 
> is no way that I compiled it last night after exercising. It has unsatisfied 
> dependencies on my machine. So I must apologize, I must have been more tired 
> than I thought.
>
> Sorry for the error!
> -Mike Waters
>
> On 3/30/16 11:52 AM, Guyer, Jonathan E. Dr. (Fed) wrote:
>> It looked to me like steps and accuracy were the way to do it, but my runs 
>> finish in one step, so I was confused. When I change to accuracy = 10.0**-6, 
>> it takes 15 steps, but still no leak (note, the hiccup in RSS and in ELAPSED 
>> time is because I put my laptop to sleep for awhile, but VSIZE is 
>> rock-steady).
>>
>> The fact that things never (or slowly) converge for you and Trevor, in 
>> addition to the leak, makes me wonder if Trilinos seriously broke something 
>> between 11.x and 12.x. Trevor's been struggling to build 12.4. I'll try to 
>> find time to do the same.
>>
>> In case it matters, I'm running on OS X. What's your system?
>>
>> - Jon
>>
>> On Mar 29, 2016, at 3:59 PM, Michael Waters  wrote:
>>
>>> When I did my testing and made those graphs, I ran Trilinos in serial.
>>> Syrupy didn't seem to track the other processes memory. I watched in
>>> real time as the parallel version ate all my ram though.
>>>
>>> To make the program run longer while not changing the memory:
>>>
>>> steps = 100  # increase this, (limits the number of self-consistent
>>> iterations)
>>> accuracy = 10.0**-5 # make this number smaller, (relative energy
>>> eigenvalue change for being considered converged )
>>> initial_solver_iterations_per_step = 7 # reduce this to 1,  (number of
>>> solver iterations per self-consistent iteration; too small and it's slow,
>>> too high and the solutions are not stable)
>>>
>>> I did those tests on a machine with 128 GB of ram so I wasn't expecting
>>> any swapping.
>>>
>>> Thanks,
>>> -mike
>>>
>>>
>>> On 3/29/16 3:38 PM, Guyer, Jonathan E. Dr. (Fed) wrote:
 I guess I spoke too soon. FWIW, I'm running Trilinos version: 11.10.2.


 On Mar 29, 2016, at 3:34 PM, Guyer, Jonathan E. Dr. (Fed) 
  wrote:

> I'm not seeing a leak. The below is for trilinos. VSIZE grows to about 11 
> MiB and saturates and RSS saturates at around 5 MiB. VSIZE is more 
> relevant for tracking leaks, as RSS is deeply tied to your system's 
> swapping architecture and what else is running; either way, neither seems 
> to be leaking, but this problem does use a lot of memory.
>
> What do I need to do to get it to run longer?
>
>
>
> On Mar 25, 2016, at 7:16 PM, Michael Waters  
> wrote:
>
>> Hello,
>>
>> I still have a large memory leak when using Trilinos. I am not sure 
>> where to start looking so I made an example code that produces my 
>> problem in hopes that someone can help me.
>>
>> But! my example is cool. I implemented Density Functional Theory in FiPy!
>>
>> My code is slow, but runs in parallel and is simple (relative to most 
>> DFT codes). The example I have attached is just a lithium and hydrogen 
>> atom. The electrostatic boundary conditions are goofy but work well 
>> enough for demonstration purposes. If you set use_trilinos to True, the 
>> code will slowly use more memory. If not, it will try to use Pysparse.
>>
>> Thanks,
>> -Michael Waters

Re: Memory Leaks with Trilinos

2016-03-29 Thread Keller, Trevor (Fed)
Hi Mike,

I can confirm the leak in this code/solver system using FiPy on a Linux box. 
Some details:
FiPy: 3.1-dev131-g77851ef
PyTrilinos: 12.1 (Dev)
PySparse: 1.1.1

I commented out the solver and preconditioner specifications, using the default 
behavior when launching the script with --pysparse or --trilinos flags instead. 
I also disabled garbage collection. Memory use was monitored using 
memory-profiler, launching the script using
OMP_NUM_THREADS=6 mprof run mod-fipy-dft.py --trilinos
or
OMP_NUM_THREADS=6 mprof run mod-fipy-dft.py --pysparse

For 100 steps, the pysparse solver plateaued around 8GB, with regular spikes in 
RAM consumption while solving the equations.
For 100 steps, the pytrilinos solver just keeps gobbling up  RAM -- it had 
about 37GB at the last step.
Using the solver (LinearBicgstabSolver) and preconditioner 
(JacobiPreconditioner) you specified for Trilinos, it's a little better, 
ramping up to just 24GB after 100 steps.


Digging around a little bit, there was a patch to PyTrilinos  (for v. 12.4) in 
October meant to address some memory leaks -- I'm cloning and re-building the 
project now. Can you report your PyTrilinos version, please, using
>>> from PyTrilinos import __version__ as PyTriVer
>>> print PyTriVer

Thank you,
Trevor



Trevor Keller, Ph.D.
Materials Science and Engineering Division
National Institute of Standards and Technology
100 Bureau Dr. MS 8550; Gaithersburg, MD 20899
Office: 223/A131 or (301) 975-2889



From: fipy-boun...@nist.gov  on behalf of Michael Waters 

Sent: Monday, March 28, 2016 2:10 PM
To: FIPY
Subject: Re: Memory Leaks with Trilinos

Thanks, If you think DFT in FiPy is strange, you should see how I make
isosurfaces in POV-Ray. :)

That said, If anyone is tinkering with my example and has questions, I
am glad to answer them!

Cheers,
-Mike Waters



On 03/28/2016 01:46 PM, Guyer, Jonathan E. Dr. (Fed) wrote:
> Mike, thanks for the example, and for the rather perverse application of FiPy!
>
> I'll fiddle with this and see what I get.
>
> - Jon
>
> On Mar 25, 2016, at 7:16 PM, Michael Waters  wrote:
>
>> Hello,
>>
>> I still have a large memory leak when using Trilinos. I am not sure where to 
>> start looking so I made an example code that produces my problem in hopes 
>> that someone can help me.
>>
>> But! my example is cool. I implemented Density Functional Theory in FiPy!
>>
>> My code is slow, but runs in parallel and is simple (relative to most DFT 
>> codes). The example I have attached is just a lithium and hydrogen atom. The 
>> electrostatic boundary conditions are goofy but work well enough for 
>> demonstration purposes. If you set use_trilinos to True, the code will 
>> slowly use more memory. If not, it will try to use Pysparse.
>>
>> Thanks,
>> -Michael Waters

___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]


Re: Error: Gmsh hasn't produced any cells!

2016-03-24 Thread Keller, Trevor (Fed)
Hi Daniel,


examples/diffusion/circle.py executes cleanly for me, with FiPy 
3.1-dev131-g77851ef and Gmsh 2.11.0.

This looks like an installation problem. The up-to-date FiPy codebase should 
work smoothly with more recent versions of Gmsh than 2.0. From your post, it's 
unclear whether you've recently pulled the develop branch from github, or 
manually applied the changes in that pull request to an older release. Please 
update both Gmsh and FiPy, then post the specific error messages it returns.
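
Something along these lines will confirm which FiPy and Gmsh the environment actually picks up (assuming gmsh is on the PATH; gmsh prints its version to stderr, hence the redirect):

import subprocess
import fipy

print(fipy.__version__)
print(subprocess.check_output(["gmsh", "--version"], stderr=subprocess.STDOUT))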


Thanks,

Trevor


From: fipy-boun...@nist.gov  on behalf of Daniel 
DeSantis 
Sent: Wednesday, March 23, 2016 4:09 PM
To: FIPY
Subject: Error: Gmsh hasn't produced any cells!

Hello,

I'm getting an error when trying to run the diffusion circle example. I keep 
getting the following error:

GmshException: Gmsh hasn't produced any cells! Check your Gmsh code.

Gmsh output:


I'm running Gmsh 2.0. Any other version gives me errors with the version number 
despite having fixed the error as described here: 
https://github.com/usnistgov/fipy/pull/442


Does anyone have any suggestions on how I can get Gmsh to produce the cells 
it's supposed to? Or perhaps someone could explain a different method of 
importing a gmsh file? I don't mind creating something in gmsh and then 
importing it separately if that's easier?

Thank you!

--
Daniel DeSantis


___
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]