Adrian -

You're running examples.cahnHilliard.mesh2DCoupled? There are a couple of 
issues with the example as posted:

- There is a sign error in this example. It was fixed over a year ago, but 
never merged to master or released as a tarball. You can run our develop 
branch, or manually apply the changes at 
https://github.com/usnistgov/fipy/commit/8a1f81da5a3af6774c7803aed98aa38398904492

- The system domain in that example is very small. Parallelizing it will result 
in a large proportion of overlapping cells, so it won't scale well.

- If you make the system bigger, you'll quickly find that the default Trilinos 
solver/preconditioner for this problem (GMRES with 
dynamic-domain-decomposition) doesn't converge. No errors are generated, but 
the solver runs to its maximum iteration count and the solution doesn't change. 
If you instantiate a GMRES solver with a Jacobi preconditioner, this problem 
should be resolved.
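
For concreteness, a minimal sketch of that solver configuration (assuming a 
FiPy install built against Trilinos; the class names below are FiPy's Trilinos 
solver and preconditioner wrappers, and `eq`/`phi`/`dt` stand in for whatever 
your script already defines):

```python
# Sketch: explicitly pair GMRES with a Jacobi preconditioner instead of
# relying on the default preconditioner (assumes FiPy + Trilinos).
from fipy.solvers.trilinos import LinearGMRESSolver
from fipy.solvers.trilinos.preconditioners import JacobiPreconditioner

solver = LinearGMRESSolver(precon=JacobiPreconditioner())

# then, inside your time-stepping loop, pass it to solve()/sweep():
# eq.solve(var=phi, dt=dt, solver=solver)
```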

Hopefully this combination of changes will show some parallel speedup.

I'm presently (haphazardly) doing some solver benchmarking and am working my 
way toward both parallel and coupled-CH benchmarking, which I'll then 
incorporate in the documentation and/or in the default settings. It's taking me 
a while to get there, though.

As far as Anaconda goes, when did you try the Mac OS X installation? Those 
instructions were updated about three weeks ago and work for me (Mac is my 
primary machine), and they worked for a couple of people I was at a workshop 
with. If you've tried this since then, I'd appreciate knowing what doesn't work.

- Jon

> On Feb 6, 2017, at 2:09 PM, Adrian Jacobo <ajac...@mail.rockefeller.edu> 
> wrote:
> 
> Hi,
> 
>  I'm trying to speed up the Cahn-Hilliard example by running it in 
> parallel, but the running time is always the same regardless of how many 
> processors I use. I'm using FiPy on Linux installed from Anaconda (I 
> tried the same installation on OS X but Trilinos doesn't work), following 
> the instructions on the website. I ran the parallel.py example and the 
> output seems to indicate that Trilinos is working and correctly 
> communicating with MPI. I'm running a 500x500 grid, and I tried changing 
> the size, but I don't see any speedup from running in parallel, as if 
> each thread were integrating the whole grid. Any ideas?
> 
> Best,
> Adrian.
> _______________________________________________
> fipy mailing list
> fipy@nist.gov
> http://www.ctcms.nist.gov/fipy
>  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]

