To change the solver/preconditioner, I recommend

>>> from fipy import LinearGMRESSolver, JacobiPreconditioner

and then, when you call solve() or sweep(), write

>>> eq.solve(phi, dt=dt,
...          solver=LinearGMRESSolver(precon=JacobiPreconditioner()))
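
You can also instantiate the solver once and reuse it across time steps; a 
rough sketch, where the loop structure, steps, and dt are just placeholders 
for whatever your script already uses:

>>> solver = LinearGMRESSolver(precon=JacobiPreconditioner())
>>> for step in range(steps):
...     eq.solve(phi, dt=dt, solver=solver)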



I see the issue you report with the script as written. Switching to GMRES with 
Jacobi preconditioning gets a speedup on my little 2-core laptop. I'm in the 
midst of profiling and rewriting our Cahn-Hilliard examples and will take this 
into account in the settings I ultimately choose.


We originally wrote the split solution to demonstrate that it could be done. 
FiPy did not always have the ability to solve coupled sets of equations, but 
it did have special-case "higher order diffusion" operators. Once we 
implemented coupled equations, it became clear that splitting is a much more 
general approach. Higher order diffusion (a DiffusionTerm with multiple 
coefficients) is still available, but it doesn't generalize well, it has some 
unclear issues with boundary conditions, and it cannot be coupled with other 
equations. Whether you split and couple or solve with higher order terms, the 
matrix is more poorly conditioned than for typical 2nd order equations. I don't 
know whether one converges better than the other, in general.
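
For concreteness, here is a rough sketch of the two formulations, loosely 
following the Cahn-Hilliard examples in our documentation; the mesh size, 
coefficient values, and variable names are illustrative assumptions rather 
than the exact settings of the shipped scripts:

>>> from fipy import (CellVariable, Grid2D, TransientTerm, DiffusionTerm,
...                   ImplicitSourceTerm)
>>> mesh = Grid2D(nx=100, ny=100, dx=0.25, dy=0.25)  # illustrative size
>>> phi = CellVariable(name="phi", mesh=mesh, hasOld=True)
>>> psi = CellVariable(name="psi", mesh=mesh, hasOld=True)
>>> D = a = epsilon = 1.  # illustrative coefficients
>>> dfdphi = a**2 * phi * (1 - phi) * (1 - 2 * phi)  # f'(phi) for a double-well f
>>> d2fdphi2 = a**2 * (1 - 6 * phi * (1 - phi))      # f''(phi)

Split, coupled form -- two 2nd order equations joined with &:

>>> eq1 = TransientTerm(var=phi) == DiffusionTerm(coeff=D, var=psi)
>>> eq2 = (ImplicitSourceTerm(coeff=1., var=psi)
...        == ImplicitSourceTerm(coeff=d2fdphi2, var=phi)
...        - d2fdphi2 * phi + dfdphi
...        - DiffusionTerm(coeff=epsilon**2, var=phi))
>>> coupledEq = eq1 & eq2

Higher order form -- a single 4th order equation built from a DiffusionTerm 
with a tuple of coefficients:

>>> higherOrderEq = (TransientTerm()
...                  == DiffusionTerm(coeff=D * d2fdphi2)
...                  - DiffusionTerm(coeff=(D, epsilon**2)))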

> On Feb 6, 2017, at 4:01 PM, Adrian Jacobo <ajac...@mail.rockefeller.edu> 
> wrote:
> 
> Jon,
> 
>  Thanks for your reply. I was running the Cahn-Hilliard example with a 500x500 
> mesh, so it is not an overlapping-cells problem.
>  I was using this example 
> http://www.ctcms.nist.gov/fipy/examples/cahnHilliard/generated/examples.cahnHilliard.mesh2D.html
>  
> which I now see has been updated so that the equations are solved by splitting 
> them into two coupled equations. Is there any reason for doing this 
> factorization?
>  I'll try running the new script with the corrected sign and get back 
> to you. How do I change the Trilinos solver/preconditioner?
>  I'll send a separate email now regarding the Anaconda installation.
> 
> Best,
> Adrian.
> 
> 
>> Adrian -
>> 
>> You're running examples.cahnHilliard.mesh2DCoupled? There are a couple of 
>> issues with the example as posted:
>> 
>> - There is a sign error in this example. It was fixed over a year ago, but 
>> never merged to master or released as a tarball. You can run our develop 
>> branch, or manually apply the changes at 
>> https://github.com/usnistgov/fipy/commit/8a1f81da5a3af6774c7803aed98aa38398904492
>> 
>> - The system domain in that example is very small. Parallelizing it will 
>> result in lots of overlapping cells, so it won't scale well.
>> 
>> - If you make the system bigger, you'll quickly find that the default 
>> Trilinos solver/preconditioner for this problem (GMRES with 
>> dynamic-domain-decomposition) doesn't converge. No errors get generated, but 
>> the solver runs to maximum iterations and the solution doesn't change. If 
>> you instantiate a GMRES solver with a Jacobi preconditioner, this problem 
>> should be resolved.
>> 
>> Hopefully this combination of changes will show some parallel speedup.
>> 
>> I'm presently (haphazardly) doing some solver benchmarking and am working my 
>> way toward both parallel and coupled-CH benchmarking, which I'll then 
>> incorporate in the documentation and/or in the default settings. It's taking 
>> me a while to get there, though.
>> 
>> As for Anaconda, when did you try the Mac OS X installation? Those 
>> instructions were updated about three weeks ago; they work for me (a Mac is 
>> my primary machine) and worked for a couple of people I was with at a 
>> workshop. If you've tried since then, I'd appreciate knowing what doesn't work.
>> 
>> - Jon
>> 
>>> On Feb 6, 2017, at 2:09 PM, Adrian Jacobo <ajac...@mail.rockefeller.edu> 
>>> wrote:
>>> 
>>> Hi,
>>> 
>>>  I'm trying to speed up the Cahn-Hilliard example by running it in
>>> parallel, but the running time is always the same regardless of how many
>>> processors I use. I'm using FiPy on Linux, installed from Anaconda (I
>>> tried the same installation on OS X, but Trilinos doesn't work), following
>>> the instructions on the website. I ran the parallel.py example and the
>>> output seems to indicate that Trilinos is working and correctly
>>> communicating with MPI. I'm running a 500x500 grid, but I tried changing
>>> the size and I don't see any speedup from running it in parallel, as if
>>> each process were integrating the whole grid. Any ideas?
>>> 
>>> Best,
>>> Adrian.


_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
  [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
