It looked to me like steps and accuracy were the way to do it, but my runs finish in one step, so I was confused. When I change to accuracy = 10.0**-6, it takes 15 steps, but there's still no leak. (Note: the hiccup in RSS and in ELAPSED time is because I put my laptop to sleep for a while; VSIZE is rock-steady.)
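For concreteness, a minimal sketch of the termination logic that steps and accuracy presumably control. The loop structure, the solve_once stand-in, and the exact convergence test are my assumptions, not the attached code:

```python
# Sketch of a self-consistent loop bounded by `steps` and terminated when
# the relative change in the energy eigenvalue drops below `accuracy`.
# solve_once() stands in for one FiPy solve of the system (hypothetical).

steps = 100            # upper bound on self-consistent iterations
accuracy = 10.0 ** -6  # relative eigenvalue change that counts as converged

def scf_loop(solve_once):
    previous = None
    for step in range(steps):
        energy = solve_once()
        if previous is not None:
            # relative change in the eigenvalue since the last iteration
            if abs(energy - previous) <= accuracy * abs(previous):
                return step + 1, energy  # converged early
        previous = energy
    return steps, energy  # hit the iteration cap without converging
```

Tightening accuracy (e.g. 10.0**-5 to 10.0**-6) makes the relative-change test harder to pass, so the loop runs more iterations before stopping, which matches the 1-step vs 15-step behavior above.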
The fact that things never (or only slowly) converge for you and Trevor, in addition to the leak, makes me wonder if Trilinos seriously broke something between 11.x and 12.x. Trevor's been struggling to build 12.4; I'll try to find time to do the same. In case it matters, I'm running on OS X. What's your system?

- Jon

On Mar 29, 2016, at 3:59 PM, Michael Waters <waters.mik...@gmail.com> wrote:

When I did my testing and made those graphs, I ran Trilinos in serial; Syrupy didn't seem to track the other processes' memory. I watched in real time as the parallel version ate all my RAM, though.

To make the program run longer while not changing the memory:

steps = 100  # increase this (limits the number of self-consistent iterations)
accuracy = 10.0**-5  # make this number smaller (relative energy eigenvalue change for being considered converged)
initial_solver_iterations_per_step = 7  # reduce this to 1 (number of solver iterations per self-consistent iteration; too small and it's slow, too high and the solutions are not stable)

I did those tests on a machine with 128 GB of RAM, so I wasn't expecting any swapping.

Thanks,
-mike

On 3/29/16 3:38 PM, Guyer, Jonathan E. Dr. (Fed) wrote:

I guess I spoke too soon. FWIW, I'm running Trilinos version 11.10.2.

On Mar 29, 2016, at 3:34 PM, Guyer, Jonathan E. Dr. (Fed) <jonathan.gu...@nist.gov> wrote:

I'm not seeing a leak. The output below is for Trilinos: VSIZE grows to about 11 MiB and saturates, and RSS saturates at around 5 MiB. VSIZE is more relevant for tracking leaks, since RSS is deeply tied to your system's swapping architecture and to whatever else is running. Either way, neither seems to be leaking, but this problem does use a lot of memory. What do I need to do to get it to run longer?

On Mar 25, 2016, at 7:16 PM, Michael Waters <waters.mik...@gmail.com> wrote:

Hello,

I still have a large memory leak when using Trilinos.
I am not sure where to start looking, so I made an example code that reproduces my problem, in hopes that someone can help me. But! My example is cool: I implemented Density Functional Theory in FiPy! My code is slow, but it runs in parallel and is simple (relative to most DFT codes). The attached example is just a lithium and a hydrogen atom. The electrostatic boundary conditions are goofy, but they work well enough for demonstration purposes. If you set use_trilinos to True, the code will slowly use more memory; if not, it will try to use Pysparse.

Thanks,
-Michael Waters

<input.xyz><fipy-dft.py>

_______________________________________________
fipy mailing list
fipy@nist.gov
http://www.ctcms.nist.gov/fipy
[ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
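To quantify "slowly use more memory" from inside a script like the attached one, here is a standard-library-only sketch that logs peak RSS between self-consistent steps. This tracks RSS only; an external sampler such as Syrupy is still needed to watch VSIZE. The helper name is mine, and note that getrusage reports ru_maxrss in kilobytes on Linux but in bytes on OS X:

```python
import resource
import sys

def peak_rss_mib():
    """Peak resident set size of this process, in MiB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        return peak / 2.0 ** 20  # bytes -> MiB on OS X
    return peak / 1024.0         # KiB -> MiB on Linux

# Call once per self-consistent iteration, e.g.:
#   print("step %d: peak RSS %.1f MiB" % (step, peak_rss_mib()))
# If the printed value climbs steadily with the step count, something
# is holding on to memory across iterations.
```

Because ru_maxrss is a high-water mark, it only ever grows; a genuine leak shows up as growth that continues step after step rather than saturating.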