Hi Ian,

Could you possibly post your code, or a cut-down version of it that
demonstrates the problem? Also, do you have the same issue with different
solver suites?
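In the meantime, a quick cross-check with the standard-library tracemalloc
module might help localize where the per-loop allocations come from (it groups
growth by source line, which Pympler's summary doesn't). A minimal sketch —
`leaky_step` here is just a hypothetical stand-in for one of your sweep calls,
not FiPy code:

```python
# Sketch: measure allocation growth per loop iteration with tracemalloc.
import tracemalloc

def leaky_step(store):
    # Hypothetical stand-in for one time-step that retains objects.
    store.append([0.0] * 1000)

tracemalloc.start()
store = []
before = tracemalloc.take_snapshot()
for _ in range(5):
    leaky_step(store)
after = tracemalloc.take_snapshot()

# Largest allocation differences between the two snapshots,
# grouped by the source line that performed the allocation.
stats = after.compare_to(before, "lineno")
for stat in stats[:3]:
    print(stat)
```

Running something like this around your sweep loop should point at the FiPy
source lines responsible for the ~3 MB/step growth.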

Cheers,

Daniel



On Fri, Sep 30, 2016 at 12:41 PM, Campbell, Ian <i.campbel...@imperial.ac.uk
> wrote:

> Hi All,
>
>
>
> We are sweeping six PDEs in a time-stepping loop. We’ve noticed that as
> CPU time progresses, the duration of each time-step increases, although the
> sweep count remains constant. This is illustrated in the Excel file of data
> logged from the simulation, which is available at the first hyperlink below.
>
>
>
> Hence, we suspected a memory leak may be occurring. After conducting
> memory-focused line-profiling with the vprof tool, we observed a linear
> increase in total memory consumption at a rate of approximately 3 MB per
> timestep loop. This is evident in the graph at the second link below, which
> illustrates the memory increase over three seconds of simulation.
>
>
>
> As a further step, we used Pympler to investigate the source of RAM
> consumption increase for each timestep. The table below is an output from
> Pympler’s SummaryTracker().print_diff(), which describes the additional
> objects created within every time-step. Clearly, there are ~3.2 MB of
> additional data being generated with every loop – this correlates perfectly
> with the total rate of increase of memory consumption reported by vprof.
> Although we are not yet sure, we suspect that the increasing time spent per
> loop is the result of this apparent memory leak.
>
>
>
> We suspect this is the result of the calls to .sweep, since we are not
> explicitly creating these objects. Can the origin of these objects be
> traced, and furthermore, is there a way to avoid re-creating them and
> consuming more memory with every loop?  Without some method of unloading or
> preventing this object build-up, it isn’t feasible to run our simulation
> for long durations.
>
> type                            # objects   total size
> dict                                 2684    927.95 KB
> type                                 1716    757.45 KB
> tuple                                9504    351.31 KB
> list                                 4781    227.09 KB
> str                                  2582    210.70 KB
> numpy.ndarray                         396    146.78 KB
> cell                                 3916    107.08 KB
> property                             2288     98.31 KB
> weakref                              2287     98.27 KB
> function (getName)                   1144     67.03 KB
> function (getRank)                   1144     67.03 KB
> function (_calcValue_)               1144     67.03 KB
> function (__init__)                  1144     67.03 KB
> function (_getRepresentation)        1012     59.30 KB
> function (__setitem__)                572     33.52 KB
> SUM                                          3285.88 KB
>
>
>
>
>
> https://imperialcollegelondon.box.com/s/zp9jj67du3mxdcfgbc4el8cqpxwnv0y4
>
>
>
> https://imperialcollegelondon.box.com/s/ict9tnswqk9z57ovx8r3ll5po5ccrib9
>
>
>
> With best regards,
>
>
>
> - Ian & Krishna
>
>
>
> P.S. Daniel, thank you very much for the excellent example solution you
> provided in response to our question on obtaining the sharp discontinuity.
>
>
>
> Ian Campbell | PhD Candidate
>
> Electrochemical Science & Engineering Group
>
> Imperial College London, SW7 2AZ, United Kingdom
>
>
>
> _______________________________________________
> fipy mailing list
> fipy@nist.gov
> http://www.ctcms.nist.gov/fipy
>   [ NIST internal ONLY: https://email.nist.gov/mailman/listinfo/fipy ]
>
>


-- 
Daniel Wheeler
