On 11.11.2014 08:36, Iain Buclaw wrote:
Hi,

I find myself wondering what do people use to inspect the GC on a
running process?

Last night, I had a look at a couple of vibe.d servers that had been
running for a little over 100 days.  Both run the same code, but one is
used less (or not at all).

Given that the main app logic is rather simple, and things that might
otherwise be held in memory (files) are offloaded onto a Redis server,
I'd have thought that its consumption would have stayed pretty much
stable.  But to my surprise, I found that the server under more
(but not heavy) use is consuming a whopping 200MB.

Initially I tried to see if this could be shrunk in some way or
form.  I attached gdb to the process and called gc_minimize() and
gc_collect() - but neither had any effect.

The GC allocates pools of memory of increasing size: it starts with 1 MB and adds 3 MB for every new pool (the numbers might be slightly different depending on the druntime version). These pools are then used to service allocation requests.
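Taking the numbers above at face value, the reserved memory grows roughly like this (a sketch of the stated growth rule, not the actual druntime implementation, which also caps the pool size):

```python
def pool_sizes_mb(n):
    """Sizes of the first n GC pools in MB, assuming the rule above:
    first pool 1 MB, each subsequent pool 3 MB larger than the last."""
    return [1 + 3 * i for i in range(n)]

def total_reserved_mb(n):
    """Total memory reserved from the OS after n pools exist."""
    return sum(pool_sizes_mb(n))
```

So after just ten pools the process would already have reserved well over 100 MB from the OS, regardless of how much of it is live.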

gc_minimize can only return memory to the system if all the allocations in a pool have been collected, which is very unlikely.


When I noticed gc_stats with an informative *experimental* warning, I
thought "let's just run it anyway and see what happens"... SEGV.  Wonderful.

I suspect calling gc_stats from the debugger is "experimental" because it returns a struct. With a bit of casting, you might be able to call "_gc.getStats( *cast(GCStats*)some_mem_adr );" instead.
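For reference, the workaround might look something like this in a gdb session (a hypothetical sketch: the symbol names and GCStats layout depend on the druntime build, and whether the cast expression parses depends on gdb's D language support):

```gdb
# Reserve scratch memory large enough to hold the result struct,
# then pass it by reference so the struct return is sidestepped.
(gdb) call malloc(GCStats.sizeof)
(gdb) set $stats = $
(gdb) call _gc.getStats(*cast(GCStats*)$stats)
(gdb) print *cast(GCStats*)$stats
```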


As I'm probably going to have to wait 100 days for the apparent leak
(I'm sure it is a leak, though) to show itself again, does anyone have
any nice suggestions on how to confirm whether this memory is just being
held and never freed by the GC?
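One low-tech way to see the trend without waiting another 100 days blind is to sample the process's resident set size periodically and check whether it keeps climbing or plateaus after the pools stop growing. A minimal sketch (Linux-only, reading /proc; the interval and sample count are arbitrary):

```python
import re
import time

def rss_kb(pid):
    """Return VmRSS in kB for a pid (or "self"), parsed from /proc/<pid>/status."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            m = re.match(r"VmRSS:\s+(\d+)\s+kB", line)
            if m:
                return int(m.group(1))
    raise RuntimeError("VmRSS not found")

def watch(pid, interval=3600, samples=24):
    """Take one RSS sample per interval; monotonic growth across many
    samples suggests a leak, a plateau suggests pools being held but reused."""
    return [rss_kb(pid) or time.sleep(interval) for _ in range(samples)]
```

If your druntime version supports it, running the server with the GC profiling option (--DRT-gcopt=profile:1) would also print a summary of GC memory usage at program exit.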

Iain.

