Re: GC memory fragmentation

2021-11-03 Thread Imperatorn via Digitalmars-d-learn

On Wednesday, 3 November 2021 at 06:26:49 UTC, Heromyth wrote:

On Wednesday, 14 April 2021 at 12:47:22 UTC, Heromyth wrote:

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.




The Hunt Framework is also suffering from this. We are trying 
to make a simple example to illustrate it.


The bug has been fixed now [1]. It was caused by a module named 
ArrayList that wrapped the Array from std.container.array. We 
now use a dynamic array instead.


We are not sure why this happens.

[1] 
https://github.com/huntlabs/hunt-extra/commit/903f3b4b529097013254342fdcfac6ba5543b401


Hmm, interesting


Re: GC memory fragmentation

2021-11-02 Thread Heromyth via Digitalmars-d-learn

On Wednesday, 14 April 2021 at 12:47:22 UTC, Heromyth wrote:

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.




The Hunt Framework is also suffering from this. We are trying 
to make a simple example to illustrate it.


The bug has been fixed now [1]. It was caused by a module named 
ArrayList that wrapped the Array from std.container.array. We 
now use a dynamic array instead.


We are not sure why this happens.

[1] 
https://github.com/huntlabs/hunt-extra/commit/903f3b4b529097013254342fdcfac6ba5543b401
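
For anyone curious what such a change looks like, here is a hedged 
sketch of the idea (illustrative names only, not the actual 
hunt-extra code):

```D
import std.container.array : Array;

// before: the list was backed by std.container.array.Array, whose payload
// lives outside the GC heap and is managed with malloc/free
struct ArrayListBefore(T)
{
    private Array!T _items;
    void add(T value) { _items.insertBack(value); }
    size_t length() const { return _items.length; }
}

// after: a plain GC-managed dynamic array that the collector can scan,
// shrink and reuse like any other GC allocation
struct ArrayListAfter(T)
{
    private T[] _items;
    void add(T value) { _items ~= value; }
    size_t length() const { return _items.length; }
}
```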


Re: GC memory fragmentation

2021-04-17 Thread kdevel via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:
we're using vibe-d (on Linux) for a long running REST API 
server [...]


[...] a lot of small allocations (PostgreSQL queries, result 
processing and REST API JSON serialization).


I am wondering about your overall system design. Is there any 
"shared" data held in the REST API server that is not in the 
PostgreSQL database? I mean: if there are two potentially 
concurrent REST API calls, is it necessary that both are handled 
by the same operating system process?




Re: GC memory fragmentation

2021-04-16 Thread sarn via Digitalmars-d-learn

On Tuesday, 13 April 2021 at 12:30:13 UTC, tchaloupka wrote:
Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help in diagnosing this..


I've used bpftrace to do some of that stuff:
https://theartofmachinery.com/2019/04/26/bpftrace_d_gc.html


Re: GC memory fragmentation

2021-04-14 Thread Imperatorn via Digitalmars-d-learn

On Wednesday, 14 April 2021 at 12:47:22 UTC, Heromyth wrote:

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.




The Hunt Framework is also suffering from this. We are trying 
to make a simple example to illustrate it.


I think this is a rather serious problem for a language saying 
"GC is fine" 🌈.


Is there an issue for this?


Re: GC memory fragmentation

2021-04-14 Thread Heromyth via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.




The Hunt Framework is also suffering from this. We are trying to 
make a simple example to illustrate it.


Re: GC memory fragmentation

2021-04-14 Thread tchaloupka via Digitalmars-d-learn

On Tuesday, 13 April 2021 at 12:30:13 UTC, tchaloupka wrote:
I'm not sure whether pages of small (or large) objects that are 
not completely empty can be reused as a new bucket, or whether 
only free pages can be reused.


Does anyone have some insight into this?

Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help in diagnosing this..



As it's now possible to inject one's own GC into the runtime, 
I've tried to compile a debug version of the GC implementation 
and used that to check some debugging printouts with regard to 
small allocations.


The code I've used to simulate the behavior (with 
`--DRT-gcopt=incPoolSize:0`):


```D
// minimal import needed for the snippet below
import core.memory : GC;

// fill the 1st pool (minPoolSize is 1MB, so it's 256 pages, two small
// objects in each page)
ubyte[][512] pool1;
foreach (i; 0..pool1.length) pool1[i] = new ubyte[2042];

// no more space in pool1, so a new one is allocated and also filled
ubyte[][512] pool2;
foreach (i; 0..pool2.length) pool2[i] = new ubyte[2042];

// free up the first small object from the first pool's first page
pool1[0] = null;
GC.collect();

// allocate another small object
pool1[0] = new ubyte[2042];
```

And the shortened result is:

```
// first pool allocations
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
Gcx::allocPage(bin = 13)
Gcx::allocPage(bin = 13)
Gcx::newPool(npages = 1, minPages = 256)
Gcx::allocPage(bin = 13)
  => p = 0x7f709e75c000

// second pool allocations
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
Gcx::allocPage(bin = 13)
Gcx.fullcollect()
preparing mark.
scan stacks.
scan roots[]
scan ranges[]
0x51e4e0 .. 0x53d580
free'ing
free'd 0 bytes, 0 pages from 1 pools
recovered small pages = 0
Gcx::allocPage(bin = 13)
Gcx::newPool(npages = 1, minPages = 256)
Gcx::allocPage(bin = 13)
  => p = 0x7f709e65c000

// collection of first item of first pool first bin
GC.fullCollect()
Gcx.fullcollect()
preparing mark.
scan stacks.
scan roots[]
scan ranges[]
0x51e4e0 .. 0x53d580
free'ing
collecting 0x7f709e75c000
free'd 0 bytes, 0 pages from 2 pools
recovered small pages = 0

// and allocation to test where it'll allocate
GC::malloc(gcx = 0x83f6a0, size = 2044 bits = a, ti = TypeInfo_h)
  recover pool => 0x83d1a0
  => p = 0x7f709e75c000
```

So if I'm not mistaken, bins in pools are reused when there is 
free space (as the last allocation reuses the address of the 
first item in the first bin of the first pool).


I've also tested big allocations, with the same result.

But if this is so, I don't understand why in our case there is 
95% free space in the GC and it still grows from time to time.


At least the rate of growth is minimized with 
`--DRT-gcopt=incPoolSize:0`.


I would have to log and analyze the behavior more with a custom 
GC build..


Re: GC memory fragmentation

2021-04-13 Thread Tobias Pankrath via Digitalmars-d-learn

On Tuesday, 13 April 2021 at 12:30:13 UTC, tchaloupka wrote:


Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help in diagnosing this..


You could try to get the stack traces of the allocating calls via 
eBPF. Maybe that leads to some new insights.






Re: GC memory fragmentation

2021-04-13 Thread tchaloupka via Digitalmars-d-learn

On Monday, 12 April 2021 at 07:03:02 UTC, Sebastiaan Koppe wrote:


We have similar problems: we see memory usage alternate between 
plateauing and then slowly growing, until it hits the 
configured maximum memory for that job and the orchestrator 
kills it (we run multiple instances and have good failover).


I have reduced the problem by refactoring some of our gc usage, 
but the problem still persists.


On a side note, it would also be good if the GC could be aware 
of the max memory it is allotted so that it knows it needs to do 
more aggressive collections when nearing it.


I knew this must be a more common problem :)

What I've found in the meantime:

* a nice writeup of how the GC actually works, by Vladimir Panteleev - 
https://thecybershadow.net/d/Memory_Management_in_the_D_Programming_Language.pdf
  * the described tool (https://github.com/CyberShadow/Diamond) would 
be very helpful, but I assume it's for D1 and based on some old 
druntime fork :(
* we've implemented log rotation using `std.zlib` (by just 
`foreach (chunk; fin.byChunk(4096).map!(x => c.compress(x))) 
fout.rawWrite(chunk);`)
  * oh boy, don't use `std.zlib.Compress` that way; it allocates 
each chunk, and for large files it creates large GC memory peaks 
that sometimes don't go down

  * rewritten using `etc.c.zlib` directly, completely outside the GC
* currently testing with `--DRT-gcopt=incPoolSize:0`, as otherwise 
each newly allocated pool grows by the number of already allocated 
pools * 3MB by default
* `profile-gc` is not very helpful in this case, as it only 
prints the total allocated memory for each allocation site on 
application exit, and as it's a long-running service using many 
various libraries it's just hundreds of lines :)
  * I've considered forking the process periodically, terminating 
it and renaming the created profile statistics to at least see the 
differences between the states, but I'm still not sure it would 
help much
* as I understand the GC, it uses a different algorithm for small 
allocations than for large objects (a rough sketch follows after 
this list)

  * small (`<=2048`)
    * it categorizes objects into a fixed set of size classes and 
for each one uses a whole memory page as a bucket with a free 
list, from which it serves memory on request
    * when the bucket is full, a new page is allocated and 
allocations are served from that

  * big - similar, but it allocates N pages as a pool
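
A minimal, self-contained sketch of what this looks like from the 
outside (this is not the service code; the `<=2048` boundary is 
taken from the description above, and `GC.stats` comes from 
`core.memory`):

```D
import core.memory : GC;
import std.stdio : writefln;

void report(string label)
{
    const s = GC.stats();
    writefln("%-20s used=%s free=%s", label, s.usedSize, s.freeSize);
}

void main()
{
    report("start");

    // small allocations: served from per-size-class buckets carved out of pages
    ubyte[][256] small;
    foreach (i; 0 .. small.length)
        small[i] = new ubyte[2000]; // <= 2048, so it lands in a small bin

    report("after small allocs");

    // large allocation: gets its own run of pages in a pool
    auto big = new ubyte[4 * 1024 * 1024];
    report("after large alloc");
}
```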

So if I understand it correctly: when, for example, vibe-d 
initializes a new fiber for some request, the request is handled 
and the fiber could be discarded, but this can easily lead to a 
scenario where the fiber itself is allocated in one page, that 
page fills up during request processing so a new page is 
allocated, and during cleanup the bucket holding the fiber cannot 
be freed because the fiber has been added to a `TaskFiber` pool 
(with a fixed size). This way the fiber's bucket would never be 
freed and might easily never be used again during the application 
lifetime.


I'm not sure whether pages of small (or large) objects that are 
not completely empty can be reused as a new bucket, or whether 
only free pages can be reused.


Does anyone have some insight into this?

Some kind of GC memory dump and analyzer tool like the mentioned 
`Diamond` would be of tremendous help in diagnosing this..


Re: GC memory fragmentation

2021-04-12 Thread Per Nordlöw via Digitalmars-d-learn

On Monday, 12 April 2021 at 20:50:49 UTC, Per Nordlöw wrote:

more aggressive collections when nearing it.


What is a more aggressive collection compared to a normal 
collection? Unfortunately we still have no move support in D's GC 
because of its impreciseness.


Re: GC memory fragmentation

2021-04-12 Thread Per Nordlöw via Digitalmars-d-learn

On Monday, 12 April 2021 at 20:50:49 UTC, Per Nordlöw wrote:

I'm surprised there is no such functionality available.
It doesn't sound to me like it's that difficult to implement.


Afaict, we're looking for a way to call `GC.collect()` when 
`GC.Stats.usedSize` [1] reaches a certain threshold.
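
A minimal sketch of that idea as user code, assuming you poll it 
yourself from a timer or request hook (there is no built-in 
druntime hook for this, as far as I know):

```D
import core.memory : GC;

/// Force a collection (and hand pages back) once the GC-reported used size
/// crosses a caller-chosen threshold. Cheap enough to call per request.
void collectAboveThreshold(size_t thresholdBytes)
{
    if (GC.stats().usedSize > thresholdBytes)
    {
        GC.collect();
        GC.minimize();
    }
}
```

That still leaves picking the threshold by hand, which is exactly 
the "how much memory am I allotted" information the GC currently 
doesn't have.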


I wonder how other GC-backed languages handle this.

[1] https://dlang.org/phobos/core_memory.html#.GC.Stats.usedSize


Re: GC memory fragmentation

2021-04-12 Thread Per Nordlöw via Digitalmars-d-learn

On Monday, 12 April 2021 at 07:03:02 UTC, Sebastiaan Koppe wrote:
On a side note, it would also be good if the GC could be aware 
of the max memory it is allotted so that it knows it needs to do 
more aggressive collections when nearing it.


I'm surprised there is no such functionality available.
It doesn't sound to me like it's that difficult to implement.


Re: GC memory fragmentation

2021-04-12 Thread Imperatorn via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.


[...]


Looks like the GC needs some love


Re: GC memory fragmentation

2021-04-12 Thread Sebastiaan Koppe via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.


[...]

But this is very bad for performance, so we've prolonged the 
interval to, for example, 30 seconds, and now memory still goes 
up (not that dramatically, but still).


[...]

So it wastes a lot of free space which it can't return to the 
OS for some reason.


Can these numbers be caused by memory fragmentation? There are 
probably a lot of small allocations (PostgreSQL queries, result 
processing and REST API JSON serialization).


We have similar problems: we see memory usage alternate between 
plateauing and then slowly growing, until it hits the configured 
maximum memory for that job and the orchestrator kills it (we run 
multiple instances and have good failover).


I have reduced the problem by refactoring some of our gc usage, 
but the problem still persists.


On a side note, it would also be good if the GC could be aware of 
the max memory it is allotted so that it knows it needs to do more 
aggressive collections when nearing it.


Re: GC memory fragmentation

2021-04-11 Thread frame via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.


Do you have a manual GC.free() in your code, maybe with a larger 
array?




The only explanation that makes some sense is that some 
operation requires memory allocations that can't be fulfilled 
from the current memory pool (i.e. due to memory fragmentation 
in it), so it allocates some data in a new memory segment that 
can't be returned afterwards because it still holds 'live' data. 
But that should be freed too at some point, and the GC should 
minimize (if another request doesn't cause an allocation in the 
'wrong' page again).


If so, setting minPoolSize:1 could help you control the memory 
usage, and if this memory is kept, you could inspect what's inside 
the particular pool (at least for GC-allocated stuff).
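
For reference, a hedged example of how that option can be passed 
on the command line (the binary name is a placeholder, and it can 
be combined with the incPoolSize:0 setting mentioned elsewhere in 
the thread):

```
./your-service --DRT-gcopt="minPoolSize:1 incPoolSize:0"
```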





Re: GC memory fragmentation

2021-04-11 Thread ryuukk_ via Digitalmars-d-learn

I should have added

https://dub.pm/package-format-json#build-types

profileGC as a build option in your dub file

That will generate a profile_gc file once the program exits - a 
table that lists all the allocations
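
For reference, something along these lines in dub.json should do 
it (the build-type name is arbitrary; dub also ships a predefined 
`profile-gc` build type, so `dub build --build=profile-gc` may be 
enough on its own):

```
{
    "buildTypes": {
        "debug-gcprof": {
            "buildOptions": ["debugInfo", "profileGC"]
        }
    }
}
```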


Re: GC memory fragmentation

2021-04-11 Thread ryuukk_ via Digitalmars-d-learn

On Sunday, 11 April 2021 at 13:50:12 UTC, tchaloupka wrote:

On Sunday, 11 April 2021 at 12:20:39 UTC, Nathan S. wrote:


One thing that comes to mind: is your application compiled as 
32-bit? The garbage collector is much more likely to leak 
memory with a 32-bit address space, since it is much more likely 
for a random int to appear to be a pointer to the interior of 
a block of GC-allocated memory.


Nope, it's a 64-bit build.
I've also tried switching to the precise GC, with the same result.

Tom


Try to tweak the GC - first, could you try with the precise one?

Also try to enable GC profiling with profile:1 to find out what 
exactly eats memory. I suspect you have a memory leak somewhere, 
maybe in one of the native libs you are using for your SQL 
queries?


---
extern(C) __gshared string[] rt_options = [
  "gcopt=gc:precise heapSizeFactor=1.2 profile:1"
];
---


Re: GC memory fragmentation

2021-04-11 Thread tchaloupka via Digitalmars-d-learn

On Sunday, 11 April 2021 at 12:20:39 UTC, Nathan S. wrote:


One thing that comes to mind: is your application compiled as 
32-bit? The garbage collector is much more likely to leak 
memory with a 32-bit address space, since it is much more likely 
for a random int to appear to be a pointer to the interior of a 
block of GC-allocated memory.


Nope, it's a 64-bit build.
I've also tried switching to the precise GC, with the same result.

Tom


Re: GC memory fragmentation

2021-04-11 Thread Nathan S. via Digitalmars-d-learn

On Sunday, 11 April 2021 at 09:10:22 UTC, tchaloupka wrote:

Hi,
we're using vibe-d (on Linux) for a long-running REST API 
server and have a problem with constantly growing memory until 
the system kills it with the OOM killer.


One thing that comes to mind: is your application compiled as 
32-bit? The garbage collector is much more likely to leak memory 
with a 32-bit address space, since it is much more likely for a 
random int to appear to be a pointer to the interior of a block 
of GC-allocated memory.


GC memory fragmentation

2021-04-11 Thread tchaloupka via Digitalmars-d-learn

Hi,
we're using vibe-d (on Linux) for a long-running REST API server 
and have a problem with constantly growing memory until the 
system kills it with the OOM killer.


The first guess was some memory leak, so we've added a periodic call to:

```D
import core.memory : GC;
extern(C) int malloc_trim(size_t pad); // from glibc, no druntime binding

GC.collect();
GC.minimize();
malloc_trim(0);
```
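
For context, a minimal sketch of how such a call can be driven 
periodically (the real service presumably uses a vibe-d timer; 
this is just a generic daemon thread, and `cleanup` stands for the 
three calls above):

```D
import core.thread : Thread;
import core.time : msecs;

void startPeriodicCleanup(void function() cleanup, uint intervalMs = 500)
{
    auto t = new Thread({
        while (true)
        {
            Thread.sleep(intervalMs.msecs);
            cleanup(); // e.g. GC.collect(); GC.minimize(); malloc_trim(0);
        }
    });
    t.isDaemon = true; // don't keep the process alive because of this thread
    t.start();
}
```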

And when called very often (i.e. every 500ms), the service memory 
stays stable and doesn't grow, so it doesn't look like a memory 
leak (or at least not one fast enough to explain that the service 
can go from 90MB to 900MB in 2 days).


But this is very bad for performance, so we've prolonged the 
interval to, for example, 30 seconds, and now memory still goes 
up (not that dramatically, but still).


Stats of the GC when it grew within 30s are these (logged 
after `GC.collect()` and `GC.minimize()`):


```
GC.stats: usedSize=9994864, freeSize=92765584, total=102760448
GC.stats: usedSize=11571456, freeSize=251621120, total=263192576
```

Before the growth it had 88MB of free space out of 98MB total allocated.
After 30s it has 239MB free out of 251MB allocated.

So it wastes a lot of free space which it can't return to the OS 
for some reason.


Can these numbers be caused by memory fragmentation? There are 
probably a lot of small allocations (PostgreSQL queries, result 
processing and REST API JSON serialization).


The only explanation that makes some sense is that some operation 
requires memory allocations that can't be fulfilled from the 
current memory pool (i.e. due to memory fragmentation in it), so 
it allocates some data in a new memory segment that can't be 
returned afterwards because it still holds 'live' data. But that 
should be freed too at some point, and the GC should minimize (if 
another request doesn't cause an allocation in the 'wrong' page again).


But still, the amount of used vs free memory seems wrong; it's a 
whopping 95% of free space that can't be minimized :-o. I have a 
hard time imagining that much fragmentation in it.


Are there any tools that can help with diagnosing this further?

Also note that `malloc_trim` is not called by the GC itself, and 
as the internally used malloc manages its own memory pool, it has 
its own quirks with returning unused memory back to the OS (it 
does so only on `free()` in some cases).
See for example: 
http://notes.secretsauce.net/notes/2016/04/08_glibc-malloc-inefficiency.html


The behavior of malloc can be controlled with `mallopt`:

```
M_TRIM_THRESHOLD
       When the amount of contiguous free memory at the top of the heap
       grows sufficiently large, free(3) employs sbrk(2) to release this
       memory back to the system.  (This can be useful in programs that
       continue to execute for a long period after freeing a significant
       amount of memory.)  The M_TRIM_THRESHOLD parameter specifies the
       minimum size (in bytes) that this block of memory must reach
       before sbrk(2) is used to trim the heap.

       The default value for this parameter is 128*1024.
```

So the default requires a 128kB block of free heap memory at the 
top for the trim to activate, yet when `malloc_trim` is called 
manually, a much larger block of memory is often returned. That's 
puzzling too :)
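
If lowering that threshold is worth trying, here is a hedged 
sketch of calling mallopt(3) from D (the constant mirrors glibc's 
malloc.h, where M_TRIM_THRESHOLD is -1; double-check against your 
libc before relying on it):

```D
// lower glibc's trim threshold so free(3) returns memory to the OS sooner
extern(C) int mallopt(int param, int value);

enum M_TRIM_THRESHOLD = -1; // assumption: value taken from glibc's malloc.h

void main()
{
    // trim once 64 kB of contiguous space is free at the top of the heap,
    // instead of the default 128 kB
    mallopt(M_TRIM_THRESHOLD, 64 * 1024);
}
```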