On 3/24/2016 11:25, Temtaime wrote:
Hi !
I have an app with a large amount of math, so there are a lot of arrays of
floats.
I found that sometimes my app starts to eat memory and then it crashes.
The problem, I think, is false pointers. For example, I have a struct with
pointers and a static array of floats. The GC marks the entire struct as
containing pointers, and when some data in the array happens to point into a
valid memory region, the GC won't release that memory.
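Roughly this kind of layout (the names are invented, only the shape matters):

struct Node
{
    Node* next;         // a real pointer, so the conservative GC scans the whole struct
    float[256] data;    // plain floats, but scanned word by word anyway
}

Any float whose bit pattern happens to equal an address inside a live GC pool
keeps that pool alive.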
Also, I build my app as 64-bit. I found that the GC allocates memory using
https://github.com/D-Programming-Language/druntime/blob/1f957372e5dadb92ab1d621d68232dbf8a2dbccf/src/gc/os.d#L64
and all the addresses, if I print them, are below 4G, so only the low 32 bits
of the 64-bit address are used.
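A quick way to see this (just a sketch, not part of druntime):

import std.stdio;

void main()
{
    foreach (i; 0 .. 4)
    {
        auto a = new float[1024];               // GC-allocated block
        writefln("block %s at 0x%x", i, cast(size_t)a.ptr);
    }
}

Every address printed on my 64-bit Windows build fits in 32 bits.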
One can give a preferred address to VirtualAlloc and make it > 4G.
I changed the implementation of os_mem_map to something like:
ulong addr = 1UL << 40;   // must be a 64-bit constant; 1 << 40 would overflow int
while (true)
{
    if (auto p = VirtualAlloc(cast(void*)addr, nbytes,
            MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE))
        return p;
    addr += nbytes;
}
And my problem with growing memory was gone.
I think we should alter the implementation of os_mem_map to allocate at
addresses > 4G, so the high bits won't be zeros and false pointers become
much less likely.
I think we should use some random function here. Maybe rdtsc? Or are there
other opinions?
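Something along these lines, as a sketch only (QueryPerformanceCounter stands
in for rdtsc here, and the constants are just illustrative, not a proposed
final implementation):

import core.sys.windows.windows;

void* os_mem_map_randomized(size_t nbytes)
{
    LARGE_INTEGER ticks;
    QueryPerformanceCounter(&ticks);             // cheap entropy source
    // Start above 4 GiB, aligned to the 64 KiB allocation granularity.
    ulong addr = (1UL << 40)
        + ((cast(ulong)ticks.QuadPart << 16) & 0xFF_FFFF_0000);
    foreach (attempt; 0 .. 1024)
    {
        if (auto p = VirtualAlloc(cast(void*)addr, nbytes,
                MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE))
            return p;
        addr += nbytes + 0x1_0000;               // probe the next granule up
    }
    // Give up on the hint and let the OS choose.
    return VirtualAlloc(null, nbytes, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
}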
Thanks.
That is an interesting hack, but I would think it is rather brittle.
There are two long-term solutions that will get you what you want. The
first is to avoid the GC altogether and manually allocate everything.
The second is a precise GC. And while a 100% precise GC is impossible
given that we have unions and C compatibility, for your use case I
imagine the added precision would give you most of what you need.
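For the first option, a minimal sketch of what keeping the float buffers off
the GC heap can look like (the helper names are mine, not from any library):

import core.stdc.stdlib : free, malloc;

// Buffers allocated this way are never scanned (or freed) by the GC.
// That is fine here because they hold only floats, no GC pointers.
float[] allocFloats(size_t n)
{
    auto p = cast(float*)malloc(n * float.sizeof);
    assert(p !is null);
    return p[0 .. n];
}

void freeFloats(ref float[] a)
{
    free(a.ptr);
    a = null;
}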
Interestingly enough, there is a GSoC candidate this year who is
proposing a project that would make the D GC precise.