Experience has shown that using allocators can drastically improve the execution time of D programs. It has also shown that the biggest issue with allocators is that, unless you are very careful, the GC will start freeing your live memory.

I think that we need to define the behavior of core.memory.GC.addRange more precisely. Consider the following pseudo-code:

auto a = allocate(aSize);
GC.addRange(a, aSize);
a = reallocate(a, aSize * 2);
GC.addRange(a, aSize * 2);

What does this do? Well, if the proxy pointer inside the GC is null, it creates two entries. If the proxy is not null and the default GC implementation is being used, you end up with a single entry. That entry still records the smaller size, leaving the GC free to collect memory pointed to by the second half of your array.
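
To make the hazard concrete (allocate and reallocate are the same hypothetical helpers as above, and this assumes the block grows in place):

auto a = cast(void**) allocate(aSize);
GC.addRange(a, aSize);
a = cast(void**) reallocate(a, aSize * 2);
GC.addRange(a, aSize * 2); // default GC: the entry still has length aSize
a[aSize / (void*).sizeof] = cast(void*) new Object; // lands in the second half
// The second half is never scanned, so this Object can be collected
// even though it is still reachable through a.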

How about removing? Again, it depends. If the default GC is being used there can be only one entry per pointer, so removeRange does exactly what you expect. If the proxy pointer is null, gc_removeRange falls back to an implementation that scans through its ranges array, overwrites the first entry containing a matching pointer with the last entry in the array, and then decrements the array length. Because of this reordering, if you have multiple entries for the same pointer, you don't know which one is being removed when you call removeRange. This is fine if you like to practice luck-oriented programming.
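
For reference, that fallback behaves roughly like this (a sketch of the logic, not the actual druntime source):

struct Range { void* pbot; void* ptop; }
__gshared Range[] ranges;
__gshared size_t nranges;

void removeRangeFallback(void* p)
{
    foreach (i; 0 .. nranges)
    {
        if (ranges[i].pbot == p)
        {
            // Swap-remove: overwrite the match with the last entry and
            // shrink. O(1), but it reorders the array, so which entry is
            // "first" for a given pointer changes as removals happen.
            ranges[i] = ranges[nranges - 1];
            --nranges;
            return;
        }
    }
}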

What about calling removeRange and then addRange immediately? This only works in single-threaded code. In a multi-threaded program a collection can run between the calls to removeRange and addRange, while the memory is not covered by any range.
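
In other words, the obvious remove-then-add pattern leaves a window during which the block is unprotected:

GC.removeRange(a);
// Another thread can trigger a collection right here, while no range
// covers a; anything reachable only through a's contents can be freed.
a = reallocate(a, aSize * 2);
GC.addRange(a, aSize * 2);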

I think that the only sane way to solve this is to define in the spec for core.memory that GC.addRange will only ever store one entry per pointer, and that the length will be the value of "sz" from the most recent call to addRange.
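
Under that definition the pseudo-code at the top becomes well-defined (assuming reallocate returns the same pointer; if the block moves, you still need to removeRange the old address first):

auto a = allocate(aSize);
GC.addRange(a, aSize);
a = reallocate(a, aSize * 2);
GC.addRange(a, aSize * 2); // updates the single entry for a to the new size
GC.removeRange(a);         // later: exactly one entry matches, unambiguous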

Thoughts?
