On 08.02.2012, 14:30, Manfred Nowak <svv1...@hotmail.com> wrote:

Marco Leise wrote:

In D we allocate memory through the GC
[...]
Let's assume we have a program that allocates some buffers in
advance that it may not fully use.
[...]
there is really no alternative to calloc.

1)
calloc implements a strategy for allocating memory. If this strategy
is useful for parts of a program, then the GC should be informed
by the code and be able to adapt its behavior to the requested
strategy.

That sounds a bit vague. If I understand you correctly, you would implement this as a hint flag, like:

        GC.allocHint = AllocationHint.lazyZero;
        auto arr = new ubyte[…];

This seems necessary because the null pattern is not the only pattern
used to initialize allocated memory.
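
Right - to illustrate that point with what D does today (this is just the current default-initializer behavior, not a proposal):

        import std.stdio;

        void main()
        {
            auto bytes = new ubyte[](4); // ubyte.init == 0    -> zero pages would suffice
            auto chars = new char[](4);  // char.init  == 0xFF -> needs an explicit fill
            auto reals = new float[](4); // float.init is NaN  -> likewise

            writeln(bytes);               // [0, 0, 0, 0]
            writeln(cast(ubyte[]) chars); // [255, 255, 255, 255]
            writeln(reals);               // [nan, nan, nan, nan]
        }

So a calloc-style "lazy zero" hint would only cover types whose .init happens to be the null pattern.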

2)
According to your OP, a request for allocating memory can be
partitioned into three disjoint classes:
a) memory that is guaranteed to be used,
b) memory that is guaranteed not to be used, and
c) memory that might be used.

For the case that a + b exceeds available memory, should the coder
predeclare a strategy, i.e. should the GC signal out of memory at the
time of the request, or should it wait until that sum reaches some
deadline?

-manfred

Here I can't follow you. The request to allocate memory contains memory regions that are guaranteed not to be used? Why would I request them then?

Also, what is your definition of available memory? Available RAM, RAM + swap, or available virtual memory going beyond what the operating system can commit, resulting in the random killing of applications (http://linux-mm.org/OOM_Killer)?

I don't see where the GC enters the picture, since all this 'may use memory, but don't commit to it' is handled solely by the OS. I could only imagine a function in the GC that always allocates a new chunk of memory and does that through calloc - or the hammer method: all allocations use calloc and some of the manual memset(..., 0, ...) calls are removed.
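
To make that concrete, here is roughly what I have in mind - only a sketch against the current core.memory and core.stdc.stdlib interfaces; whether GC.calloc actually ends up on lazy zero pages is up to the GC implementation:

        import core.memory : GC;
        import core.stdc.stdlib : calloc, free;

        void main()
        {
            enum size = 64 * 1024 * 1024; // a large buffer we may never touch fully

            // GC-managed, zero-initialized block. Whether the zeroing is an
            // internal memset or comes from fresh OS pages is the GC's business.
            auto gcBuf = (cast(ubyte*) GC.calloc(size))[0 .. size];

            // libc calloc: typically backed by copy-on-write zero pages,
            // but the block is invisible to the collector.
            auto cPtr = cast(ubyte*) calloc(size, 1);
            scope (exit) free(cPtr);
            auto cBuf = cPtr[0 .. size];

            // If cBuf could hold pointers into GC memory, it would have to be
            // registered with GC.addRange(cPtr, size).
        }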

-- Marco
