On Wed, 30 Apr 2014 08:33:25 -0700,
Andrei Alexandrescu <seewebsiteforem...@erdani.org> wrote:

> I'm mulling over a couple of design considerations for allocators, and 
> was thinking of the following restriction:
> 
> 1. No out-of-bounds tricks and no pointer arithmetic. Consider:
> 
> int[] a = new int[1000];
> int* p = a[500 .. $].ptr;
> a = a[250 .. 750];
> 
> Subsequently the GC should be within its rights to deallocate any memory 
> within the first and last 250 integers allocated, even though in theory 
> the user may get to them by using pointer arithmetic.

I see that you are trying to account for allocator designs
that could reuse these memory fragments.
If this is for @safe code, then maybe some memory could be
released, but you'd have to statically verify that interior
pointers don't escape into unsafe code. That makes me wonder
whether any memory would be released if I wrote:

        import core.memory : GC;

        size_t length = 100;
        int* p = (new int[](length)).ptr;
        GC.collect();        // only the raw pointer refers to the
                             // array now - may it be collected?
        p[length - 1] = 42;  // ...or must this still be valid?

So it is difficult to give a good answer. I'd say no until it
is clear how it would work outside of @safe.
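
To make that concrete, here is a minimal sketch in plain @system
code (GC.collect is the real core.memory API; the scenario itself
is just an illustration of your example above):

        import core.memory : GC;

        void main() @system
        {
            int[] a = new int[1000];
            a = a[250 .. 750];  // slice off both ends
            GC.collect();       // may a[0 .. 250) be freed now?
            int* p = a.ptr;
            p[-1] = 42;         // pointer arithmetic back into the
                                // supposedly unreachable front part
        }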

> 6. The point above brings to mind more radical possibilities, such as 
> making all arrays reference-counted and allowing compulsive deallocation 
> when the reference counter goes down to zero. That would rule out things 
> like escaping pointers to data inside arrays, which is quite extreme.

Would that affect all arrays, only arrays containing structs,
or only arrays containing structs with dtors?

        printf("hello\n".ptr);

should still work after all.
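
A small sketch of why that case is special, assuming such
reference counting existed: the literal is static, zero-terminated
data, so there is no heap allocation whose count could ever drop
to zero.

        import core.stdc.stdio : printf;

        void main()
        {
            // A string literal lives in static data; no allocation
            // happens, so there is nothing to count or to free.
            immutable(char)* p = "hello\n".ptr;
            printf(p);
        }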

-- 
Marco
