I don't think you understand the purpose of the cache. It does not store 
arrays; it stores BlkInfos. This is information from the GC about where a 
memory block starts, its length, and what flags it was created with. Appending 
in place to an array does not change the cached BlkInfo.
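For illustration, here is a rough C sketch of the idea. The struct layout, cache size, and function names are hypothetical, not druntime's actual code; the point is just that the cache maps interior pointers to block metadata, so in-place changes to an array's contents or used length don't touch it:

```c
#include <stddef.h>

/* Hypothetical sketch of a BlkInfo-like record -- not druntime's layout. */
typedef struct {
    void    *base;  /* where the GC memory block starts        */
    size_t   size;  /* total length of the block               */
    unsigned attr;  /* flags the block was created with        */
} BlkInfo;

#define CACHE_SLOTS 8
static BlkInfo cache[CACHE_SLOTS];
static int     next_slot;

/* Find the cached block info covering pointer p; NULL on a miss. */
static BlkInfo *cache_lookup(void *p)
{
    for (int i = 0; i < CACHE_SLOTS; ++i) {
        char *b = (char *)cache[i].base;
        if (b && (char *)p >= b && (char *)p < b + cache[i].size)
            return &cache[i];
    }
    return NULL;
}

/* Remember a block's info, evicting round-robin. */
static void cache_insert(BlkInfo bi)
{
    cache[next_slot] = bi;
    next_slot = (next_slot + 1) % CACHE_SLOTS;
}
```

Appending in place only writes into the block's unused tail; the cached (base, size, attr) triple stays valid, which is why no cache update is needed.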

First, setting the length to be smaller already does what you propose, except 
it does not look anything up in the cache.

Second, slicing is a far more common operation than setting the length, and 
you would be turning this common operation, which does two reads and two 
writes, into a function call with a cache search. Any gain you think you get 
from removing the block from the cache would be overshadowed by the bloating 
of the slice operation itself.
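To make the cost comparison concrete, here is a rough C sketch of what a slice boils down to (the struct and names are hypothetical, not actual compiler output):

```c
#include <stddef.h>

/* A D slice is conceptually a (length, pointer) pair. */
typedef struct {
    size_t length;
    int   *ptr;
} IntSlice;

/* arr[a .. b]: read the source pointer and offsets, write the new
 * pair -- two reads and two writes, no function call required. */
static IntSlice slice(IntSlice arr, size_t a, size_t b)
{
    IntSlice s;
    s.length = b - a;        /* write #1 */
    s.ptr    = arr.ptr + a;  /* write #2 */
    return s;
}
```

Adding a cache search to every `arr[a .. b]` would replace this handful of instructions with a function call plus a lookup, on every slice.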

Maybe I'm not understanding your request properly.

-Steve



----- Original Message ----
> From: Andrei Alexandrescu <[email protected]>
> 
> Right now array append is reasonably fast due to Steve's great work. 
> Basically the append operation caches the capacities of the last arrays 
> that were appended to, which makes the capacity query fast most of the 
> time. I was thinking of a possible change. How about having the slice 
> operator arr[a .. b] remove the array from the cache? That way we can 
> handle arr.length = n differently:
> 
> (a) if n > arr.length, resize appropriately
> (b) if n < arr.length AND the array is in the cache, keep the array in 
> the cache.
> 
> The change is at (b). If the array is in the cache and its length is 
> made smaller, then we can be sure the capacity will stay correct after 
> the resizing. This is because we know for sure there will be no 
> stomping - if stomping were possible, the array would not be in the 
> cache.
> 
> Could this work? And if it does, would it be a good change to make?
> 
> Andrei


_______________________________________________
phobos mailing list
[email protected]
http://lists.puremagic.com/mailman/listinfo/phobos
