dsimcha:

> I personally find that I almost never use static arrays, either on the
> stack or inside heap-allocated objects because the fact that their size is
> fixed at compile time is just too restrictive. About my only use for them
> is to store compile-time constants in the static data segment.

Fixed-size arrays can be quite useful, so it's worth learning to use them more 
often. Their maximum size is fixed, but you can use just a part of them, so 
they can be flexible enough. Stack allocations are also usually much faster.

For example, in my dlibs you can find an edit distance function. It needs a 
temporary internal buffer: if the input strings are small enough, it uses a 
fixed-size stack-allocated array, otherwise it calls another variant of the 
same function that allocates the buffer from the heap. This keeps the function 
flexible, and (I have timed it) makes it noticeably faster for the common case 
of small input strings.
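
To give the idea, here is a simplified sketch of that pattern (it's not the 
actual dlibs code; the function name and the buffer size are arbitrary):

import std.algorithm: min;

/// One-row Levenshtein distance (over char code units, to keep it short).
/// Uses a fixed stack buffer for the common case, the heap otherwise.
size_t levenshtein(const(char)[] a, const(char)[] b) {
    enum STACK_LEN = 128;
    size_t[STACK_LEN] stackBuf = void;     // fixed size, lives on the stack
    size_t[] row;
    if (b.length + 1 <= STACK_LEN)
        row = stackBuf[0 .. b.length + 1]; // common case: no heap allocation
    else
        row = new size_t[b.length + 1];    // rare case: heap allocation

    foreach (j, ref x; row)
        x = j;
    foreach (i, ca; a) {
        size_t prev = row[0];
        row[0] = i + 1;
        foreach (j, cb; b) {
            immutable cur = row[j + 1];
            row[j + 1] = (ca == cb) ? prev
                         : 1 + min(prev, row[j], row[j + 1]);
            prev = cur;
        }
    }
    return row[b.length];
}

unittest {
    assert(levenshtein("kitten", "sitting") == 3);
}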

Here you can see another example of mine:
http://www.fantascienza.net/leonardo/js/wirun2.c
It's a Wireworld simulator, and to run it needs a matrix. Indexing into a 
statically allocated matrix can be faster because the sizes are known at 
compile time, so a good compiler can locate an item without a costly 
multiplication, using a cheaper sequence of a few additions and shifts. Here 
the matrix sizes are powers of two, so to find an item the compiler just uses 
a shift and an addition, which is much faster. If you replace the static array 
in this wirun program (which can be translated to very similar C-like D code 
to be compiled with LDC) with a dynamic one, you will see the program slow 
down; I have timed it. With a dynamically allocated matrix you pay more (there 
are ways to get just a shift with a dynamic matrix too, and I have done that 
as well, but it's less intuitive and less commonly done).
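
To show what I mean, here is a minimal sketch (the sizes and names are just 
illustrative, they are not the actual wirun2 code):

enum NROWS = 256;
enum NCOLS = 256;                 // power of two, known at compile time

ubyte[NROWS * NCOLS] grid;        // statically sized matrix, 1D layout

ubyte cellStatic(size_t r, size_t c) {
    // NCOLS is a compile-time power of two, so r * NCOLS can be compiled
    // down to a shift (r << 8) plus an addition, with no multiplication.
    return grid[r * NCOLS + c];
}

ubyte cellDynamic(ubyte[] m, size_t ncols, size_t r, size_t c) {
    // ncols is only known at run time, so a real multiplication is needed.
    return m[r * ncols + c];
}

In the static version the compiler sees the row length as a constant power of 
two and can strength-reduce the multiplication; in the dynamic version it 
cannot.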


> Why not dynamic arrays?  Wouldn't it make more sense to do:
> 
> class MemoryPool {
>     // other stuff
>     uint[] memory;
> 
>     this() {
>         memory = new uint[someHugeNumber];
>     }
> }
> 
> This would have negligible additional overhead and would allow you to
> change the size of the memory pool at runtime.

This is a bare-bones implementation, reduced from the "extra" module of my dlibs:

struct MemoryPool(T, int MAX_BLOCK_BYTES = 1 << 14) {
    struct Block {
        T[MAX_BLOCK_BYTES / T.sizeof] items;
    }

    static Block*[] blocks;
    static T* nextFree, lastFree;

    static T* newItem() {
        // Current block exhausted (or no block yet): allocate a new one.
        if (nextFree >= lastFree) {
            blocks.length = blocks.length + 1;
            blocks[$-1] = new Block;

            nextFree = blocks[$-1].items.ptr;
            lastFree = nextFree + Block.items.length;
        }

        // Bump-allocate one item from the current block.
        return nextFree++;
    }
}

(I have removed many static asserts, some asserts, the freeAll method, 
comments, ddocs, unittests, etc.)
Blocks are allocated from the GC heap. A less safe variant of this struct 
allocates memory from the C heap (and it optionally tests at compile time 
whether T contains reference types).
As you can see there's a single pool for each type; this can be a disadvantage 
if you want to keep more than one pool for the same type.
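
For example, a possible way to use it (the Node type here is just made up to 
show the API):

struct Node {
    int value;
    Node* next;
}

alias MemoryPool!(Node) NodePool;   // all Node allocations share one pool

void example() {
    Node* a = NodePool.newItem();   // bump-allocated from the current block
    Node* b = NodePool.newItem();
    a.value = 1;
    a.next = b;
}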

Bye,
bearophile
