On Sunday, 21 July 2013 at 09:16:47 UTC, Jacob Carlborg wrote:
> On 2013-07-21 08:45, Namespace wrote:
>> But D isn't like Ada. It's more like C++, where heap allocations are
>> often used too.
>> It would be really cool if we had allocators already.
>> Something like:
>> ----
>> with (AllocatorX) { // will use malloc and free instead of calling the GC
>>     float[] arr;
>>     arr ~= 42;
>> }
>> ----
>>
>> And I still don't know what a 'TLS scratch pad buffer' is.

> Perhaps:
>
> float[4000] scratchPadBuffer;
>
> void foo ()
> {
>     // use scratchPadBuffer here
> }
>
> I guess he just refers to some temporary data you need during the
> execution of a function, and at the end of the function you don't care
> about it anymore.
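
Ah, so something like this? Just a rough sketch to check my understanding (intToString is a made-up helper):

----
import core.stdc.stdio : snprintf;

// TLS by default: every thread gets its own copy of this buffer.
char[256] scratchPadBuffer;

// Formats value into the scratch pad and returns a slice of it.
// No allocation; the result is only valid until the next call.
char[] intToString(int value) {
    immutable len = snprintf(scratchPadBuffer.ptr,
                             scratchPadBuffer.length, "%d", value);
    return scratchPadBuffer[0 .. len];
}
----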

But then I'd mostly reserve far too much storage and occasionally a bit too little. It's not flexible. But maybe it works with a smaller granularity.
What about this:

----
import core.stdc.stdlib : malloc, free, realloc;
import core.stdc.string : memcpy;

struct Chunk(T, ushort maxSize = 1024) {
public:
    // One scratch buffer per thread and per instantiation (static data
    // is TLS in D). Note: all live instances in a thread share it.
    static T[maxSize] _chunk;

    T* ptr;
    size_t length;
    size_t capacity;

    this(size_t capacity) {
        this.capacity = capacity;

        // Small requests live in the static buffer, big ones on the heap.
        if (capacity <= maxSize)
            this.ptr = _chunk.ptr;
        else
            this.ptr = cast(T*) malloc(this.capacity * T.sizeof);
    }

    @disable
    this(this);

    ~this() {
        // Only heap-backed chunks (capacity > maxSize) were malloc'ed.
        if (this.ptr && this.capacity > maxSize)
            free(this.ptr);
    }

    void ensureAddable(size_t capacity) {
        if (capacity > this.capacity) {
            immutable wasStatic = this.capacity <= maxSize;
            this.capacity = capacity;

            if (this.capacity > maxSize) {
                if (wasStatic) {
                    // Growing out of the static buffer: realloc must not
                    // be called on a pointer that malloc didn't return.
                    this.ptr = cast(T*) malloc(this.capacity * T.sizeof);
                    memcpy(this.ptr, _chunk.ptr, this.length * T.sizeof);
                } else {
                    this.ptr = cast(T*) realloc(this.ptr,
                                                this.capacity * T.sizeof);
                }
            }
        }
    }

    void opOpAssign(string op : "~", U)(auto ref U item) {
        ensureAddable(this.length + 1);
        this.ptr[this.length++] = cast(T) item;
    }

    void opOpAssign(string op : "~", U, size_t n)(auto ref U[n] items) {
        ensureAddable(this.length + n);
        foreach (ref U item; items) {
            this.ptr[this.length++] = cast(T) item;
        }
    }
}
----
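
Used like this (again only a sketch):

----
void bar() {
    auto buf = Chunk!float(16); // small: backed by _chunk, no malloc
    buf ~= 42;
    buf ~= [1.0f, 2.0f, 3.0f];

    buf.ensureAddable(4096); // only now does it spill over to the heap
} // the destructor frees the heap allocation, if any
----

One caveat: since _chunk is static, two live Chunk!float instances in the same thread would overwrite each other's data.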
