Vladimir Panteleev wrote:
On Sat, 06 Jun 2009 18:12:58 +0300, Jarrett Billingsley <jarrett.billings...@gmail.com> wrote:

On Sat, Jun 6, 2009 at 8:03 AM, Vladimir Panteleev <thecybersha...@gmail.com> wrote:
// Works for DMD1/Phobos, DMD1/Tango and DMD2/Phobos
version(Tango) import tango.io.Console;
else           import std.stdio;

struct S
{
       ubyte[40_000] data;
}

void main()
{
       S[] a;
       a ~= S();

        // QUESTION: How much memory will this program consume
        // upon reaching this point?
        version(Tango) Cin.copyln();
        else           readln();
}


There seems to be something wrong with the newCapacity function that
_d_arrayappendcT calls.  From an element size of 20000 (I halved it
just to make the allocation faster) and an array length of 1, it
somehow calculates the new size to be 266686600.  Hm.  That seems a
bit off.

It seems this line:

long mult = 100 + (1000L * size) / log2plus1(newcap);

is to blame.  I don't think large value types were taken into account
here.  The resulting multiplier is 1,333,433, which is hilariously
large.
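
To spell out the arithmetic, here is a standalone D2 sketch. It assumes, from the quoted line, that log2plus1(x) computes floor(log2(x)) + 1, and that newcap at this point is newlength * size = 20_000; the real runtime code may differ in details, but these assumptions reproduce the numbers above exactly:

import std.stdio;

int log2plus1(size_t x)
{
    // Number of bits needed to represent x, i.e. floor(log2(x)) + 1.
    int i = 0;
    do
    {
        i++;
        x >>= 1;
    } while (x != 0);
    return i;
}

void main()
{
    enum size   = 20_000;    // element size in bytes
    enum newcap = 1 * size;  // bytes needed for a single element
    long mult = 100 + (1000L * size) / log2plus1(newcap); // the quoted line
    writeln(mult);                                // prints 1333433
    writeln(cast(size_t)((newcap * mult) / 100)); // prints 266686600
}

With log2plus1(20_000) == 15, mult comes out to 100 + 20_000_000 / 15 = 1_333_433, and scaling the 20 KB capacity by mult/100 gives the 266_686_600 bytes (about 254 MB) observed above.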

Bull's-eye. I think Mr. Dave Fladebo deserves a math lesson or something, and Mr. Walter Bright should review patches that go into the language's runtime more carefully. :P

Can we discuss a suitable replacement? I suggested in #d that we look at how other platforms do it. For example, Java's Vector just doubles its capacity by default (http://java.sun.com/javase/6/docs/api/java/util/Vector.html).
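
For comparison, doubling is trivial to implement. A minimal sketch (the function name and the 16-byte seed are illustrative, not druntime's):

size_t newCapacityDoubling(size_t curcap, size_t needed)
{
    // Geometric growth: amortized O(1) appends, at most 2x overshoot.
    size_t cap = curcap ? curcap : 16;
    while (cap < needed)
        cap *= 2;
    return cap;
}

For the S above, the first append would get 32_768 bytes instead of 266_686_600.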


In my own array classes I'm using Python's rule: size += max(size >> 3, 8);
Using this, you won't end up wasting a lot of memory. Preallocating is always preferable anyway.
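
Rendered in D, that rule would look something like this (a sketch; whether the floor of 8 counts bytes or elements depends on the container):

size_t grow(size_t size)
{
    // Grow by ~12.5%, but by at least 8, so small arrays still
    // grow in useful steps.
    size_t step = size >> 3;
    return size + (step < 8 ? 8 : step);
}

The ~12.5% factor grows more slowly than doubling, so large arrays carry much less slack, at the cost of more frequent reallocations.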

L.
