http://d.puremagic.com/issues/show_bug.cgi?id=5623



--- Comment #9 from David Simcha <dsim...@yahoo.com> 2011-02-21 16:12:54 PST ---
Here's another benchmark.  This one is designed to resemble a reasonably
common scientific-computing / large-allocation-intensive use case with a
moderately large overall heap, rather than to highlight the specific
performance problem at hand.

import std.random, core.memory, std.datetime, std.stdio;

enum nIter = 1000;

void main() {
    auto ptrs = new void*[1024];

    auto sw = StopWatch(AutoStart.yes);

    // On each iteration, allocate 1024 large blocks with sizes uniformly
    // distributed between 1 KB and 128 KB.
    foreach(i; 0..nIter) {
        foreach(ref ptr; ptrs) {
            ptr = GC.malloc(uniform(1024, 128 * 1024 + 1), GC.BlkAttr.NO_SCAN);
        }
    }

    writefln("Done %s iter in %s milliseconds.", nIter, sw.peek.msecs);
}

With patch:

Done 1000 iter in 7410 milliseconds.

Without patch:

Done 1000 iter in 38737 milliseconds.

Memory usage is about the same (judging by Task Manager): about 88 MB with the
patch, about 92 MB without.

To verify that this patch doesn't have deleterious effects when allocating lots
of objects close to the border between "large" and "small", I also tried it
with uniform(1024, 8 * 1024 + 1) and got:
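
Concretely, the only change for this variant is the upper bound passed to
uniform() in the allocation loop (shown here as a sketch of the modified line;
everything else in the benchmark stays the same):

```d
// Sizes now uniformly distributed between 1 KB and 8 KB, straddling the
// boundary between the GC's small-object pools and "large" allocations.
ptr = GC.malloc(uniform(1024, 8 * 1024 + 1), GC.BlkAttr.NO_SCAN);
```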

With patch:

Done 1000 iter in 2122 milliseconds.

Without patch:

Done 1000 iter in 4153 milliseconds.

The relatively small difference here isn't surprising, as no huge allocations
are being done (max size is 8 KB).  Again, the difference in memory usage was
negligible (within epsilon of 12 MB for both).
