On Wed, Jul 19, 2006 at 10:06:10PM +0930, Glen Turner wrote:
> Can't say I'm shocked.  Think about the accounting overhead of what
> you're trying to do.  You're hitting some accounting data structure
> that doesn't scale above a few million entries.

I don't think that's what is happening here.  I did 2GiB worth of
160-byte mallocs and timed each one; the result looks like

http://www.gelato.unsw.edu.au/~ianw/malloc-test/malloc.png

The real outliers are probably context switches.  It certainly seems
to bunch up a bit past the 12 million mark; might be fun to find out why.

I'd suggest profiling the code, or using strace or something.  I agree
that you probably want to be allocating from a pool (but then again,
that's what malloc already does, and it probably does it better than
most people can write it :)
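If you do want to try a pool anyway, the usual fixed-size scheme is
very little code.  A minimal sketch (all names invented for
illustration): grab one big region up front, hand out 160-byte blocks
by bumping a cursor, and thread a free list through returned blocks.

```c
#include <stdlib.h>

#define BLOCK 160  /* fixed block size, matching the test above */

struct pool {
    char  *base;      /* one big upfront allocation */
    size_t next;      /* bump cursor, counted in blocks */
    size_t cap;       /* capacity, in blocks */
    void  *free_list; /* freed blocks, linked through themselves */
};

static int pool_init(struct pool *p, size_t nblocks)
{
    p->base = malloc(nblocks * BLOCK);
    p->next = 0;
    p->cap = nblocks;
    p->free_list = NULL;
    return p->base ? 0 : -1;
}

static void *pool_alloc(struct pool *p)
{
    if (p->free_list) {              /* reuse a freed block first */
        void *b = p->free_list;
        p->free_list = *(void **)b;
        return b;
    }
    if (p->next == p->cap)
        return NULL;                 /* pool exhausted */
    return p->base + BLOCK * p->next++;
}

static void pool_free(struct pool *p, void *b)
{
    *(void **)b = p->free_list;      /* push onto the free list */
    p->free_list = b;
}
```

Allocation is then a couple of loads and a pointer bump, but you lose
everything malloc gives you for free: arbitrary sizes, thread safety,
and returning memory to the OS.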

-i
_______________________________________________
coders mailing list
[email protected]
http://lists.slug.org.au/listinfo/coders