On Fri, 24 May 2002, Peter Gibbs wrote:
> My current work is based on counting the size of freed buffers - yes,
> this adds some overhead in free_unused_buffers, but it seems like it
> may be worth it - and only doing compaction runs when a predefined
> fraction of the pool size is available for freeing.
>
> I am also looking at implementing an inflation factor similar to
> Sean's suggestion above. Another option is to allow the block size
> used to expand the pool to grow, so if the pool is growing rapidly, we
> will allocate fewer, larger blocks;

This sounds like a much better idea -- mine was just a quick hack.  But
really I think the way to go would be a generational collector -- after
we've collected a pool once or twice (or after we've failed to recover
more than x% from it on the previous run), we put it off to the side and
compact (indeed, look at) it less often.  New objects are allocated from a
new pool, since they are less likely to live past the next collection than
ones that have already been around a while.  Maybe adapting the sizes of
new pool requests to the current allocation load makes sense, too.  But my
hunch is that generational collection will save us the most, with
chunk-size being relatively less important.
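
To make that concrete, here's a rough C sketch of the policy I have in
mind; pool_t, its fields, the constants, and should_compact() are all
made up for illustration and don't correspond to anything currently in
Parrot:

    #include <stddef.h>

    /* Hypothetical bookkeeping -- not actual Parrot structures. */
    typedef struct pool {
        size_t total;           /* bytes managed by this pool              */
        size_t freed_last_run;  /* bytes recovered on the last collection  */
        int    age;             /* collections this pool has been through  */
        int    dormant;         /* set once the pool is put off to the side */
        struct pool *next;
    } pool_t;

    #define MAX_YOUNG_AGE  2    /* collections before a pool is set aside    */
    #define MIN_RECOVERY   10   /* percent recovered, below which we give up */
    #define DORMANT_PERIOD 8    /* only revisit dormant pools every N runs   */

    /* Decide whether this pool is worth compacting on this run. */
    static int
    should_compact(pool_t *p, unsigned long run_count)
    {
        if (p->dormant)
            return (run_count % DORMANT_PERIOD) == 0;

        /* Set aside pools that are old, or that gave back almost nothing. */
        if (p->age >= MAX_YOUNG_AGE ||
            (p->age > 0 && p->freed_last_run * 100 < p->total * MIN_RECOVERY)) {
            p->dormant = 1;
            return 0;
        }
        return 1;
    }

The collector would bump p->age after each run that actually visits a
pool, and new objects would always be allocated out of the newest
non-dormant pool.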

> Sean's program is ideal as a benchmark for this (after I removed all the
> shl's before the pack's, otherwise I just get files full of nulls).

Darn x86 little-endian bigots...  Seriously, the fact that this is an
issue sounds like at least a small argument for byte-manipulation ops. We
could even go crazy and have vector primitives that operate on
byte-strings, e.g. "addv8 S1, S2, S3" would treat S2 and S3 as arrays of
8-bit integers.  More conservatively, endianness-independent pack
templates would be a good thing, e.g. "pack S1, 'SC', I1, I2" would put a
16-bit and an 8-bit value into the string S1.
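
Sketching the 'SC' case (pure illustration -- a fixed low-byte-first
order picked arbitrarily, and a raw buffer standing in for the real
STRING):

    #include <stdint.h>
    #include <stddef.h>

    /* "pack S1, 'SC', I1, I2": a 16-bit and an 8-bit value, written in
     * a byte order fixed by the template, not by the host. */
    static size_t
    pack_SC(unsigned char *buf, uint16_t i1, uint8_t i2)
    {
        buf[0] = (unsigned char)(i1 & 0xff);        /* low byte first... */
        buf[1] = (unsigned char)((i1 >> 8) & 0xff); /* ...then high byte */
        buf[2] = i2;
        return 3;                                   /* bytes written */
    }

The point is just that the template spells out the byte layout, so the
output is identical on any host.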

> I have also been playing with variants of Mike Lambert's proposal

I didn't find this on the web archive -- about when were the messages?

> As usual, all suggestions are most welcome.

In the "fun" department, you might enjoy taking a look at Attardi and
Flagella's CMM, ftp://ftp.di.unipi.it/pub/Papers/attardi/usenix94.ps.gz.
Their concern was to make it possible for many different kinds of
allocation policies to interact, so that the general allocator could
cooperate with specialized allocators that people sometimes use to improve
performance (e.g. gcc's obstacks, which are pools for short-lived
objects that are left uncollected, then freed en masse when gcc knows
it's done with them).  It's unfortunately (for Parrot) in C++, but I
thought it was a really cool idea.
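
For anyone who hasn't bumped into obstacks, the gist is roughly this
(a toy arena sketch, nothing like gcc's actual implementation):

    #include <stdlib.h>

    /* Bump-pointer allocation out of one chunk; no per-object free,
     * everything is released at once. */
    typedef struct {
        char  *base;
        size_t used;
        size_t size;
    } arena_t;

    static int
    arena_init(arena_t *a, size_t size)
    {
        a->base = malloc(size);
        a->used = 0;
        a->size = size;
        return a->base != NULL;
    }

    static void *
    arena_alloc(arena_t *a, size_t n)
    {
        void *p;
        if (a->used + n > a->size)
            return NULL;        /* a real obstack would grow a new chunk */
        p = a->base + a->used;
        a->used += n;
        return p;
    }

    static void
    arena_free_all(arena_t *a)  /* the en-masse free */
    {
        free(a->base);
        a->base = NULL;
        a->used = a->size = 0;
    }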

I've also included a zip.pasm that won't hurt you for having \0's in your file.

/s

Attachment: zip2.tgz
Description: Binary data
