My problem is essentially that freeing large numbers of small chunks
of memory can be very slow. I have run into this problem twice so far.

1) Shutting down Python can take several minutes if I have used large
dictionaries. My workaround here is to exit Python without freeing the
allocated memory (not really a good solution).

2) Freeing large hashtables in C. (No solution yet.)

For these hashtables I can fairly easily divide the data into groups
that could be deleted together. If I had all this data in one
predefined region of memory, then deleting a group would be very
fast. However, in order to keep memory consumption as low as possible
without sacrificing speed, I am using Judy arrays (see the Judy
project on SourceForge). That means I have no direct control over how
malloc is called.
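
To make concrete what I mean by a region, here is a minimal sketch in C
(all of the names are my own invention, and the alignment handling is
simplistic):

#include <stdlib.h>
#include <stddef.h>

/* A region: one large malloc'd block carved up by bumping a pointer.
 * Destroying the region frees every allocation made from it at once. */
struct region {
    char  *base;   /* start of the block      */
    size_t size;   /* total capacity in bytes */
    size_t used;   /* bytes handed out so far */
};

struct region *region_create(size_t size)
{
    struct region *r = malloc(sizeof(*r));
    if (r == NULL)
        return NULL;
    r->base = malloc(size);
    if (r->base == NULL) {
        free(r);
        return NULL;
    }
    r->size = size;
    r->used = 0;
    return r;
}

/* Hand out the next chunk; returns NULL when the region is full. */
void *region_alloc(struct region *r, size_t n)
{
    void *p;

    /* round up to pointer size to keep allocations aligned */
    n = (n + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
    if (r->used + n > r->size)
        return NULL;
    p = r->base + r->used;
    r->used += n;
    return p;
}

/* Two free() calls release every chunk in the region, no matter
 * how many region_alloc() calls were made. */
void region_destroy(struct region *r)
{
    free(r->base);
    free(r);
}

Deleting a group of data held in such a region is then one cheap
region_destroy() instead of thousands of individual free() calls.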

One solution would be to divide the memory into larger regions and to
tell malloc which region to use for the next few calls, and likewise
to free a whole region at once when all of its data is no longer
needed. But I don't know how to do this.
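
What I would like is roughly the following (again only a sketch, built
on the region code above; set_current_region, my_malloc and my_free are
hypothetical names, and whether Judy's allocations can actually be
redirected this way, e.g. by replacing its internal allocation wrappers
at link time, is exactly what I don't know):

#include <stdlib.h>

/* from the region sketch above */
struct region;
void *region_alloc(struct region *r, size_t n);

static struct region *current_region; /* region used by my_malloc() */

/* Tell the allocator which region the next allocations go into. */
void set_current_region(struct region *r)
{
    current_region = r;
}

/* Drop-in for malloc(): draw from the current region if one is set. */
void *my_malloc(size_t n)
{
    if (current_region != NULL)
        return region_alloc(current_region, n);
    return malloc(n); /* fall back to the system allocator */
}

/* Individual frees become no-ops; a whole group is released in one go
 * with region_destroy() on its region. (In this sketch, memory from
 * the malloc() fallback path is simply leaked.) */
void my_free(void *p)
{
    (void)p;
}

With something like this I could give each group of hashtable data its
own region and drop the whole group at once.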

Cyclone's regions seem to provide more or less what I need, but
Cyclone runs neither on FreeBSD -CURRENT nor on amd64.

Any suggestions on where to look or what to read are greatly appreciated.

- Till

