[Martin v. Löwis]
> One way (I think the only way) this could happen is if:
>
> - the objects being allocated are all smaller than 256 bytes
> - when allocating new objects, the requested size was different
>   from any other size previously deallocated.
>
> So if you first allocate 1,000,000 objects of size 200, and then
> release them, and then allocate 1,000,000 objects of size 208,
> the memory is not reused.
Nope, the memory is reused in this case. Each obmalloc "pool" P is devoted to a fixed size class only so long as at least one object allocated from P is still in use; once every object allocated from P has been released, P can be reassigned to any other size class. The comments in obmalloc.c are quite accurate. This particular case is covered here:

"""
empty == all the pool's blocks are currently available for allocation

On transition to empty, a pool is unlinked from its usedpools[] list,
and linked to the front of the (file static) singly-linked freepools
list, via its nextpool member. The prevpool member has no meaning in
this case. Empty pools have no inherent size class: the next time a
malloc finds an empty list in usedpools[], it takes the first pool off
of freepools. If the size class needed happens to be the same as the
size class the pool last had, some pool initialization can be skipped.
"""

Now if you end up allocating a million pools all devoted to 72-byte objects, and leave one object from each pool in use, then all those pools remain devoted to 72-byte objects. Wholly empty pools can be (and do get) reused freely, though.

> If the objects are all of the same size, or all larger than 256 bytes,
> this effect does not occur.

If they're larger than 256 bytes, then you see the reuse behavior of the system malloc/free instead, about which virtually nothing can be said that's true across all Python platforms.
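For anyone who wants to watch the reuse happen, here's a minimal sketch (not from the original thread; it needs CPython 3.3+ for sys._debugmallocstats(), and the size classes, pool sizes, and small-object threshold have all changed across releases -- the 256-byte cutoff above is 512 in modern CPython, so the payloads below are only illustrative):

    import sys

    def stats(label):
        # _debugmallocstats() dumps obmalloc's per-class pool counts to stderr
        print("=== %s ===" % label, file=sys.stderr)
        sys._debugmallocstats()

    # On a typical 64-bit build, bytes(n) costs about 33 + n bytes, so these
    # two bursts land in two different small-object size classes.
    objs = [bytes(170) for _ in range(1_000_000)]
    stats("after first burst")
    del objs                     # every pool from the burst goes wholly empty
    objs = [bytes(178) for _ in range(1_000_000)]
    stats("after second burst")  # the freed pools come back for the new class

The second dump should show the pools (and arenas) from the first burst serving the new size class, rather than fresh memory being grabbed from the OS.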
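And a sketch of the pathological case, where one survivor per pool pins every pool to its size class. The 56-blocks-per-pool figure assumes a 4 KiB pool and the 72-byte class (recent CPythons use bigger pools), and the slicing assumes consecutive allocations fill pools in order, so treat this as illustrative only:

    import sys

    # bytes(39) is 72 bytes on a typical 64-bit build (33-byte header plus
    # payload), i.e. the 72-byte size class mentioned above. A 4 KiB pool
    # then holds roughly 56 such blocks.
    objs = [bytes(39) for _ in range(1_000_000)]

    survivors = objs[::56]   # keep roughly one block alive in each pool
    del objs                 # frees the rest -- but no pool goes wholly empty

    # Every pool still holding a survivor stays devoted to the 72-byte
    # class, so none of that memory can serve any other size class.
    sys._debugmallocstats()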