While testing on a large Google binary, I noticed that libbacktrace allocates an inordinate amount of memory. The binary winds up with 377,944 entries in the unit_addrs vector. Each entry is 24 bytes, so that is 9,070,656 bytes, which is not too terrible. Unfortunately, at some point I had decided that when a libbacktrace vector is larger than a page, the code should allocate only one additional page at a time. This vector requires 2215 4096-byte pages, and growing it one page at a time allocates a total of something like (2215 * 2214) / 2 pages, which turns out to be nearly 1.5G. Allocating 1.5G to represent a vector of size 9M is not desirable.
It's true that when the vector grows, the old memory can be reused. But nothing in libbacktrace is ever going to reuse that much memory. Worse, a bug in the backtrace_vector_grow routine caused it to report the wrong size for the old vector being freed, so the memory had no chance of being reused anyhow.

This patch changes vector growth to double the number of pages requested each time, and fixes it to record the correct size of the old vector being freed. The patch also adds some code to simply munmap large blocks of allocated memory: in practice libbacktrace is unlikely ever to reuse a large block, so it's probably better to hand the memory back to the system than to hold onto it for no purpose.

Bootstrapped and tested on x86_64-unknown-linux-gnu. Committed to the 4.9 branch and mainline.

Ian

2014-05-08  Ian Lance Taylor  <i...@google.com>

	* mmap.c (backtrace_free): If freeing a large aligned block of
	memory, call munmap rather than holding onto it.
	(backtrace_vector_grow): When growing a vector, double the number
	of pages requested.  When releasing the old version of a grown
	vector, pass the correct size to backtrace_free.
Index: ChangeLog
===================================================================
--- ChangeLog	(revision 210248)
+++ ChangeLog	(working copy)
@@ -1,3 +1,11 @@
+2014-05-08  Ian Lance Taylor  <i...@google.com>
+
+	* mmap.c (backtrace_free): If freeing a large aligned block of
+	memory, call munmap rather than holding onto it.
+	(backtrace_vector_grow): When growing a vector, double the number
+	of pages requested.  When releasing the old version of a grown
+	vector, pass the correct size to backtrace_free.
+
 2014-03-07  Ian Lance Taylor  <i...@google.com>
 
 	* sort.c (backtrace_qsort): Use middle element as pivot.
Index: mmap.c
===================================================================
--- mmap.c	(revision 210248)
+++ mmap.c	(working copy)
@@ -164,6 +164,26 @@ backtrace_free (struct backtrace_state *
 {
   int locked;
 
+  /* If we are freeing a large aligned block, just release it back to
+     the system.  This case arises when growing a vector for a large
+     binary with lots of debug info.  Calling munmap here may cause us
+     to call mmap again if there is also a large shared library; we
+     just live with that.  */
+  if (size >= 16 * 4096)
+    {
+      size_t pagesize;
+
+      pagesize = getpagesize ();
+      if (((uintptr_t) addr & (pagesize - 1)) == 0
+	  && (size & (pagesize - 1)) == 0)
+	{
+	  /* If munmap fails for some reason, just add the block to
+	     the freelist.  */
+	  if (munmap (addr, size) == 0)
+	    return;
+	}
+    }
+
   /* If we can acquire the lock, add the new space to the free list.
      If we can't acquire the lock, just leak the memory.
     __sync_lock_test_and_set returns the old state of the lock, so we
@@ -209,14 +229,18 @@ backtrace_vector_grow (struct backtrace_
 	  alc = pagesize;
 	}
       else
-	alc = (alc + pagesize - 1) & ~ (pagesize - 1);
+	{
+	  alc *= 2;
+	  alc = (alc + pagesize - 1) & ~ (pagesize - 1);
+	}
 
       base = backtrace_alloc (state, alc, error_callback, data);
       if (base == NULL)
 	return NULL;
 
       if (vec->base != NULL)
 	{
 	  memcpy (base, vec->base, vec->size);
-	  backtrace_free (state, vec->base, vec->alc, error_callback, data);
+	  backtrace_free (state, vec->base, vec->size + vec->alc,
+			  error_callback, data);
 	}
       vec->base = base;
       vec->alc = alc - vec->size;