I'm really evil, so I changed the loop in compact_capture_page() to
steal the highest-order page it can.  This shouldn't _break_ anything,
but it does ensure that we'll be splitting the pages we find more
often, which reproduces the leak *MUCH* faster:

-               for (order = cc->order; order < MAX_ORDER; order++) {
+               for (order = MAX_ORDER - 1; order >= cc->order; order--)

I also instrumented the spot in capture_free_page() where I suspect the
leak is happening:

        if (alloc_order != order) {
                static int leaked_pages = 0;
                leaked_pages += 1<<order;
                leaked_pages -= 1<<alloc_order;
                printk("%s() alloc_order(%d) != order(%d) leaked %d\n",
                                __func__, alloc_order, order,
                                leaked_pages);
                expand(zone, page, alloc_order, order,
                        &zone->free_area[order], migratetype);
        }

I add up all the fields in buddyinfo to figure out how much _should_ be
in the allocator and then compare it to MemFree to get a guess at how
much is leaked.  That number correlates _really_ well with the
"leaked_pages" variable above.  That pretty much seals it for me.

I'll run a stress test overnight to see if it pops up again.  The patch
I'm running is attached.  I'll send a properly changelogged one tomorrow
if it works.


---

 linux-2.6.git-dave/mm/page_alloc.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff -puN mm/page_alloc.c~leak-fix-20121120-1 mm/page_alloc.c
--- linux-2.6.git/mm/page_alloc.c~leak-fix-20121120-1	2012-11-20 19:44:09.588966346 -0500
+++ linux-2.6.git-dave/mm/page_alloc.c	2012-11-20 19:44:21.993057915 -0500
@@ -1405,7 +1405,7 @@ int capture_free_page(struct page *page,
 
 	mt = get_pageblock_migratetype(page);
 	if (unlikely(mt != MIGRATE_ISOLATE))
-		__mod_zone_freepage_state(zone, -(1UL << order), mt);
+		__mod_zone_freepage_state(zone, -(1UL << alloc_order), mt);
 
 	if (alloc_order != order)
 		expand(zone, page, alloc_order, order,
_