This is a note to let you know that I've just added the patch titled
[stable] [PATCH 3/3] mm: compaction: abort compaction if too many pages are
isolated and caller is asynchronous V2
to the 2.6.39-stable tree which can be found at:
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary
The filename of the patch is:
mm-compaction-abort-compaction-if-too-many-pages-are-isolated-and-caller-is-asynchronous-v2.patch
and it can be found in the queue-2.6.39 subdirectory.
If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.
>From [email protected] Mon Aug 1 11:42:31 2011
From: Mel Gorman <[email protected]>
Date: Tue, 19 Jul 2011 10:15:51 +0100
Subject: [stable] [PATCH 3/3] mm: compaction: abort compaction if too many
pages are isolated and caller is asynchronous V2
To: [email protected]
Cc: Andrea Arcangeli <[email protected]>, Andrew Morton
<[email protected]>, Thomas Sattler <[email protected]>, Mel Gorman
<[email protected]>
Message-ID: <[email protected]>
From: Mel Gorman <[email protected]>
commit f9e35b3b41f47c4e17d8132edbcab305a6aaa4b0 upstream.
Asynchronous compaction is used when promoting to huge pages. This is all
very nice, but if a number of processes are compacting memory at the same
time, a large number of pages can be isolated. An "asynchronous" process
can stall for long periods of time as a result, with one user reporting
that firefox could stall for tens of seconds. This patch aborts
asynchronous compaction if too many pages are isolated, as it is better to
fail a hugepage promotion than to stall a process.
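For readers who want the decision in isolation, below is a minimal
userspace sketch of the logic this patch adds. Only the
isolated > (inactive + active) / 2 check and the "abort when the caller is
not synchronous" rule come from the patch; the isolation_decision() helper,
its signature and the page counts are illustrative, and the synchronous
path is simplified (in the kernel it waits in congestion_wait() and retries,
aborting only on a fatal signal).

#include <stdbool.h>
#include <stdio.h>

/* Outcome of an isolation attempt, mirroring the patch's isolate_migrate_t. */
typedef enum {
	ISOLATE_ABORT,   /* Abort compaction now */
	ISOLATE_NONE,    /* No pages isolated, continue scanning */
	ISOLATE_SUCCESS, /* Pages isolated, migrate */
} isolate_migrate_t;

/*
 * Heuristic from too_many_isolated(): too many pages are considered
 * isolated once they outnumber half of the pages still on the LRU lists.
 */
static bool too_many_isolated(unsigned long isolated,
			      unsigned long inactive, unsigned long active)
{
	return isolated > (inactive + active) / 2;
}

/*
 * Decision the patch introduces: an asynchronous caller aborts instead of
 * waiting for isolated pages to drain. (Simplified: the synchronous caller
 * would wait and retry rather than proceed immediately.)
 */
static isolate_migrate_t isolation_decision(bool sync, unsigned long isolated,
					    unsigned long inactive,
					    unsigned long active)
{
	if (too_many_isolated(isolated, inactive, active) && !sync)
		return ISOLATE_ABORT;
	return ISOLATE_SUCCESS;
}

int main(void)
{
	/* Hypothetical zone counts: 600 isolated pages vs 1000 LRU pages. */
	printf("async caller: %s\n",
	       isolation_decision(false, 600, 700, 300) == ISOLATE_ABORT
	       ? "abort (fail the hugepage promotion)" : "continue");
	printf("sync caller:  %s\n",
	       isolation_decision(true, 600, 700, 300) == ISOLATE_ABORT
	       ? "abort" : "continue (may wait for isolated pages to drain)");
	return 0;
}

Run as-is, the sketch reports that the asynchronous caller aborts while the
synchronous caller keeps going, which is exactly the asymmetry the hunk in
isolate_migratepages() below introduces.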
[[email protected]: return COMPACT_PARTIAL for abort]
Reported-and-tested-by: Ury Stankevich <[email protected]>
Signed-off-by: Mel Gorman <[email protected]>
Reviewed-by: Minchan Kim <[email protected]>
Reviewed-by: Michal Hocko <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
mm/compaction.c | 29 ++++++++++++++++++++++++-----
1 file changed, 24 insertions(+), 5 deletions(-)
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -251,11 +251,18 @@ static bool too_many_isolated(struct zon
return isolated > (inactive + active) / 2;
}
+/* possible outcome of isolate_migratepages */
+typedef enum {
+ ISOLATE_ABORT, /* Abort compaction now */
+ ISOLATE_NONE, /* No pages isolated, continue scanning */
+ ISOLATE_SUCCESS, /* Pages isolated, migrate */
+} isolate_migrate_t;
+
/*
* Isolate all pages that can be migrated from the block pointed to by
* the migrate scanner within compact_control.
*/
-static unsigned long isolate_migratepages(struct zone *zone,
+static isolate_migrate_t isolate_migratepages(struct zone *zone,
struct compact_control *cc)
{
unsigned long low_pfn, end_pfn;
@@ -272,7 +279,7 @@ static unsigned long isolate_migratepage
/* Do not cross the free scanner or scan within a memory hole */
if (end_pfn > cc->free_pfn || !pfn_valid(low_pfn)) {
cc->migrate_pfn = end_pfn;
- return 0;
+ return ISOLATE_NONE;
}
/*
@@ -281,10 +288,14 @@ static unsigned long isolate_migratepage
* delay for some time until fewer pages are isolated
*/
while (unlikely(too_many_isolated(zone))) {
+ /* async migration should just abort */
+ if (!cc->sync)
+ return ISOLATE_ABORT;
+
congestion_wait(BLK_RW_ASYNC, HZ/10);
if (fatal_signal_pending(current))
- return 0;
+ return ISOLATE_ABORT;
}
/* Time to isolate some pages for migration */
@@ -369,7 +380,7 @@ static unsigned long isolate_migratepage
trace_mm_compaction_isolate_migratepages(nr_scanned, nr_isolated);
- return cc->nr_migratepages;
+ return ISOLATE_SUCCESS;
}
/*
@@ -533,8 +544,15 @@ static int compact_zone(struct zone *zon
unsigned long nr_migrate, nr_remaining;
int err;
- if (!isolate_migratepages(zone, cc))
+ switch (isolate_migratepages(zone, cc)) {
+ case ISOLATE_ABORT:
+ ret = COMPACT_PARTIAL;
+ goto out;
+ case ISOLATE_NONE:
continue;
+ case ISOLATE_SUCCESS:
+ ;
+ }
nr_migrate = cc->nr_migratepages;
err = migrate_pages(&cc->migratepages, compaction_alloc,
@@ -558,6 +576,7 @@ static int compact_zone(struct zone *zon
}
+out:
/* Release free pages and check accounting */
cc->nr_freepages -= release_freepages(&cc->freepages);
VM_BUG_ON(cc->nr_freepages != 0);
Patches currently in stable-queue which might be from [email protected] are
queue-2.6.39/mm-vmscan-evaluate-the-watermarks-against-the-correct.patch
queue-2.6.39/mm-compaction-ensure-that-the-compaction-free-scanner-does-not-move-to-the-next-zone.patch
queue-2.6.39/mm-vmscan-do-not-apply-pressure-to-slab-if-we-are-not-applying-pressure-to-zone.patch
queue-2.6.39/mm-vmscan-do-not-use-page_count-without-a-page-pin.patch
queue-2.6.39/vmscan-fix-a-livelock-in-kswapd.patch
queue-2.6.39/mm-vmscan-correct-check-for-kswapd-sleeping-in.patch
queue-2.6.39/mm-compaction-abort-compaction-if-too-many-pages-are-isolated-and-caller-is-asynchronous-v2.patch
queue-2.6.39/mm-vmscan-only-read-new_classzone_idx-from-pgdat-when-reclaiming-successfully.patch
_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable