2.6.32-longterm review patch.  If anyone has any objections, please let us know.

------------------

From: "Rafael J. Wysocki" <r...@sisk.pl>

commit 4d4cf23cdde2f8f9324f5684a7f349e182039529 upstream.

There is a bug in free_unnecessary_pages() that causes it to
attempt to free too many pages in some cases, which triggers the
BUG_ON() in memory_bm_clear_bit() for copy_bm.  Namely, if
count_data_pages() is initially greater than alloc_normal, we get
to_free_normal equal to 0 and "save" greater than 0.  In that case,
if the sum of "save" and count_highmem_pages() is greater than
alloc_highmem, we subtract a positive number from to_free_normal.
Hence, since to_free_normal was 0 before the subtraction and is
an unsigned int, the result is converted to a huge positive number
that is used as the number of pages to free.
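
For illustration only (the values below are hypothetical, not taken
from the patch), a minimal standalone C sketch of the same unsigned
wrap-around:

    #include <stdio.h>

    int main(void)
    {
        /* Stand-ins for the snapshot bookkeeping; values are made up. */
        unsigned int to_free_normal = 0;    /* nothing spare to free */
        unsigned int save = 100;            /* pages we must keep */
        unsigned int alloc_highmem = 40;    /* allocated highmem pages */

        /* Old code: save - alloc_highmem = 60, and 0 - 60 wraps around
         * to 4294967236 in a 32-bit unsigned int, so the caller would
         * then try to free billions of pages and trip the BUG_ON() in
         * memory_bm_clear_bit().
         */
        to_free_normal -= save - alloc_highmem;
        printf("%u\n", to_free_normal);     /* prints 4294967236 */
        return 0;
    }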

Fix this bug by checking whether to_free_normal is actually greater
than the number we're going to subtract from it, and by setting it
to 0 otherwise.

Signed-off-by: Rafael J. Wysocki <r...@sisk.pl>
Reported-and-tested-by: Matthew Garrett <m...@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gre...@suse.de>

---
 kernel/power/snapshot.c |    6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

--- a/kernel/power/snapshot.c
+++ b/kernel/power/snapshot.c
@@ -1193,7 +1193,11 @@ static void free_unnecessary_pages(void)
                to_free_highmem = alloc_highmem - save;
        } else {
                to_free_highmem = 0;
-               to_free_normal -= save - alloc_highmem;
+               save -= alloc_highmem;
+               if (to_free_normal > save)
+                       to_free_normal -= save;
+               else
+                       to_free_normal = 0;
        }
 
        memory_bm_position_reset(&copy_bm);
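
With this check in place the sketch above no longer wraps: for the
same hypothetical values (to_free_normal = 0, save = 100,
alloc_highmem = 40), the fixed branch

    save -= alloc_highmem;          /* save becomes 60 */
    if (to_free_normal > save)
        to_free_normal -= save;
    else
        to_free_normal = 0;         /* clamp instead of wrapping */

leaves to_free_normal at 0, so no extra pages are freed.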

