__GFP_NOFAIL specifies that the page allocator cannot fail to return
memory.  Callers rely on this and may not even check the return value
for NULL.
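
To illustrate (a hypothetical caller, not taken from the tree): code
like this dereferences the result unconditionally, so a NULL return
would oops.

	/*
	 * Hypothetical __GFP_NOFAIL user: no NULL check, on the
	 * assumption that the allocator loops until it succeeds.
	 */
	struct page *page = alloc_pages(GFP_KERNEL | __GFP_NOFAIL, 0);
	void *addr = page_address(page);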

It turns out that GFP_NOWAIT | __GFP_NOFAIL and GFP_ATOMIC |
__GFP_NOFAIL allocations can actually return NULL.  More
interestingly, any __GFP_NOFAIL allocation made by a process that is
doing direct reclaim, and thus has PF_MEMALLOC set, may also return
NULL.
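
For reference, a simplified sketch of the pre-patch slowpath checks
that cause this (following mm/page_alloc.c, where wait is
gfp_mask & __GFP_WAIT; not the literal code):

	const gfp_t wait = gfp_mask & __GFP_WAIT;

	/* GFP_NOWAIT and GFP_ATOMIC do not set __GFP_WAIT... */
	if (!wait)
		goto nopage;	/* NULL, despite __GFP_NOFAIL */

	/* ...and direct reclaimers run with PF_MEMALLOC set */
	if (current->flags & PF_MEMALLOC)
		goto nopage;	/* NULL, despite __GFP_NOFAIL */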

This patch fixes the page allocator so that it never returns NULL for
__GFP_NOFAIL, as callers expect.  It turns out that no code currently
does anything as crazy as GFP_ATOMIC | __GFP_NOFAIL, so this is a
correctness fix rather than a fix for an observed bug.

Signed-off-by: David Rientjes <[email protected]>
---
 mm/page_alloc.c | 24 +++++++++++++-----------
 1 file changed, 13 insertions(+), 11 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2535,17 +2535,19 @@ rebalance:
                }
        }
 
-       /* Atomic allocations - we can't balance anything */
-       if (!wait)
-               goto nopage;
-
-       /* Avoid recursion of direct reclaim */
-       if (current->flags & PF_MEMALLOC)
-               goto nopage;
-
-       /* Avoid allocations with no watermarks from looping endlessly */
-       if (test_thread_flag(TIF_MEMDIE) && !(gfp_mask & __GFP_NOFAIL))
-               goto nopage;
+       if (likely(!(gfp_mask & __GFP_NOFAIL))) {
+               /* Atomic allocations - we can't balance anything */
+               if (!wait)
+                       goto nopage;
+
+               /* Avoid recursion of direct reclaim */
+               if (current->flags & PF_MEMALLOC)
+                       goto nopage;
+
+               /* Avoid allocations with no watermarks from looping forever */
+               if (test_thread_flag(TIF_MEMDIE))
+                       goto nopage;
+       }
 
        /*
         * Try direct compaction. The first pass is asynchronous. Subsequent
--