Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Thu 08-12-16 21:53:44, Tetsuo Handa wrote:
> > > If we could agree
> > > with calling __alloc_pages_nowmark() before out_of_memory() if __GFP_NOFAIL
> > > is given, we can avoid locking up while minimizing possibility of invoking
> > > the OOM killer...
> >
> > I do not understand. We do __alloc_pages_nowmark even when oom is called
> > for GFP_NOFAIL.
> 
> Where is that? I can find __alloc_pages_nowmark() after out_of_memory()
> if __GFP_NOFAIL is given, but I can't find __alloc_pages_nowmark() before
> out_of_memory() if __GFP_NOFAIL is given.
> 
> What I mean is below patch folded into
> "[PATCH 1/2] mm: consolidate GFP_NOFAIL checks in the allocator slowpath".
> 
Oops, I wrongly implemented the "__alloc_pages_nowmark() before out_of_memory()
if __GFP_NOFAIL is given" case. An updated version is shown below.
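
For reference, here is a condensed sketch of what the tail of
__alloc_pages_may_oom() should look like with this folded in (illustrative
only; the non-__GFP_NOFAIL bailout checks from PATCH 1/2 are elided). The
actual diff follows the sketch.

        if (!(gfp_mask & __GFP_NOFAIL)) {
                /* bailout checks from PATCH 1/2 (__GFP_THISNODE etc.) elided */
                ...
        } else {
                /*
                 * Help non-failing allocations by giving them access to
                 * memory reserves before resorting to the OOM killer.
                 */
                page = get_page_from_freelist(gfp_mask, order,
                                              ALLOC_NO_WATERMARKS|ALLOC_CPUSET, ac);
                /* fall back to ignoring cpuset restriction if our nodes are depleted */
                if (!page)
                        page = get_page_from_freelist(gfp_mask, order,
                                                      ALLOC_NO_WATERMARKS, ac);
                if (page)
                        goto out;
        }

        /* Exhausted what can be done so it's blamo time */
        if (out_of_memory(&oc))
                *did_some_progress = 1;
out:
        mutex_unlock(&oom_lock);
        return page;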

--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3116,23 +3116,27 @@ void warn_alloc(gfp_t gfp_mask, const char *fmt, ...)
                /* The OOM killer may not free memory on a specific node */
                if (gfp_mask & __GFP_THISNODE)
                        goto out;
+       } else {
+               /*
+                * Help non-failing allocations by giving them access to memory
+                * reserves
+                */
+               page = get_page_from_freelist(gfp_mask, order,
+                                             ALLOC_NO_WATERMARKS|ALLOC_CPUSET, ac);
+               /*
+                * fallback to ignore cpuset restriction if our nodes
+                * are depleted
+                */
+               if (!page)
+                       page = get_page_from_freelist(gfp_mask, order,
+                                                     ALLOC_NO_WATERMARKS, ac);
+               if (page)
+                       goto out;
        }
+
        /* Exhausted what can be done so it's blamo time */
-       if (out_of_memory(&oc) || WARN_ON_ONCE(gfp_mask & __GFP_NOFAIL)) {
+       if (out_of_memory(&oc))
                *did_some_progress = 1;
-
-               if (gfp_mask & __GFP_NOFAIL) {
-                       page = get_page_from_freelist(gfp_mask, order,
-                                       ALLOC_NO_WATERMARKS|ALLOC_CPUSET, ac);
-                       /*
-                        * fallback to ignore cpuset restriction if our nodes
-                        * are depleted
-                        */
-                       if (!page)
-                               page = get_page_from_freelist(gfp_mask, order,
-                                       ALLOC_NO_WATERMARKS, ac);
-               }
-       }
 out:
        mutex_unlock(&oom_lock);
        return page;
@@ -3738,6 +3742,11 @@ bool gfp_pfmemalloc_allowed(gfp_t gfp_mask)
                 */
                WARN_ON_ONCE(order > PAGE_ALLOC_COSTLY_ORDER);
 
+               /* Try memory reserves and then start killing things. */
+               page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
+               if (page)
+                       goto got_pg;
+
                cond_resched();
                goto retry;
        }

I'm calling __alloc_pages_may_oom() from the nopage: label because we can reach
it without having called __alloc_pages_may_oom(), since PATCH 1/2 is not meant
to stop enforcing the OOM killer for __GFP_NOFAIL allocations.
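
In other words, the intended nopage: path in __alloc_pages_slowpath() looks
roughly like the following (condensed sketch; the other __GFP_NOFAIL sanity
checks from PATCH 1/2 are elided):

nopage:
        if (gfp_mask & __GFP_NOFAIL) {
                /* other __GFP_NOFAIL checks/warnings from PATCH 1/2 elided */
                WARN_ON_ONCE(order > PAGE_ALLOC_COSTLY_ORDER);

                /*
                 * Try memory reserves and then start killing things, because
                 * this path can be reached without having gone through
                 * __alloc_pages_may_oom() above.
                 */
                page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
                if (page)
                        goto got_pg;

                cond_resched();
                goto retry;
        }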
