On Tue 08-03-16 10:52:15, Vlastimil Babka wrote:
> On 03/08/2016 10:46 AM, Michal Hocko wrote:
[...]
> >>> @@ -3294,6 +3289,18 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
> >>>                            did_some_progress > 0, no_progress_loops))
> >>>           goto retry;
> >>>  
> >>> + /*
> >>> +  * !costly allocations are really important and we have to make sure
> >>> +  * the compaction wasn't deferred or didn't bail out early due to locks
> >>> +  * contention before we go OOM.
> >>> +  */
> >>> + if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
> >>> +         if (compact_result <= COMPACT_CONTINUE)
> >>
> >> Same here.
> >> I was going to say that this didn't have an effect on Sergey's test, but
> >> turns out it did :)
> > 
> > This should work as expected because compact_result is unsigned long
> > and so the comparison is done in unsigned arithmetic. I can make
> > #define COMPACT_NONE            -1UL
> > 
> > to make the intention more obvious if you prefer, though.
> 
> Well, what wasn't obvious to me is that here (unlike in the
> test above) it was actually intended that COMPACT_NONE doesn't result in
> a retry. But it makes sense, otherwise we would retry endlessly if
> reclaim couldn't form a higher-order page, right.

Yeah, that was the whole point. An alternative would be to move the test
into should_compact_retry(order, compact_result, contended_compaction),
which would be CONFIG_COMPACTION specific, so we could get rid of
COMPACT_NONE altogether. Something like the following. We would lose the
always-initialized compact_result, but that matters only for order==0 and
we check for that. Even gcc doesn't complain.
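
For completeness, the reason the current check already excludes
COMPACT_NONE is plain unsigned arithmetic: -1 stored into an unsigned long
becomes ULONG_MAX, which can never be <= COMPACT_CONTINUE. A minimal
userspace sketch of just that comparison (not kernel code; the exact value
of COMPACT_CONTINUE is assumed here for illustration only):

/*
 * Userspace sketch of the unsigned comparison discussed above.
 * COMPACT_NONE mirrors the sentinel the patch below removes;
 * COMPACT_CONTINUE == 2 is assumed for illustration.
 */
#include <stdio.h>

#define COMPACT_NONE		-1UL	/* wraps to ULONG_MAX */
#define COMPACT_DEFERRED	0UL
#define COMPACT_CONTINUE	2UL	/* assumed value */

static int old_style_retry(unsigned long compact_result)
{
	/* ULONG_MAX is never <= COMPACT_CONTINUE, so COMPACT_NONE never retries */
	return compact_result <= COMPACT_CONTINUE;
}

int main(void)
{
	printf("COMPACT_NONE     -> retry? %d\n", old_style_retry(COMPACT_NONE));	/* 0 */
	printf("COMPACT_DEFERRED -> retry? %d\n", old_style_retry(COMPACT_DEFERRED));	/* 1 */
	return 0;
}

The patch below then drops the sentinel entirely and keys the retry
decision off order instead.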

A more important question is whether the criteria I have chosen are
reasonable and reasonably independent of the particular implementation
of compaction. I still cannot convince myself about the convergence
here. Is it possible that compaction would keep returning
compact_result <= COMPACT_CONTINUE while not making any progress at all?

Sure, we can see a case where somebody is stealing the compacted blocks,
but the same is true for order-0, where parallel mem eaters will
piggyback on the reclaimer and there is no upper boundary there either.

---
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index a4cec4a03f7d..4cd4ddf64cc7 100644
--- a/include/linux/compaction.h
+++ b/include/linux/compaction.h
@@ -1,8 +1,6 @@
 #ifndef _LINUX_COMPACTION_H
 #define _LINUX_COMPACTION_H
 
-/* compaction disabled */
-#define COMPACT_NONE           -1
 /* Return values for compact_zone() and try_to_compact_pages() */
 /* compaction didn't start as it was deferred due to past failures */
 #define COMPACT_DEFERRED       0
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f89e3cbfdf90..c5932a218fc6 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2823,10 +2823,8 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 {
        struct page *page;
 
-       if (!order) {
-               *compact_result = COMPACT_NONE;
+       if (!order)
                return NULL;
-       }
 
        current->flags |= PF_MEMALLOC;
        *compact_result = try_to_compact_pages(gfp_mask, order, alloc_flags, ac,
@@ -2864,6 +2862,25 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
 
        return NULL;
 }
+
+static inline bool
+should_compact_retry(unsigned int order, unsigned long compact_result,
+                    int contended_compaction)
+{
+       /*
+        * !costly allocations are really important and we have to make sure
+        * the compaction wasn't deferred or didn't bail out early due to locks
+        * contention before we go OOM.
+        */
+       if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
+               if (compact_result <= COMPACT_CONTINUE)
+                       return true;
+               if (contended_compaction > COMPACT_CONTENDED_NONE)
+                       return true;
+       }
+
+       return false;
+}
 #else
 static inline struct page *
 __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
@@ -2871,9 +2888,15 @@ __alloc_pages_direct_compact(gfp_t gfp_mask, unsigned int order,
                enum migrate_mode mode, int *contended_compaction,
                unsigned long *compact_result)
 {
-       *compact_result = COMPACT_NONE;
        return NULL;
 }
+
+static inline bool
+should_compact_retry(unsigned int order, unsigned long compact_result,
+                    int contended_compaction)
+{
+       return false;
+}
 #endif /* CONFIG_COMPACTION */
 
 /* Perform direct synchronous page reclaim */
@@ -3289,17 +3312,8 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
                                 did_some_progress > 0, no_progress_loops))
                goto retry;
 
-       /*
-        * !costly allocations are really important and we have to make sure
-        * the compaction wasn't deferred or didn't bail out early due to locks
-        * contention before we go OOM.
-        */
-       if (order && order <= PAGE_ALLOC_COSTLY_ORDER) {
-               if (compact_result <= COMPACT_CONTINUE)
-                       goto retry;
-               if (contended_compaction > COMPACT_CONTENDED_NONE)
-                       goto retry;
-       }
+       if (should_compact_retry(order, compact_result, contended_compaction))
+               goto retry;
 
        /* Reclaim has failed us, start killing things */
        page = __alloc_pages_may_oom(gfp_mask, order, ac, &did_some_progress);
-- 
Michal Hocko
SUSE Labs
