On Fri, Mar 03, 2017 at 10:18:51AM -0500, Johannes Weiner wrote:
> On Fri, Mar 03, 2017 at 11:52:37AM +0900, Minchan Kim wrote:
> > On Tue, Feb 28, 2017 at 04:32:38PM -0800, a...@linux-foundation.org wrote:
> > > 
> > > The patch titled
> > >      Subject: mm: reclaim MADV_FREE pages
> > > has been added to the -mm tree.  Its filename is
> > >      mm-reclaim-madv_free-pages.patch
> > > 
> > > This patch should soon appear at
> > >     http://ozlabs.org/~akpm/mmots/broken-out/mm-reclaim-madv_free-pages.patch
> > > and later at
> > >     http://ozlabs.org/~akpm/mmotm/broken-out/mm-reclaim-madv_free-pages.patch
> > > 
> > > Before you just go and hit "reply", please:
> > >    a) Consider who else should be cc'ed
> > >    b) Prefer to cc a suitable mailing list as well
> > >    c) Ideally: find the original patch on the mailing list and do a
> > >       reply-to-all to that, adding suitable additional cc's
> > > 
> > > *** Remember to use Documentation/SubmitChecklist when testing your code ***
> > > 
> > > The -mm tree is included into linux-next and is updated
> > > there every 3-4 working days
> > > 
> > > ------------------------------------------------------
> > > From: Shaohua Li <s...@fb.com>
> > > Subject: mm: reclaim MADV_FREE pages
> > > 
> > > When memory pressure is high, we free MADV_FREE pages.  If the pages are
> > > not dirty in the pte, they can be freed immediately.  Otherwise we
> > > can't reclaim them.  We put the pages back on the anonymous LRU list (by
> > > setting the SwapBacked flag) and the pages will be reclaimed in the
> > > normal swapout way.
> > > 
> > > We use the normal page reclaim policy.  Since MADV_FREE pages are put on
> > > the inactive file list, such pages and inactive file pages are reclaimed
> > > according to their age.  This is expected, because we don't want to
> > > reclaim too many MADV_FREE pages before used-once pages.
> > > 
> > > Based on Minchan's original patch
> > > 
> > > Link: http://lkml.kernel.org/r/14b8eb1d3f6bf6cc492833f183ac8c304e560484.1487965799.git.s...@fb.com
> > > Signed-off-by: Shaohua Li <s...@fb.com>
> > > Acked-by: Minchan Kim <minc...@kernel.org>
> > > Acked-by: Michal Hocko <mho...@suse.com>
> > > Acked-by: Johannes Weiner <han...@cmpxchg.org>
> > > Acked-by: Hillf Danton <hillf...@alibaba-inc.com>
> > > Cc: Hugh Dickins <hu...@google.com>
> > > Cc: Rik van Riel <r...@redhat.com>
> > > Cc: Mel Gorman <mgor...@techsingularity.net>
> > > Signed-off-by: Andrew Morton <a...@linux-foundation.org>
> > > ---
> > 
> > < snip >
> > 
> > > @@ -1419,11 +1413,21 @@ static int try_to_unmap_one(struct page
> > >                   VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> > >                           page);
> > >  
> > > -                 if (!PageDirty(page)) {
> > > +                 /*
> > > +                  * swapin page could be clean, it has data stored in
> > > +                  * swap. We can't silently discard it without setting
> > > +                  * swap entry in the page table.
> > > +                  */
> > > +                 if (!PageDirty(page) && !PageSwapCache(page)) {
> > >                           /* It's a freeable page by MADV_FREE */
> > >                           dec_mm_counter(mm, MM_ANONPAGES);
> > > -                         rp->lazyfreed++;
> > >                           goto discard;
> > > +                 } else if (!PageSwapBacked(page)) {
> > > +                         /* dirty MADV_FREE page */
> > > +                         set_pte_at(mm, address, pvmw.pte, pteval);
> > > +                         ret = SWAP_DIRTY;
> > > +                         page_vma_mapped_walk_done(&pvmw);
> > > +                         break;
> > >                   }
> > 
There is no point in complicating this logic with the clean swapin-page case.
> > 
> > Andrew,
> > Could you fold below patch into the mm-reclaim-madv_free-pages.patch
> > if others are not against?
> > 
> > Thanks.
> > 
> > From 0c28f6560fbc4e65da4f4a8cc4664ab9f7b11cf3 Mon Sep 17 00:00:00 2001
> > From: Minchan Kim <minc...@kernel.org>
> > Date: Fri, 3 Mar 2017 11:42:52 +0900
> > Subject: [PATCH] mm: clean up lazyfree page handling
> > 
> > We can make this simpler to understand without needing to be aware of the
> > clean-swapin page case.
> > This patch just cleans up the lazyfree page handling in try_to_unmap_one.
> > 
> > Signed-off-by: Minchan Kim <minc...@kernel.org>
> 
> Agreed, this is a little easier to follow.
> 
> Acked-by: Johannes Weiner <han...@cmpxchg.org>

Thanks, Johannes.

> 
> > ---
> >  mm/rmap.c | 22 +++++++++++-----------
> >  1 file changed, 11 insertions(+), 11 deletions(-)
> > 
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index bb45712..f7eab40 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1413,17 +1413,17 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >                     VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
> >                             page);
> 
> Since you're removing the PageSwapCache() check and we're now assuming
> that !swapbacked is not in the swapcache, can you modify this to check
> PageSwapBacked(page) != PageSwapCache(page)?
> 
> Better yet, change it into a warning and SWAP_FAIL.

Maybe what you wanted is

 !!PageSwapBacked(page) != !!PageSwapCache(page)

Personally, I prefer the && style over an equality expression
in this case.

How about this?
If others are not against, I will resend it to Andrew with
the Acked/Reviewed-by tags I have received so far.

Thanks.

commit 118cfee42600
Author: Minchan Kim <minc...@kernel.org>
Date:   Sat Mar 4 01:01:38 2017 +0000

    mm: clean up lazyfree page handling
    
    We can make this simpler to understand without needing to be aware of the
    clean-swapin page case.
    This patch just cleans up the lazyfree page handling in try_to_unmap_one.
    
    Link: http://lkml.kernel.org/r/20170303025237.GB3503@bbox
    Signed-off-by: Minchan Kim <minc...@kernel.org>
    Cc: Shaohua Li <s...@fb.com>
    Cc: Michal Hocko <mho...@suse.com>
    Cc: Johannes Weiner <han...@cmpxchg.org>
    Cc: Hillf Danton <hillf...@alibaba-inc.com>
    Cc: Hugh Dickins <hu...@google.com>
    Cc: Rik van Riel <r...@redhat.com>
    Cc: Mel Gorman <mgor...@techsingularity.net>
    Signed-off-by: Andrew Morton <a...@linux-foundation.org>

diff --git a/mm/rmap.c b/mm/rmap.c
index 3d86036d96ec..1377f7b0361e 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1413,20 +1413,24 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
                         * Store the swap location in the pte.
                         * See handle_pte_fault() ...
                         */
-                       VM_BUG_ON_PAGE(!PageSwapCache(page) && PageSwapBacked(page),
-                               page);
+                       if (VM_WARN_ON_ONCE(PageSwapBacked(page) &&
+                                               !PageSwapCache(page))) {
+                               ret = SWAP_FAIL;
+                               page_vma_mapped_walk_done(&pvmw);
+                               break;
+                       }
 
-                       /*
-                        * swapin page could be clean, it has data stored in
-                        * swap. We can't silently discard it without setting
-                        * swap entry in the page table.
-                        */
-                       if (!PageDirty(page) && !PageSwapCache(page)) {
-                               /* It's a freeable page by MADV_FREE */
-                               dec_mm_counter(mm, MM_ANONPAGES);
-                               goto discard;
-                       } else if (!PageSwapBacked(page)) {
-                               /* dirty MADV_FREE page */
+                       /* MADV_FREE page check */
+                       if (!PageSwapBacked(page)) {
+                               if (!PageDirty(page)) {
+                                       dec_mm_counter(mm, MM_ANONPAGES);
+                                       goto discard;
+                               }
+
+                               /*
+                                * If the page was redirtied, it cannot be
+                                * discarded. Remap the page to page table.
+                                */
                                set_pte_at(mm, address, pvmw.pte, pteval);
                                ret = SWAP_DIRTY;
                                page_vma_mapped_walk_done(&pvmw);
