On Thu, Feb 23, 2017 at 11:13:42AM -0500, Johannes Weiner wrote:
> On Wed, Feb 22, 2017 at 10:50:42AM -0800, Shaohua Li wrote:
> > @@ -1424,6 +1424,12 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
> >                             dec_mm_counter(mm, MM_ANONPAGES);
> >                             rp->lazyfreed++;
> >                             goto discard;
> > +                   } else if (!PageSwapBacked(page)) {
> > +                           /* dirty MADV_FREE page */
> > +                           set_pte_at(mm, address, pvmw.pte, pteval);
> > +                           ret = SWAP_DIRTY;
> > +                           page_vma_mapped_walk_done(&pvmw);
> > +                           break;
> >                     }
> >  
> >                     if (swap_duplicate(entry) < 0) {
> > @@ -1525,8 +1531,8 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
> >  
> >     if (ret != SWAP_MLOCK && !page_mapcount(page)) {
> >             ret = SWAP_SUCCESS;
> > -           if (rp.lazyfreed && !PageDirty(page))
> > -                   ret = SWAP_LZFREE;
> > +           if (rp.lazyfreed && PageDirty(page))
> > +                   ret = SWAP_DIRTY;
> 
> Can this actually happen? If the page is dirty, ret should already be
> SWAP_DIRTY, right? How would a dirty page get fully unmapped?
> 
> It seems to me rp.lazyfreed can be removed entirely now that we don't
> have to identify the lazyfree case anymore. The failure case is much
> easier to identify - all it takes is a single pte to be dirty.

Ok, I got mixed up. Yes, this can't happen any more since we changed the
behavior of try_to_unmap_one(). Will delete this in the next post.
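
FWIW, with that change the tail of try_to_unmap() should reduce to
something like this (untested sketch; since try_to_unmap_one() now bails
out with SWAP_DIRTY on the first dirty pte of a lazyfree page, a dirty
page can never reach the fully-unmapped state, so the rp.lazyfreed
bookkeeping can go away entirely):

	/*
	 * All ptes are gone. Dirty lazyfree pages were already caught
	 * in try_to_unmap_one() and returned SWAP_DIRTY, so no
	 * per-walk lazyfreed accounting is needed here any more.
	 */
	if (ret != SWAP_MLOCK && !page_mapcount(page))
		ret = SWAP_SUCCESS;

	return ret;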

Thanks,
Shaohua
