Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-07 Thread Minchan Kim
On Tue, Mar 07, 2017 at 06:17:47PM +0300, Kirill A. Shutemov wrote:
> On Thu, Mar 02, 2017 at 03:39:19PM +0900, Minchan Kim wrote:
> > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> > the page has the VM_LOCKED flag. In that case, the VM sets
> > PG_mlocked on the page, unless the page is a pte-mapped THP,
> > which cannot be mlocked.
> > 
> > With that, __munlock_isolated_page can use PageMlocked to check
> > whether try_to_munlock succeeded, without relying on
> > try_to_munlock's return value. It helps keep ttu/ttuo
> > (try_to_unmap/try_to_unmap_one) simple in upcoming patches.
> 
> I *think* you're correct, but it took time to wrap my head around it.
> We basically rely on try_to_munlock() never being called for a
> PTE-mapped THP. And we don't call it for those at the moment.
> 
> It's worth adding something like
> 
>   VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);
> 
> into try_to_munlock().

Agree.

> 
> Otherwise looks good to me.
> 
> Feel free to add my Acked-by once this nit is addressed.

Thanks for reviewing this part, Kirill!

> 
> -- 
>  Kirill A. Shutemov
> 


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-07 Thread Kirill A. Shutemov
On Thu, Mar 02, 2017 at 03:39:19PM +0900, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> the page has the VM_LOCKED flag. In that case, the VM sets
> PG_mlocked on the page, unless the page is a pte-mapped THP,
> which cannot be mlocked.
> 
> With that, __munlock_isolated_page can use PageMlocked to check
> whether try_to_munlock succeeded, without relying on
> try_to_munlock's return value. It helps keep ttu/ttuo simple in
> upcoming patches.

I *think* you're correct, but it took time to wrap my head around it.
We basically rely on try_to_munlock() never being called for a
PTE-mapped THP. And we don't call it for those at the moment.

It's worth adding something like

VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);

into try_to_munlock().

Otherwise looks good to me.

Feel free to add my Acked-by once this nit is addressed.

-- 
 Kirill A. Shutemov
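
For context, a minimal sketch of try_to_munlock() with the suggested
assertion folded in. It follows the shape the function has after this
patch (the rwc fields are those of mm/rmap.c in this era); it illustrates
the suggestion and is not the final merged code:

void try_to_munlock(struct page *page)
{
	struct rmap_walk_control rwc = {
		.rmap_one = try_to_unmap_one,
		.arg = (void *)TTU_MUNLOCK,
		.done = page_not_mapped,
		.anon_lock = page_lock_anon_vma_read,
	};

	/* Caller must hold the page lock; the page must be off the LRU. */
	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
	/* Precondition: PG_mlocked was cleared before we were called. */
	VM_BUG_ON_PAGE(PageMlocked(page), page);
	/* Suggested addition: this path must never see a PTE-mapped THP. */
	VM_BUG_ON_PAGE(PageCompound(page) && PageDoubleMap(page), page);

	rmap_walk(page, &rwc);
}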


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-07 Thread Anshuman Khandual
On 03/07/2017 12:20 PM, Minchan Kim wrote:
> Hi Anshuman,
> 
> On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote:
>> On 03/06/2017 07:39 AM, Minchan Kim wrote:
>>> On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
 On 03/02/2017 12:09 PM, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> the page has the VM_LOCKED flag. In that case, the VM sets
> PG_mlocked on the page, unless the page is a pte-mapped THP,
> which cannot be mlocked.
 Right.

> With that, __munlock_isolated_page can use PageMlocked to check
> whether try_to_munlock succeeded, without relying on
> try_to_munlock's return value. It helps keep ttu/ttuo simple in
> upcoming patches.
 Right.

> Cc: Vlastimil Babka 
> Cc: Kirill A. Shutemov 
> Signed-off-by: Minchan Kim 
> ---
>  include/linux/rmap.h |  2 +-
>  mm/mlock.c   |  6 ++
>  mm/rmap.c| 16 
>  3 files changed, 7 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b556eef..1b0cd4c 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
>   * called in munlock()/munmap() path to check for other vmas holding
>   * the page mlocked.
>   */
> -int try_to_munlock(struct page *);
> +void try_to_munlock(struct page *);
>  
>  void remove_migration_ptes(struct page *old, struct page *new, bool locked);
>  
> diff --git a/mm/mlock.c b/mm/mlock.c
> index cdbed8a..d34a540 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>   */
>  static void __munlock_isolated_page(struct page *page)
>  {
> - int ret = SWAP_AGAIN;
> -
>   /*
>* Optimization: if the page was mapped just once, that's our mapping
>* and we don't need to check all the other vmas.
>*/
>   if (page_mapcount(page) > 1)
> - ret = try_to_munlock(page);
> + try_to_munlock(page);
>  
>   /* Did try_to_unlock() succeed or punt? */
> - if (ret != SWAP_MLOCK)
> + if (!PageMlocked(page))
 Checks if the page is still mlocked or not.

>   count_vm_event(UNEVICTABLE_PGMUNLOCKED);
>  
>   putback_lru_page(page);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0a48958..61ae694 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
>   * Called from munlock code.  Checks all of the VMAs mapping the page
>   * to make sure nobody else has this page mlocked. The page will be
>   * returned with PG_mlocked cleared if no other vmas have it mlocked.
> - *
> - * Return values are:
> - *
> - * SWAP_AGAIN - no vma is holding page mlocked, or,
> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
> - * SWAP_FAIL  - page cannot be located at present
> - * SWAP_MLOCK - page is now mlocked.
>   */
> -int try_to_munlock(struct page *page)
> -{
> - int ret;
>  
> +void try_to_munlock(struct page *page)
> +{
>   struct rmap_walk_control rwc = {
>   .rmap_one = try_to_unmap_one,
>   .arg = (void *)TTU_MUNLOCK,
> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
>   };
>  
>   VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
> + VM_BUG_ON_PAGE(PageMlocked(page), page);
 We are calling this on the page to see if it is mlocked from any of
 its mapping VMAs. Then it is a possibility that the page is mlocked,
 the above condition is true, and we print a VM BUG report there. The
 point is, if that is a valid possibility, why have we added the above
 check?
>>> If I read the code properly, the paths where __munlock_isolated_page
>>> calls try_to_munlock always pass a TestClearPageMlocked page to it.
>> Right.
>>
>>> (e.g., munlock_vma_page and __munlock_pagevec), so I thought
>>> try_to_munlock should be called with a non-PG_mlocked page, and
>>> try_to_unmap_one marks the page PG_mlocked again once it finds a
>>> VM_LOCKED VMA for it. IOW, a non-PG_mlocked page is a precondition
>>> for try_to_munlock.
>> Okay, I had missed that part. Nonetheless, isn't this a separate
>> issue that should be part of a different patch, not inside these
>> cleanups?
> If that precondition is not true, this patch changes the behavior
> slightly.
> 
> The UNEVICTABLE_PGMUNLOCKED count would mismatch compared to the old code.
> 
> I wanted to catch that. If you still think it's a separate issue,
> I will split it out. Please tell me. However, I still think it's no
> problem to merge it into this cleanup patch.

Got it, it's okay. Let this change be part of this patch itself.


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-07 Thread Minchan Kim
Hi Anshuman,

On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote:
> On 03/06/2017 07:39 AM, Minchan Kim wrote:
> > On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
> >> On 03/02/2017 12:09 PM, Minchan Kim wrote:
> >>> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> >>> the page has the VM_LOCKED flag. In that case, the VM sets
> >>> PG_mlocked on the page, unless the page is a pte-mapped THP,
> >>> which cannot be mlocked.
> >>
> >> Right.
> >>
> >>>
> >>> With that, __munlock_isolated_page can use PageMlocked to check
> >>> whether try_to_munlock succeeded, without relying on
> >>> try_to_munlock's return value. It helps keep ttu/ttuo simple in
> >>> upcoming patches.
> >>
> >> Right.
> >>
> >>>
> >>> Cc: Vlastimil Babka 
> >>> Cc: Kirill A. Shutemov 
> >>> Signed-off-by: Minchan Kim 
> >>> ---
> >>>  include/linux/rmap.h |  2 +-
> >>>  mm/mlock.c   |  6 ++
> >>>  mm/rmap.c| 16 
> >>>  3 files changed, 7 insertions(+), 17 deletions(-)
> >>>
> >>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> >>> index b556eef..1b0cd4c 100644
> >>> --- a/include/linux/rmap.h
> >>> +++ b/include/linux/rmap.h
> >>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
> >>>   * called in munlock()/munmap() path to check for other vmas holding
> >>>   * the page mlocked.
> >>>   */
> >>> -int try_to_munlock(struct page *);
> >>> +void try_to_munlock(struct page *);
> >>>  
> >>>  void remove_migration_ptes(struct page *old, struct page *new, bool locked);
> >>>  
> >>> diff --git a/mm/mlock.c b/mm/mlock.c
> >>> index cdbed8a..d34a540 100644
> >>> --- a/mm/mlock.c
> >>> +++ b/mm/mlock.c
> >>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
> >>>   */
> >>>  static void __munlock_isolated_page(struct page *page)
> >>>  {
> >>> - int ret = SWAP_AGAIN;
> >>> -
> >>>   /*
> >>>* Optimization: if the page was mapped just once, that's our mapping
> >>>* and we don't need to check all the other vmas.
> >>>*/
> >>>   if (page_mapcount(page) > 1)
> >>> - ret = try_to_munlock(page);
> >>> + try_to_munlock(page);
> >>>  
> >>>   /* Did try_to_unlock() succeed or punt? */
> >>> - if (ret != SWAP_MLOCK)
> >>> + if (!PageMlocked(page))
> >>
> >> Checks if the page is still mlocked or not.
> >>
> >>>   count_vm_event(UNEVICTABLE_PGMUNLOCKED);
> >>>  
> >>>   putback_lru_page(page);
> >>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>> index 0a48958..61ae694 100644
> >>> --- a/mm/rmap.c
> >>> +++ b/mm/rmap.c
> >>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
> >>>   * Called from munlock code.  Checks all of the VMAs mapping the page
> >>>   * to make sure nobody else has this page mlocked. The page will be
> >>>   * returned with PG_mlocked cleared if no other vmas have it mlocked.
> >>> - *
> >>> - * Return values are:
> >>> - *
> >>> - * SWAP_AGAIN - no vma is holding page mlocked, or,
> >>> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
> >>> - * SWAP_FAIL  - page cannot be located at present
> >>> - * SWAP_MLOCK - page is now mlocked.
> >>>   */
> >>> -int try_to_munlock(struct page *page)
> >>> -{
> >>> - int ret;
> >>>  
> >>> +void try_to_munlock(struct page *page)
> >>> +{
> >>>   struct rmap_walk_control rwc = {
> >>>   .rmap_one = try_to_unmap_one,
> >>>   .arg = (void *)TTU_MUNLOCK,
> >>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
> >>>   };
> >>>  
> >>>   VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
> >>> + VM_BUG_ON_PAGE(PageMlocked(page), page);
> >>
> >> We are calling this on the page to see if it is mlocked from any of
> >> its mapping VMAs. Then it is a possibility that the page is mlocked,
> >> the above condition is true, and we print a VM BUG report there. The
> >> point is, if that is a valid possibility, why have we added the above
> >> check?
> > 
> > If I read the code properly, the paths where __munlock_isolated_page
> > calls try_to_munlock always pass a TestClearPageMlocked page to it.
> 
> Right.
> 
> > (e.g., munlock_vma_page and __munlock_pagevec), so I thought
> > try_to_munlock should be called with a non-PG_mlocked page, and
> > try_to_unmap_one marks the page PG_mlocked again once it finds a
> > VM_LOCKED VMA for it. IOW, a non-PG_mlocked page is a precondition
> > for try_to_munlock.
> 
> Okay, I had missed that part. Nonetheless, isn't this a separate
> issue that should be part of a different patch, not inside these
> cleanups?

If that precondition is not true, this patch changes the behavior
slightly.

The UNEVICTABLE_PGMUNLOCKED count would mismatch compared to the old code.

I wanted to catch that. If you still think it's a separate issue,
I will split it out. Please tell me. However, I still think it's no
problem to merge it into this cleanup patch.

Thanks.
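
To make the accounting point concrete, here is a sketch of where the two
versions of __munlock_isolated_page() diverge in the hypothetical case the
precondition guards against (the page enters with PG_mlocked still set and
no VM_LOCKED vma maps it):

	/* Old: try_to_munlock() returns SWAP_AGAIN when no VM_LOCKED vma
	 * is found, so the event is counted even though the page is
	 * still PG_mlocked. */
	ret = try_to_munlock(page);
	if (ret != SWAP_MLOCK)
		count_vm_event(UNEVICTABLE_PGMUNLOCKED);

	/* New: nothing clears PG_mlocked in that case, so the check fails
	 * and the event is not counted. The new VM_BUG_ON_PAGE documents
	 * that this case must never be reached. */
	try_to_munlock(page);
	if (!PageMlocked(page))
		count_vm_event(UNEVICTABLE_PGMUNLOCKED);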


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-07 Thread Minchan Kim
Hi Anshuman,

On Mon, Mar 06, 2017 at 03:10:17PM +0530, Anshuman Khandual wrote:
> On 03/06/2017 07:39 AM, Minchan Kim wrote:
> > On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
> >> On 03/02/2017 12:09 PM, Minchan Kim wrote:
> >>> try_to_munlock returns SWAP_MLOCK if the one of VMAs mapped
> >>> the page has VM_LOCKED flag. In that time, VM set PG_mlocked to
> >>> the page if the page is not pte-mapped THP which cannot be
> >>> mlocked, either.
> >>
> >> Right.
> >>
> >>>
> >>> With that, __munlock_isolated_page can use PageMlocked to check
> >>> whether try_to_munlock is successful or not without relying on
> >>> try_to_munlock's retval. It helps to make ttu/ttuo simple with
> >>> upcoming patches.
> >>
> >> Right.
> >>
> >>>
> >>> Cc: Vlastimil Babka 
> >>> Cc: Kirill A. Shutemov 
> >>> Signed-off-by: Minchan Kim 
> >>> ---
> >>>  include/linux/rmap.h |  2 +-
> >>>  mm/mlock.c   |  6 ++
> >>>  mm/rmap.c| 16 
> >>>  3 files changed, 7 insertions(+), 17 deletions(-)
> >>>
> >>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> >>> index b556eef..1b0cd4c 100644
> >>> --- a/include/linux/rmap.h
> >>> +++ b/include/linux/rmap.h
> >>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
> >>>   * called in munlock()/munmap() path to check for other vmas holding
> >>>   * the page mlocked.
> >>>   */
> >>> -int try_to_munlock(struct page *);
> >>> +void try_to_munlock(struct page *);
> >>>  
> >>>  void remove_migration_ptes(struct page *old, struct page *new, bool 
> >>> locked);
> >>>  
> >>> diff --git a/mm/mlock.c b/mm/mlock.c
> >>> index cdbed8a..d34a540 100644
> >>> --- a/mm/mlock.c
> >>> +++ b/mm/mlock.c
> >>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page 
> >>> *page, bool getpage)
> >>>   */
> >>>  static void __munlock_isolated_page(struct page *page)
> >>>  {
> >>> - int ret = SWAP_AGAIN;
> >>> -
> >>>   /*
> >>>* Optimization: if the page was mapped just once, that's our mapping
> >>>* and we don't need to check all the other vmas.
> >>>*/
> >>>   if (page_mapcount(page) > 1)
> >>> - ret = try_to_munlock(page);
> >>> + try_to_munlock(page);
> >>>  
> >>>   /* Did try_to_unlock() succeed or punt? */
> >>> - if (ret != SWAP_MLOCK)
> >>> + if (!PageMlocked(page))
> >>
> >> Checks if the page is still mlocked or not.
> >>
> >>>   count_vm_event(UNEVICTABLE_PGMUNLOCKED);
> >>>  
> >>>   putback_lru_page(page);
> >>> diff --git a/mm/rmap.c b/mm/rmap.c
> >>> index 0a48958..61ae694 100644
> >>> --- a/mm/rmap.c
> >>> +++ b/mm/rmap.c
> >>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
> >>>   * Called from munlock code.  Checks all of the VMAs mapping the page
> >>>   * to make sure nobody else has this page mlocked. The page will be
> >>>   * returned with PG_mlocked cleared if no other vmas have it mlocked.
> >>> - *
> >>> - * Return values are:
> >>> - *
> >>> - * SWAP_AGAIN- no vma is holding page mlocked, or,
> >>> - * SWAP_AGAIN- page mapped in mlocked vma -- couldn't acquire mmap 
> >>> sem
> >>> - * SWAP_FAIL - page cannot be located at present
> >>> - * SWAP_MLOCK- page is now mlocked.
> >>>   */
> >>> -int try_to_munlock(struct page *page)
> >>> -{
> >>> - int ret;
> >>>  
> >>> +void try_to_munlock(struct page *page)
> >>> +{
> >>>   struct rmap_walk_control rwc = {
> >>>   .rmap_one = try_to_unmap_one,
> >>>   .arg = (void *)TTU_MUNLOCK,
> >>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
> >>>   };
> >>>  
> >>>   VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
> >>> + VM_BUG_ON_PAGE(PageMlocked(page), page);
> >>
> >> We are calling on the page to see if its mlocked from any of it's
> >> mapping VMAs. Then it is a possibility that the page is mlocked
> >> and the above condition is true and we print VM BUG report there.
> >> The point is if its a valid possibility why we have added the
> >> above check ?
> > 
> > If I read code properly,  __munlock_isolated_page calls try_to_munlock
> > always pass the TestClearPageMlocked page to try_to_munlock.
> 
> Right.
> 
> > (e.g., munlock_vma_page and __munlock_pagevec) so I thought
> > try_to_munlock should be called non-PG_mlocked page and try_to_unmap_one
> > returns PG_mlocked page once it found a VM_LOCKED VMA for a page.
> > IOW, non-PG_mlocked page is precondition for try_to_munlock.
> 
> Okay, I have missed that part. Nonetheless this is a separate issue,
> should be part of a different patch ? Not inside these cleanups.

If that precondition is not true, this patch changes the behavior
slightly.

UNEVICTABLE_PGMUNLOCKED count mistmatch compared to old.

I wanted to catch it up. If you still think it's separate issue,
I will do. Please tell me. However, I still think it's no problem
to merge it in this clean up patch.

Thanks.


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-06 Thread Anshuman Khandual
On 03/06/2017 07:39 AM, Minchan Kim wrote:
> On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
>> On 03/02/2017 12:09 PM, Minchan Kim wrote:
>>> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
>>> the page has the VM_LOCKED flag. In that case, the VM sets
>>> PG_mlocked on the page, unless the page is a pte-mapped THP,
>>> which cannot be mlocked.
>>
>> Right.
>>
>>>
>>> With that, __munlock_isolated_page can use PageMlocked to check
>>> whether try_to_munlock succeeded, without relying on
>>> try_to_munlock's return value. It helps keep ttu/ttuo simple in
>>> upcoming patches.
>>
>> Right.
>>
>>>
>>> Cc: Vlastimil Babka 
>>> Cc: Kirill A. Shutemov 
>>> Signed-off-by: Minchan Kim 
>>> ---
>>>  include/linux/rmap.h |  2 +-
>>>  mm/mlock.c   |  6 ++
>>>  mm/rmap.c| 16 
>>>  3 files changed, 7 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
>>> index b556eef..1b0cd4c 100644
>>> --- a/include/linux/rmap.h
>>> +++ b/include/linux/rmap.h
>>> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
>>>   * called in munlock()/munmap() path to check for other vmas holding
>>>   * the page mlocked.
>>>   */
>>> -int try_to_munlock(struct page *);
>>> +void try_to_munlock(struct page *);
>>>  
>>>  void remove_migration_ptes(struct page *old, struct page *new, bool locked);
>>>  
>>> diff --git a/mm/mlock.c b/mm/mlock.c
>>> index cdbed8a..d34a540 100644
>>> --- a/mm/mlock.c
>>> +++ b/mm/mlock.c
>>> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>>>   */
>>>  static void __munlock_isolated_page(struct page *page)
>>>  {
>>> -   int ret = SWAP_AGAIN;
>>> -
>>> /*
>>>  * Optimization: if the page was mapped just once, that's our mapping
>>>  * and we don't need to check all the other vmas.
>>>  */
>>> if (page_mapcount(page) > 1)
>>> -   ret = try_to_munlock(page);
>>> +   try_to_munlock(page);
>>>  
>>> /* Did try_to_unlock() succeed or punt? */
>>> -   if (ret != SWAP_MLOCK)
>>> +   if (!PageMlocked(page))
>>
>> Checks if the page is still mlocked or not.
>>
>>> count_vm_event(UNEVICTABLE_PGMUNLOCKED);
>>>  
>>> putback_lru_page(page);
>>> diff --git a/mm/rmap.c b/mm/rmap.c
>>> index 0a48958..61ae694 100644
>>> --- a/mm/rmap.c
>>> +++ b/mm/rmap.c
>>> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
>>>   * Called from munlock code.  Checks all of the VMAs mapping the page
>>>   * to make sure nobody else has this page mlocked. The page will be
>>>   * returned with PG_mlocked cleared if no other vmas have it mlocked.
>>> - *
>>> - * Return values are:
>>> - *
>>> - * SWAP_AGAIN  - no vma is holding page mlocked, or,
>>> - * SWAP_AGAIN  - page mapped in mlocked vma -- couldn't acquire mmap sem
>>> - * SWAP_FAIL   - page cannot be located at present
>>> - * SWAP_MLOCK  - page is now mlocked.
>>>   */
>>> -int try_to_munlock(struct page *page)
>>> -{
>>> -   int ret;
>>>  
>>> +void try_to_munlock(struct page *page)
>>> +{
>>> struct rmap_walk_control rwc = {
>>> .rmap_one = try_to_unmap_one,
>>> .arg = (void *)TTU_MUNLOCK,
>>> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
>>> };
>>>  
>>> VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
>>> +   VM_BUG_ON_PAGE(PageMlocked(page), page);
>>
>> We are calling this on the page to see if it is mlocked from any of
>> its mapping VMAs. Then it is a possibility that the page is mlocked,
>> the above condition is true, and we print a VM BUG report there. The
>> point is, if that is a valid possibility, why have we added the above
>> check?
> 
> If I read the code properly, the paths where __munlock_isolated_page
> calls try_to_munlock always pass a TestClearPageMlocked page to it.

Right.

> (e.g., munlock_vma_page and __munlock_pagevec), so I thought
> try_to_munlock should be called with a non-PG_mlocked page, and
> try_to_unmap_one marks the page PG_mlocked again once it finds a
> VM_LOCKED VMA for it. IOW, a non-PG_mlocked page is a precondition
> for try_to_munlock.

Okay, I had missed that part. Nonetheless, isn't this a separate
issue that should be part of a different patch, not inside these
cleanups?



Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-05 Thread Minchan Kim
On Fri, Mar 03, 2017 at 05:13:54PM +0530, Anshuman Khandual wrote:
> On 03/02/2017 12:09 PM, Minchan Kim wrote:
> > try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> > the page has the VM_LOCKED flag. In that case, the VM sets
> > PG_mlocked on the page, unless the page is a pte-mapped THP,
> > which cannot be mlocked.
> 
> Right.
> 
> > 
> > With that, __munlock_isolated_page can use PageMlocked to check
> > whether try_to_munlock succeeded, without relying on
> > try_to_munlock's return value. It helps keep ttu/ttuo simple in
> > upcoming patches.
> 
> Right.
> 
> > 
> > Cc: Vlastimil Babka 
> > Cc: Kirill A. Shutemov 
> > Signed-off-by: Minchan Kim 
> > ---
> >  include/linux/rmap.h |  2 +-
> >  mm/mlock.c   |  6 ++
> >  mm/rmap.c| 16 
> >  3 files changed, 7 insertions(+), 17 deletions(-)
> > 
> > diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> > index b556eef..1b0cd4c 100644
> > --- a/include/linux/rmap.h
> > +++ b/include/linux/rmap.h
> > @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
> >   * called in munlock()/munmap() path to check for other vmas holding
> >   * the page mlocked.
> >   */
> > -int try_to_munlock(struct page *);
> > +void try_to_munlock(struct page *);
> >  
> >  void remove_migration_ptes(struct page *old, struct page *new, bool locked);
> >  
> > diff --git a/mm/mlock.c b/mm/mlock.c
> > index cdbed8a..d34a540 100644
> > --- a/mm/mlock.c
> > +++ b/mm/mlock.c
> > @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
> >   */
> >  static void __munlock_isolated_page(struct page *page)
> >  {
> > -   int ret = SWAP_AGAIN;
> > -
> > /*
> >  * Optimization: if the page was mapped just once, that's our mapping
> >  * and we don't need to check all the other vmas.
> >  */
> > if (page_mapcount(page) > 1)
> > -   ret = try_to_munlock(page);
> > +   try_to_munlock(page);
> >  
> > /* Did try_to_unlock() succeed or punt? */
> > -   if (ret != SWAP_MLOCK)
> > +   if (!PageMlocked(page))
> 
> Checks if the page is still mlocked or not.
> 
> > count_vm_event(UNEVICTABLE_PGMUNLOCKED);
> >  
> > putback_lru_page(page);
> > diff --git a/mm/rmap.c b/mm/rmap.c
> > index 0a48958..61ae694 100644
> > --- a/mm/rmap.c
> > +++ b/mm/rmap.c
> > @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
> >   * Called from munlock code.  Checks all of the VMAs mapping the page
> >   * to make sure nobody else has this page mlocked. The page will be
> >   * returned with PG_mlocked cleared if no other vmas have it mlocked.
> > - *
> > - * Return values are:
> > - *
> > - * SWAP_AGAIN  - no vma is holding page mlocked, or,
> > - * SWAP_AGAIN  - page mapped in mlocked vma -- couldn't acquire mmap sem
> > - * SWAP_FAIL   - page cannot be located at present
> > - * SWAP_MLOCK  - page is now mlocked.
> >   */
> > -int try_to_munlock(struct page *page)
> > -{
> > -   int ret;
> >  
> > +void try_to_munlock(struct page *page)
> > +{
> > struct rmap_walk_control rwc = {
> > .rmap_one = try_to_unmap_one,
> > .arg = (void *)TTU_MUNLOCK,
> > @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
> > };
> >  
> > VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
> > +   VM_BUG_ON_PAGE(PageMlocked(page), page);
> 
> We are calling this on the page to see if it is mlocked from any of
> its mapping VMAs. Then it is a possibility that the page is mlocked,
> the above condition is true, and we print a VM BUG report there. The
> point is, if that is a valid possibility, why have we added the above
> check?

If I read the code properly, the paths where __munlock_isolated_page
calls try_to_munlock always pass a TestClearPageMlocked page to it
(e.g., munlock_vma_page and __munlock_pagevec), so I thought
try_to_munlock should be called with a non-PG_mlocked page, and
try_to_unmap_one marks the page PG_mlocked again once it finds a
VM_LOCKED VMA for it. IOW, a non-PG_mlocked page is a precondition
for try_to_munlock.
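
For reference, a condensed sketch of the caller path that establishes this
precondition. This is illustrative code under the reading above, not a
verbatim kernel excerpt (the real munlock_vma_page also handles THPs,
statistics, and the zone lru_lock):

static void munlock_vma_page_sketch(struct page *page)
{
	/* PG_mlocked is test-and-cleared here, before the rmap walk... */
	if (TestClearPageMlocked(page)) {
		if (__munlock_isolate_lru_page(page, true))
			/* ...so try_to_munlock() always starts out with
			 * !PageMlocked(page); try_to_unmap_one() sets
			 * PG_mlocked again only on a VM_LOCKED vma. */
			__munlock_isolated_page(page);
	}
}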


Re: [RFC 05/11] mm: make the try_to_munlock void function

2017-03-03 Thread Anshuman Khandual
On 03/02/2017 12:09 PM, Minchan Kim wrote:
> try_to_munlock returns SWAP_MLOCK if one of the VMAs mapping
> the page has the VM_LOCKED flag. In that case, the VM sets
> PG_mlocked on the page, unless the page is a pte-mapped THP,
> which cannot be mlocked.

Right.

> 
> With that, __munlock_isolated_page can use PageMlocked to check
> whether try_to_munlock succeeded, without relying on
> try_to_munlock's return value. It helps keep ttu/ttuo simple in
> upcoming patches.

Right.

> 
> Cc: Vlastimil Babka 
> Cc: Kirill A. Shutemov 
> Signed-off-by: Minchan Kim 
> ---
>  include/linux/rmap.h |  2 +-
>  mm/mlock.c   |  6 ++
>  mm/rmap.c| 16 
>  3 files changed, 7 insertions(+), 17 deletions(-)
> 
> diff --git a/include/linux/rmap.h b/include/linux/rmap.h
> index b556eef..1b0cd4c 100644
> --- a/include/linux/rmap.h
> +++ b/include/linux/rmap.h
> @@ -235,7 +235,7 @@ int page_mkclean(struct page *);
>   * called in munlock()/munmap() path to check for other vmas holding
>   * the page mlocked.
>   */
> -int try_to_munlock(struct page *);
> +void try_to_munlock(struct page *);
>  
>  void remove_migration_ptes(struct page *old, struct page *new, bool locked);
>  
> diff --git a/mm/mlock.c b/mm/mlock.c
> index cdbed8a..d34a540 100644
> --- a/mm/mlock.c
> +++ b/mm/mlock.c
> @@ -122,17 +122,15 @@ static bool __munlock_isolate_lru_page(struct page *page, bool getpage)
>   */
>  static void __munlock_isolated_page(struct page *page)
>  {
> - int ret = SWAP_AGAIN;
> -
>   /*
>* Optimization: if the page was mapped just once, that's our mapping
>* and we don't need to check all the other vmas.
>*/
>   if (page_mapcount(page) > 1)
> - ret = try_to_munlock(page);
> + try_to_munlock(page);
>  
>   /* Did try_to_unlock() succeed or punt? */
> - if (ret != SWAP_MLOCK)
> + if (!PageMlocked(page))

Checks if the page is still mlocked or not.

>   count_vm_event(UNEVICTABLE_PGMUNLOCKED);
>  
>   putback_lru_page(page);
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 0a48958..61ae694 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1540,18 +1540,10 @@ static int page_not_mapped(struct page *page)
>   * Called from munlock code.  Checks all of the VMAs mapping the page
>   * to make sure nobody else has this page mlocked. The page will be
>   * returned with PG_mlocked cleared if no other vmas have it mlocked.
> - *
> - * Return values are:
> - *
> - * SWAP_AGAIN - no vma is holding page mlocked, or,
> - * SWAP_AGAIN - page mapped in mlocked vma -- couldn't acquire mmap sem
> - * SWAP_FAIL  - page cannot be located at present
> - * SWAP_MLOCK - page is now mlocked.
>   */
> -int try_to_munlock(struct page *page)
> -{
> - int ret;
>  
> +void try_to_munlock(struct page *page)
> +{
>   struct rmap_walk_control rwc = {
>   .rmap_one = try_to_unmap_one,
>   .arg = (void *)TTU_MUNLOCK,
> @@ -1561,9 +1553,9 @@ int try_to_munlock(struct page *page)
>   };
>  
>   VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
> + VM_BUG_ON_PAGE(PageMlocked(page), page);

We are calling this on the page to see if it is mlocked from any of
its mapping VMAs. Then it is a possibility that the page is mlocked,
the above condition is true, and we print a VM BUG report there. The
point is, if that is a valid possibility, why have we added the above
check?


