Hello

shrink_page_list() returns the number of pages reclaimed. When a page on the
list is unevictable, instead of handing it back to the caller it trips
VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page).
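
For reference, this is roughly the tail of shrink_page_list() in 4.14, quoted
from memory and trimmed, so exact spacing may differ; any page that reaches the
keep label while still PageUnevictable trips the assertion even though it has
already been put on ret_pages for the caller:

keep:
                list_add(&page->lru, &ret_pages);
                VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);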

We can keep the unevictable pages on the return list in shrink_page_list(),
return the total number of reclaimed pages (including the unevictable ones),
and let the caller handle the unevictable pages.
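
A minimal sketch of that caller-side handling (the helper name is made up,
not something in the tree; putback_lru_page() is used because it already
knows how to route unevictable pages back onto the unevictable list):

/*
 * Hypothetical cleanup on the caller's side: put back whatever
 * shrink_page_list() left on the list, unevictable or not.
 */
static void putback_unreclaimed_pages(struct list_head *page_list)
{
        struct page *page, *next;

        list_for_each_entry_safe(page, next, page_list, lru) {
                list_del(&page->lru);
                putback_lru_page(page);
        }
}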

I think the problem is that shrink_page_list() is awkward here. If a page is
unevictable it goes activate_locked -> keep_locked -> keep; the keep label
list_adds the unevictable page and then throws the VM_BUG instead of passing
it to the caller, while it relies on the caller to put back non-reclaimed,
non-unevictable pages.
I think we can make this consistent, so that shrink_page_list() returns
non-reclaimed pages via page_list and the caller can handle them. As a further
step, it could try to migrate mlocked pages without retrial.
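
As a rough illustration of that last point, the CMA path already migrates
whatever reclaim leaves on the list, so returning unevictable pages instead
of asserting would let them take the same route. Simplified from memory of
__alloc_contig_migrate_range() in 4.14 (alloc_migrate_target and MR_CMA are
the names I recall from that version):

        /* Pages that reclaim_clean_pages_from_list() could not free stay
         * on cc->migratepages and get handed to migrate_pages(), which
         * can move mlocked/unevictable pages out of the target range. */
        nr_reclaimed = reclaim_clean_pages_from_list(cc->zone,
                                                     &cc->migratepages);
        cc->nr_migratepages -= nr_reclaimed;

        ret = migrate_pages(&cc->migratepages, alloc_migrate_target,
                            NULL, 0, cc->mode, MR_CMA);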


Below is the issue seen with cma_alloc() of a large buffer (kernel version
4.14.65, on Android Pie [ARM]):

[   24.718792] page dumped because: VM_BUG_ON_PAGE(PageLRU(page) || 
PageUnevictable(page))
[   24.726949] page->mem_cgroup:bd008c00
[   24.730693] ------------[ cut here ]------------
[   24.735304] kernel BUG at mm/vmscan.c:1350!
[   24.739478] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP ARM


Below is the patch that solves this issue:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index be56e2e..12ac353 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -998,7 +998,7 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                sc->nr_scanned++;
 
                if (unlikely(!page_evictable(page)))
-                       goto activate_locked;
+                       goto cull_mlocked;
 
                if (!sc->may_unmap && page_mapped(page))
                        goto keep_locked;
@@ -1331,7 +1331,12 @@ static unsigned long shrink_page_list(struct list_head *page_list,
                } else
                        list_add(&page->lru, &free_pages);
                continue;
-
+cull_mlocked:
+                if (PageSwapCache(page))
+                        try_to_free_swap(page);
+                unlock_page(page);
+                list_add(&page->lru, &ret_pages);
+                continue;
 activate_locked:
                /* Not a candidate for swapping, so reclaim swap space. */
                if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||




It fixes the issue below:

1. Large buffer allocation with cma_alloc() now succeeds even when the range
contains unevictable pages.

Without the patch, cma_alloc() in the current kernel fails because of those
unevictable pages, as sketched below.
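
For context, the failing scenario is just a large contiguous allocation whose
range overlaps mlocked pages, something like the sketch below (the cma pointer
and the size are placeholders; the four-argument cma_alloc() is the signature
I recall from 4.14):

        struct page *pages;

        /* ~64 MiB from a dedicated CMA area; large enough that the range
         * almost certainly contains some unevictable (mlocked) pages. */
        pages = cma_alloc(cma, (64 << 20) >> PAGE_SHIFT, 0, GFP_KERNEL);
        if (!pages)
                return -ENOMEM;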

Please let me know if I am missing anything.

Regards,
Pankaj
  
From: Vlastimil Babka <vba...@suse.cz>
Sent: 18 March 2019 14:12:50
To: Pankaj Suryawanshi; Kirill Tkhai; Michal Hocko; aneesh.ku...@linux.ibm.com
Cc: linux-kernel@vger.kernel.org; minc...@kernel.org; linux...@kvack.org; 
khand...@linux.vnet.ibm.com; hillf...@alibaba-inc.com
Subject: Re: [External] Re: vmscan: Reclaim unevictable pages
  

On 3/15/19 11:11 AM, Pankaj Suryawanshi wrote:
> 
> [ cc Aneesh kumar, Anshuman, Hillf, Vlastimil]

Can you send a proper patch with changelog explaining the change? I
don't know the context of this thread.

> From: Pankaj Suryawanshi
> Sent: 15 March 2019 11:35:05
> To: Kirill Tkhai; Michal Hocko
> Cc: linux-kernel@vger.kernel.org; minc...@kernel.org; linux...@kvack.org
> Subject: Re: Re: [External] Re: vmscan: Reclaim unevictable pages
> 
> 
> 
> [ cc linux-mm ]
> 
> 
> From: Pankaj Suryawanshi
> Sent: 14 March 2019 19:14:40
> To: Kirill Tkhai; Michal Hocko
> Cc: linux-kernel@vger.kernel.org; minc...@kernel.org
> Subject: Re: Re: [External] Re: vmscan: Reclaim unevictable pages
> 
> 
> 
> Hello ,
> 
> Please ignore the curly braces, they are just for debugging.
> 
> Below is the updated patch.
> 
> 
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index be56e2e..12ac353 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -998,7 +998,7 @@ static unsigned long shrink_page_list(struct list_head 
> *page_list,
>                 sc->nr_scanned++;
> 
>                 if (unlikely(!page_evictable(page)))
> -                       goto activate_locked;
> +                      goto cull_mlocked;
> 
>                 if (!sc->may_unmap && page_mapped(page))
>                         goto keep_locked;
> @@ -1331,7 +1331,12 @@ static unsigned long shrink_page_list(struct list_head 
> *page_list,
>                 } else
>                         list_add(&page->lru, &free_pages);
>                 continue;
> -
> +cull_mlocked:
> +                if (PageSwapCache(page))
> +                        try_to_free_swap(page);
> +                unlock_page(page);
> +                list_add(&page->lru, &ret_pages);
> +                continue;
>  activate_locked:
>                 /* Not a candidate for swapping, so reclaim swap space. */
>                 if (PageSwapCache(page) && (mem_cgroup_swap_full(page) ||
> 
> 
> 
> Regards,
> Pankaj
> 
> 
> From: Kirill Tkhai <ktk...@virtuozzo.com>
> Sent: 14 March 2019 14:55:34
> To: Pankaj Suryawanshi; Michal Hocko
> Cc: linux-kernel@vger.kernel.org; minc...@kernel.org
> Subject: Re: Re: [External] Re: vmscan: Reclaim unevictable pages
> 
> 
> On 14.03.2019 11:52, Pankaj Suryawanshi wrote:
>>
>> I am using kernel version 4.14.65 (on Android pie [ARM]).
>>
>> No additional patches applied on top of vanilla.(Core MM).
>>
>> If  I change in the vmscan.c as below patch, it will work.
> 
> Sorry, but 4.14.65 does not have braces around trylock_page(),
> like in your patch below.
> 
> See      
> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/mm/vmscan.c?h=v4.14.65
> 
> [...]
> 
>>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>>> index be56e2e..2e51edc 100644
>>> --- a/mm/vmscan.c
>>> +++ b/mm/vmscan.c
>>> @@ -990,15 +990,17 @@ static unsigned long shrink_page_list(struct 
>>> list_head *page_list,
>>>                  page = lru_to_page(page_list);
>>>                  list_del(&page->lru);
>>>
>>>                 if (!trylock_page(page)) {
>>>                          goto keep;
>>>                 }
> 
