On Wed 19-08-20 16:05:49, Matthew Wilcox (Oracle) wrote:
> The comment shows that the reason for using find_get_entries() is now
> stale; find_get_pages() no longer returns 0 when it hits a consecutive
> run of swap entries, and I don't believe it has done so since 2011.
> pagevec_lookup() is a simpler function to use than find_get_pages(), so
> use it instead.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <wi...@infradead.org>

This looks good to me. You can add:

Reviewed-by: Jan Kara <j...@suse.cz>

                                                                Honza
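
A minimal sketch of what the unlocking loop ends up looking like with this
change applied (reconstructed from the hunks quoted below; surrounding
shmem context elided, not a verbatim copy of the resulting function):

	struct pagevec pvec;
	pgoff_t index = 0;

	pagevec_init(&pvec);
	/*
	 * Minor point, but we might as well stop if someone else SHM_LOCKs it.
	 */
	while (!mapping_unevictable(mapping)) {
		/*
		 * pagevec_lookup() fills pvec and advances index past the
		 * last page it returned; it returns 0 only once the mapping
		 * is exhausted, so a run of swap entries no longer ends the
		 * loop early and no separate indices[] array is needed.
		 */
		if (!pagevec_lookup(&pvec, mapping, &index))
			break;
		check_move_unevictable_pages(&pvec);
		pagevec_release(&pvec);
		cond_resched();
	}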

> ---
>  mm/shmem.c | 11 +----------
>  1 file changed, 1 insertion(+), 10 deletions(-)
> 
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 271548ca20f3..a7bbc4ed9677 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -840,7 +840,6 @@ unsigned long shmem_swap_usage(struct vm_area_struct *vma)
>  void shmem_unlock_mapping(struct address_space *mapping)
>  {
>       struct pagevec pvec;
> -     pgoff_t indices[PAGEVEC_SIZE];
>       pgoff_t index = 0;
>  
>       pagevec_init(&pvec);
> @@ -848,16 +847,8 @@ void shmem_unlock_mapping(struct address_space *mapping)
>        * Minor point, but we might as well stop if someone else SHM_LOCKs it.
>        */
>       while (!mapping_unevictable(mapping)) {
> -             /*
> -              * Avoid pagevec_lookup(): find_get_pages() returns 0 as if it
> -              * has finished, if it hits a row of PAGEVEC_SIZE swap entries.
> -              */
> -             pvec.nr = find_get_entries(mapping, index,
> -                                        PAGEVEC_SIZE, pvec.pages, indices);
> -             if (!pvec.nr)
> +             if (!pagevec_lookup(&pvec, mapping, &index))
>                       break;
> -             index = indices[pvec.nr - 1] + 1;
> -             pagevec_remove_exceptionals(&pvec);
>               check_move_unevictable_pages(&pvec);
>               pagevec_release(&pvec);
>               cond_resched();
> -- 
> 2.28.0
> 
-- 
Jan Kara <j...@suse.com>
SUSE Labs, CR
