On January 26, 2017 2:26 AM Kirill A. Shutemov wrote: 
> 
> For consistency, it is worth converting all page_check_address() callers
> to page_vma_mapped_walk(), so we can drop the former.
> 
> Signed-off-by: Kirill A. Shutemov <kirill.shute...@linux.intel.com>
> ---
Acked-by: Hillf Danton <hillf...@alibaba-inc.com>

>  mm/page_vma_mapped.c | 30 ++++++++++++++++++++++++++++++
>  mm/rmap.c            | 26 --------------------------
>  2 files changed, 30 insertions(+), 26 deletions(-)
> 
> diff --git a/mm/page_vma_mapped.c b/mm/page_vma_mapped.c
> index 63168b4baf19..13929f2418b0 100644
> --- a/mm/page_vma_mapped.c
> +++ b/mm/page_vma_mapped.c
> @@ -179,3 +179,33 @@ next_pte:        do {
>               }
>       }
>  }
> +
> +/**
> + * page_mapped_in_vma - check whether a page is really mapped in a VMA
> + * @page: the page to test
> + * @vma: the VMA to test
> + *
> + * Returns 1 if the page is mapped into the page tables of the VMA, 0
> + * if the page is not mapped into the page tables of this VMA.  Only
> + * valid for normal file or anonymous VMAs.
> + */
> +int page_mapped_in_vma(struct page *page, struct vm_area_struct *vma)
> +{
> +     struct page_vma_mapped_walk pvmw = {
> +             .page = page,
> +             .vma = vma,
> +             .flags = PVMW_SYNC,
> +     };
> +     unsigned long start, end;
> +
> +     start = __vma_address(page, vma);
> +     end = start + PAGE_SIZE * (hpage_nr_pages(page) - 1);
> +
> +     if (unlikely(end < vma->vm_start || start >= vma->vm_end))
> +             return 0;
> +     pvmw.address = max(start, vma->vm_start);

Nit: please see the comment on the 05/12 patch in this series.

> +     if (!page_vma_mapped_walk(&pvmw))
> +             return 0;
> +     page_vma_mapped_walk_done(&pvmw);
> +     return 1;
> +}
