On Fri, Feb 12, 2021 at 12:01:50PM -0700, Alex Williamson wrote:
> follow_pfn() doesn't make sure that we're using the correct page
> protections, get the pte with follow_pte() so that we can test
> protections and get the pfn from the pte.
> 
> Fixes: 5cbf3264bc71 ("vfio/type1: Fix VA->PA translation for PFNMAP VMAs in vaddr_get_pfn()")
> Signed-off-by: Alex Williamson <alex.william...@redhat.com>
> ---
>  drivers/vfio/vfio_iommu_type1.c |   14 ++++++++++++--
>  1 file changed, 12 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/vfio/vfio_iommu_type1.c b/drivers/vfio/vfio_iommu_type1.c
> index ec9fd95a138b..90715413c3d9 100644
> --- a/drivers/vfio/vfio_iommu_type1.c
> +++ b/drivers/vfio/vfio_iommu_type1.c
> @@ -463,9 +463,11 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
>                           unsigned long vaddr, unsigned long *pfn,
>                           bool write_fault)
>  {
> +     pte_t *ptep;
> +     spinlock_t *ptl;
>       int ret;
>  
> -     ret = follow_pfn(vma, vaddr, pfn);
> +     ret = follow_pte(vma->vm_mm, vaddr, NULL, &ptep, NULL, &ptl);
>       if (ret) {
>               bool unlocked = false;
>  
> @@ -479,9 +481,17 @@ static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
>               if (ret)
>                       return ret;
>  
> -             ret = follow_pfn(vma, vaddr, pfn);
> +             ret = follow_pte(vma->vm_mm, vaddr, NULL, &ptep, NULL, &ptl);

commit 9fd6dad1261a541b3f5fa7dc5b152222306e6702 in linux-next is what
exports follow_pte, and it uses a different signature:

+int follow_pte(struct mm_struct *mm, unsigned long address,
+              pte_t **ptepp, spinlock_t **ptlp)
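
With that four-argument version, the two follow_pte() calls above would
presumably drop the NULL arguments and the pfn would be read out of the
pte while the lock is held, roughly along these lines (just a sketch of
the adaptation against the quoted hunks, not something I've tested):

static int follow_fault_pfn(struct vm_area_struct *vma, struct mm_struct *mm,
			    unsigned long vaddr, unsigned long *pfn,
			    bool write_fault)
{
	pte_t *ptep;
	spinlock_t *ptl;
	int ret;

	ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl);
	if (ret) {
		bool unlocked = false;

		/* fault the page in and retry, as the existing code does */
		ret = fixup_user_fault(mm, vaddr,
				       FAULT_FLAG_REMOTE |
				       (write_fault ?  FAULT_FLAG_WRITE : 0),
				       &unlocked);
		if (unlocked)
			return -EAGAIN;

		if (ret)
			return ret;

		ret = follow_pte(vma->vm_mm, vaddr, &ptep, &ptl);
		if (ret)
			return ret;
	}

	/* pte is mapped and locked here: test protections, then take the pfn */
	if (write_fault && !pte_write(*ptep))
		ret = -EFAULT;
	else
		*pfn = pte_pfn(*ptep);

	pte_unmap_unlock(ptep, ptl);
	return ret;
}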

Recommend you send this patch for rc1 once the right stuff lands in
Linus's tree

Otherwise it looks OK

Reviewed-by: Jason Gunthorpe <j...@nvidia.com>

Jason
