Re: [PATCH v3 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

2018-12-10 Thread Robin Murphy

On 07/12/2018 20:41, Souptick Joarder wrote:

On Fri, Dec 7, 2018 at 7:17 PM Robin Murphy  wrote:


On 06/12/2018 18:43, Souptick Joarder wrote:

Convert to use vm_insert_range() to map range of kernel
memory to user vma.

Signed-off-by: Souptick Joarder 
Reviewed-by: Matthew Wilcox 
---
   drivers/iommu/dma-iommu.c | 13 +++--
   1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..a2c65e2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -622,17 +622,10 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 
 int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
 {
-	unsigned long uaddr = vma->vm_start;
-	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	int ret = -ENXIO;
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 
-	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[i]);
-		if (ret)
-			break;
-		uaddr += PAGE_SIZE;
-	}
-	return ret;
+	return vm_insert_range(vma, vma->vm_start,
+				pages + vma->vm_pgoff, count);


You also need to adjust count to compensate for the pages skipped by
vm_pgoff, otherwise you've got an out-of-bounds dereference triggered
from userspace, which is pretty high up the "not good" scale (not to
mention the entire call would then propagate -EFAULT back from
vm_insert_page() and thus always appear to fail for nonzero offsets).


So this should be something similar to ->

 return vm_insert_range(vma, vma->vm_start,
 pages + vma->vm_pgoff, count - vma->vm_pgoff);
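
For reference, a minimal sketch of how the corrected function would read, assuming the
vm_insert_range() signature used in the patch above (vma, first user address, page
array, number of pages to insert):

	int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
	{
		unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;

		/*
		 * Skip the first vm_pgoff pages and shrink the count to match,
		 * so the helper never walks past the end of the pages[] array.
		 */
		return vm_insert_range(vma, vma->vm_start,
				       pages + vma->vm_pgoff, count - vma->vm_pgoff);
	}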



Yup, I think that looks appropriate.

Robin.


Re: [PATCH v3 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

2018-12-07 Thread Souptick Joarder
On Fri, Dec 7, 2018 at 7:17 PM Robin Murphy  wrote:
>
> On 06/12/2018 18:43, Souptick Joarder wrote:
> > Convert to use vm_insert_range() to map range of kernel
> > memory to user vma.
> >
> > Signed-off-by: Souptick Joarder 
> > Reviewed-by: Matthew Wilcox 
> > ---
> >   drivers/iommu/dma-iommu.c | 13 +++--
> >   1 file changed, 3 insertions(+), 10 deletions(-)
> >
> > diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> > index d1b0475..a2c65e2 100644
> > --- a/drivers/iommu/dma-iommu.c
> > +++ b/drivers/iommu/dma-iommu.c
> > @@ -622,17 +622,10 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
> >
> >   int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
> >   {
> > - unsigned long uaddr = vma->vm_start;
> > - unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> > - int ret = -ENXIO;
> > + unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
> >
> > - for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
> > - ret = vm_insert_page(vma, uaddr, pages[i]);
> > - if (ret)
> > - break;
> > - uaddr += PAGE_SIZE;
> > - }
> > - return ret;
> > + return vm_insert_range(vma, vma->vm_start,
> > + pages + vma->vm_pgoff, count);
>
> You also need to adjust count to compensate for the pages skipped by
> vm_pgoff, otherwise you've got an out-of-bounds dereference triggered
> from userspace, which is pretty high up the "not good" scale (not to
> mention the entire call would then propagate -EFAULT back from
> vm_insert_page() and thus always appear to fail for nonzero offsets).

So this should be something similar to ->

return vm_insert_range(vma, vma->vm_start,
pages + vma->vm_pgoff, count - vma->vm_pgoff);
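
To make the overrun concrete, a small worked example with hypothetical numbers
(chosen purely for illustration):

	/*
	 * Hypothetical case: an 8-page buffer, so pages[0..7] are valid,
	 * mapped with vma->vm_pgoff = 2.
	 *
	 *   count = PAGE_ALIGN(size) >> PAGE_SHIFT = 8
	 *
	 *   vm_insert_range(vma, vma->vm_start, pages + 2, 8)
	 *       walks pages[2] .. pages[9], two entries past the end
	 *       of the array (the out-of-bounds access noted above).
	 *
	 *   vm_insert_range(vma, vma->vm_start, pages + 2, 8 - 2)
	 *       walks pages[2] .. pages[7] and stays in bounds.
	 */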


Re: [PATCH v3 6/9] iommu/dma-iommu.c: Convert to use vm_insert_range

2018-12-07 Thread Robin Murphy

On 06/12/2018 18:43, Souptick Joarder wrote:

Convert to use vm_insert_range() to map range of kernel
memory to user vma.

Signed-off-by: Souptick Joarder 
Reviewed-by: Matthew Wilcox 
---
  drivers/iommu/dma-iommu.c | 13 +++--
  1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index d1b0475..a2c65e2 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -622,17 +622,10 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
 
 int iommu_dma_mmap(struct page **pages, size_t size, struct vm_area_struct *vma)
 {
-	unsigned long uaddr = vma->vm_start;
-	unsigned int i, count = PAGE_ALIGN(size) >> PAGE_SHIFT;
-	int ret = -ENXIO;
+	unsigned long count = PAGE_ALIGN(size) >> PAGE_SHIFT;
 
-	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
-		ret = vm_insert_page(vma, uaddr, pages[i]);
-		if (ret)
-			break;
-		uaddr += PAGE_SIZE;
-	}
-	return ret;
+	return vm_insert_range(vma, vma->vm_start,
+				pages + vma->vm_pgoff, count);


You also need to adjust count to compensate for the pages skipped by 
vm_pgoff, otherwise you've got an out-of-bounds dereference triggered 
from userspace, which is pretty high up the "not good" scale (not to 
mention the entire call would then propagate -EFAULT back from 
vm_insert_page() and thus always appear to fail for nonzero offsets).
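
For context, the loop removed above already encodes the bound that the one-line
replacement needs to keep; annotated here (comments added for illustration, not
part of the original code):

	/*
	 * The old loop starts at pages[vma->vm_pgoff] rather than pages[0]
	 * and stops once i reaches count, so it inserts at most
	 * (count - vm_pgoff) pages and never indexes past the array;
	 * it also stops at vma->vm_end, so it never maps beyond the VMA.
	 */
	for (i = vma->vm_pgoff; i < count && uaddr < vma->vm_end; i++) {
		ret = vm_insert_page(vma, uaddr, pages[i]);
		if (ret)
			break;
		uaddr += PAGE_SIZE;
	}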


Robin.


  }
  
  static dma_addr_t __iommu_dma_map(struct device *dev, phys_addr_t phys,



___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu