On Tue, 2015-03-10 at 01:06 +1100, Alexey Kardashevskiy wrote:
> This checks that the TCE table page size is not bigger than the size of
> the page we have just pinned and whose physical address we are about to
> put into the table.
> 
> Otherwise the hardware gets unwanted access to the physical memory between
> the end of the actual page and the end of the aligned-up TCE page.
> 
> Since compound_order() and compound_head() work correctly on non-huge
> pages, there is no need for an additional check of whether the page is huge.
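
For readers less familiar with the compound-page arithmetic, here is a minimal
stand-alone sketch of the containment test (the 4K/64K numbers and the helper
name are illustrative assumptions, not taken from the patch; in the kernel the
pinned page's shift comes from PAGE_SHIFT + compound_order(compound_head(page))):

#include <stdbool.h>
#include <stdio.h>

/* A pinned page "contains" a TCE page when it covers at least as many
 * bytes as one IOMMU page, i.e. its shift is >= the table's shift. */
static bool page_is_contained(unsigned pinned_page_shift, unsigned it_page_shift)
{
	return pinned_page_shift >= it_page_shift;
}

int main(void)
{
	/* Assume 4K kernel pages (shift 12) and a 64K TCE table (shift 16). */
	printf("order-0 page: %s\n",
	       page_is_contained(12 + 0, 16) ? "mapped" : "rejected (-EPERM)");
	printf("order-4 compound page: %s\n",
	       page_is_contained(12 + 4, 16) ? "mapped" : "rejected (-EPERM)");
	return 0;
}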
> 
> Signed-off-by: Alexey Kardashevskiy <a...@ozlabs.ru>
> ---
> Changes:
> v4:
> * s/tce_check_page_size/tce_page_is_contained/
> ---
>  drivers/vfio/vfio_iommu_spapr_tce.c | 22 ++++++++++++++++++++++
>  1 file changed, 22 insertions(+)
> 
> diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
> index 756831f..91e7599 100644
> --- a/drivers/vfio/vfio_iommu_spapr_tce.c
> +++ b/drivers/vfio/vfio_iommu_spapr_tce.c
> @@ -49,6 +49,22 @@ struct tce_container {
>       bool enabled;
>  };
>  
> +static bool tce_page_is_contained(struct page *page, unsigned page_shift)
> +{
> +     unsigned shift;
> +
> +     /*
> +      * Check that the TCE table granularity is not bigger than the size of
> +      * a page we just found. Otherwise the hardware can get access to
> +      * a bigger memory chunk than it should.
> +      */
> +     shift = PAGE_SHIFT + compound_order(compound_head(page));
> +     if (shift >= page_shift)
> +             return true;
> +
> +     return false;

nit, simplified:

return (PAGE_SHIFT + compound_order(compound_head(page)) >= page_shift);

> +}
> +
>  static int tce_iommu_enable(struct tce_container *container)
>  {
>       int ret = 0;
> @@ -197,6 +213,12 @@ static long tce_iommu_build(struct tce_container *container,
>                       ret = -EFAULT;
>                       break;
>               }
> +
> +             if (!tce_page_is_contained(page, tbl->it_page_shift)) {
> +                     ret = -EPERM;
> +                     break;
> +             }
> +
>               hva = (unsigned long) page_address(page) +
>                       (tce & IOMMU_PAGE_MASK(tbl) & ~PAGE_MASK);
>  
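
For completeness, a rough user-space view of what the new -EPERM means: the
check sits on the VFIO_IOMMU_MAP_DMA path, so a DMA map request whose backing
pages are smaller than the TCE table's IOMMU page size is now refused instead
of silently exposing extra memory. A hedged sketch (the container setup and
the buffer are assumed to exist already; only the ioctl and struct come from
the standard VFIO uAPI):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Assumes container_fd is an already configured sPAPR TCE VFIO container
 * and buf points to size bytes of mappable memory. */
static int map_buffer(int container_fd, void *buf, unsigned long long size,
		      unsigned long long iova)
{
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (unsigned long) buf,
		.iova  = iova,
		.size  = size,
	};

	if (ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &map)) {
		/* With this patch, EPERM can mean the backing pages are
		 * smaller than the table's IOMMU page size. */
		fprintf(stderr, "VFIO_IOMMU_MAP_DMA: %s\n", strerror(errno));
		return -errno;
	}
	return 0;
}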


