> + * Turns out AMD IOMMU has a page table bug where it won't map large pages
> + * to a region that previously mapped smaller pages.  This should be fixed
> + * soon, so this is just a temporary workaround to break mappings down into
> + * PAGE_SIZE.  Better to map smaller pages than nothing.
> + */
> +static int map_try_harder(struct vfio_iommu *iommu, dma_addr_t iova,
> +                       unsigned long pfn, long npage, int prot)
> +{
> +     long i;
> +     int ret = 0;
> +
> +     for (i = 0; i < npage; i++, pfn++, iova += PAGE_SIZE) {
> +             ret = iommu_map(iommu->domain, iova,
> +                             (phys_addr_t)pfn << PAGE_SHIFT,
> +                             PAGE_SIZE, prot);
> +             if (ret)
> +                     break;
> +     }
> +
> +     for (; i < npage && i > 0; i--) {
> +             iova -= PAGE_SIZE;
> +             iommu_unmap(iommu->domain, iova, PAGE_SIZE);
> +     }
> +
>       return ret;
>  }

This looks like it belongs in a vfio-quirks file (or something similar) that
deals with the quirks of various IOMMUs.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu