Alex/Joerg,

On 1/24/18 5:04 AM, Alex Williamson wrote:
> Hi Suravee,
>
> On Sun, 21 Jan 2018 23:29:37 -0500
> Suravee Suthikulpanit wrote:
>
> > VFIO IOMMU type1 currently unmaps IOVA pages synchronously, which requires
> > IOTLB flushing for every unmapping. This results in large IOTLB flushing
> > overhead when handling pass-through devices with a large number of mapped
> > IOVAs (e.g. GPUs). This could also cause IOTLB invalidate time-out issue
> > on
> >
> > @@ -648,12 +685,40 @@ static int vfio_iommu_type1_unpin_pages(void *iommu_data,
> >  	return i > npage ? npage : (i > 0 ? i : -EINVAL);
> >  }
> >
> > +static size_t try_unmap_unpin_fast(struct vfio_domain *domain, dma_addr_t iova,
> > +				   size_t len, phys_addr_t phys,
> > +				   struct list_head *unmapped_regions)
> > +{
> > +	struct