On Wed, Feb 06, 2019 at 03:28:28PM +0000, Robin Murphy wrote:
> Because if iommu_map() only gets called at PAGE_SIZE granularity, then the
> IOMMU PTEs will be created at PAGE_SIZE (or smaller) granularity, so any
> effort to get higher-order allocations matching larger IOMMU block sizes is
> wasted, and we may as well have just done this:
>
> 	for (i = 0; i < count; i++) {
> 		struct page *page = alloc_page(gfp);
> 		...
> 		iommu_map(..., page_to_phys(page), PAGE_SIZE, ...);
> 	}
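For contrast, the higher-order path the dropped patch was aiming at would
look roughly like this -- just a sketch, with error handling elided and
IOMMU_BLOCK_SIZE standing in for whatever block size the IOMMU actually
reports:

	/* hypothetical: pick an order matching the IOMMU block size */
	unsigned int order = get_order(IOMMU_BLOCK_SIZE);
	struct page *page = alloc_pages(gfp, order);
	...
	/* one call covering the whole block, so the IOMMU driver can
	 * install a single block-sized PTE instead of 1 << order
	 * PAGE_SIZE entries */
	iommu_map(..., page_to_phys(page), PAGE_SIZE << order, ...);

which only pays off if the allocation really is mapped in one call of the
larger size, per the point above.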
True. I've dropped this patch.

> Really, it's a shame we have to split huge pages for the CPU remap, since
> in the common case the CPU MMU will have a matching block size, but IIRC
> there was something in vmap() or thereabouts that explicitly chokes on
> them.

That just needs a volunteer to fix the implementation, as there is no
fundamental reason not to remap large pages.
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu