On 2020-07-13 14:14, Joerg Roedel wrote:
On Wed, Jul 08, 2020 at 12:32:42PM +0100, Robin Murphy wrote:
As for the intel-iommu implementation, restrict the opportunistic
attempt to allocate a SAC address to conventional PCI devices only,
since on modern PCI Express systems it increasingly causes far more
performance problems than it offers benefits.

Signed-off-by: Robin Murphy <[email protected]>
---
  drivers/iommu/dma-iommu.c | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 4959f5df21bd..0ff124f16ad4 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -426,7 +426,8 @@ static dma_addr_t iommu_dma_alloc_iova(struct iommu_domain *domain,
 		dma_limit = min(dma_limit, (u64)domain->geometry.aperture_end);
 
 	/* Try to get PCI devices a SAC address */
-	if (dma_limit > DMA_BIT_MASK(32) && dev_is_pci(dev))
+	if (dma_limit > DMA_BIT_MASK(32) &&
+	    dev_is_pci(dev) && !pci_is_pcie(to_pci_dev(dev)))
 		iova = alloc_iova_fast(iovad, iova_len,
 				       DMA_BIT_MASK(32) >> shift, false);

Unfortunately this patch causes XHCI initialization failures on my AMD
Ryzen system. I will remove both from the IOMMU tree for now.

I guess the XHCI chip in my system does not support full 64-bit DMA
addresses and needs a quirk or something like that. But until this is
resolved it's better not to change the IOVA allocation behavior.
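
If it really is a device limitation, the fix would presumably look
something like the existing XHCI_NO_64BIT_SUPPORT handling, which keeps
such controllers on a 32-bit DMA mask. Rough sketch only -
xhci_limit_dma_mask() is a made-up name for illustration, though the
quirk flag, HCC_64BIT_ADDR() and the dma_set_mask*() calls are the real
kernel interfaces:

	static int xhci_limit_dma_mask(struct xhci_hcd *xhci, struct device *dev)
	{
		/* A quirked controller pretends HCC_64BIT_ADDR is clear... */
		if (xhci->quirks & XHCI_NO_64BIT_SUPPORT)
			xhci->hcc_params &= ~BIT(0);

		/* ...so the generic setup falls back to a 32-bit DMA mask
		 * and the IOVA allocator never hands it addresses above 4GB.
		 */
		if (HCC_64BIT_ADDR(xhci->hcc_params) &&
		    !dma_set_mask(dev, DMA_BIT_MASK(64)))
			return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64));

		return dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
	}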

Oh bother - yes, this could have been masking all manner of bugs. That system will presumably also break if you managed to exhaust the 32-bit IOVA space such that the allocator moved up to the higher range anyway, or if you passed the XHCI through to a VM with a sufficiently wacky GPA layout, but I guess those are cases that simply nobody's run into yet.
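
For context, the reason exhaustion would bite: once the opportunistic
SAC attempt fails, iommu_dma_alloc_iova() simply retries with the
device's full DMA limit, so allocations silently move above 4GB.
Paraphrased from the same function, so treat the exact shape as
approximate:

	/* The SAC attempt from the hunk above, then the fallback: */
	if (!iova)
		iova = alloc_iova_fast(iovad, iova_len, dma_limit >> shift,
				       true);

	return (dma_addr_t)iova << shift;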

Does the firmware actually report any upper address constraint such that Sebastian's IVRS aperture patches might help?

Robin.
