On 6/6/2017 7:05 AM, Joerg Roedel wrote:
> Hey Tom,
Hi Joerg,
> On Mon, Jun 05, 2017 at 02:52:35PM -0500, Tom Lendacky wrote:
> > After reducing the amount of MMIO performed by the IOMMU during
> > operation, perf data shows that flushing the TLB for all protection
> > domains during DMA unmapping is a performance issue. It is not
> > necessary to flush the TLBs for all protection domains, only the
> > protection domains associated with the IOVAs on the flush queue.
> > 
> > Create a separate queue that tracks the protection domains associated
> > with the IOVAs on the flush queue. This new queue optimizes the
> > flushing of TLBs to the required protection domains.
> > 
> > Reviewed-by: Arindam Nath <arindam.n...@amd.com>
> > Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
> > ---
> >  drivers/iommu/amd_iommu.c | 56 ++++++++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 50 insertions(+), 6 deletions(-)
> I also did a major rewrite of the AMD IOMMU queue handling and flushing
> code last week. It is functionally complete and I am currently testing,
> documenting it, and cleaning it up. I pushed the current state of it to
> 
> 	git://git.kernel.org/pub/scm/linux/kernel/git/joro/linux.git amd-iommu
> 
> It's quite intrusive, as it implements a per-domain flush queue and uses
> a ring buffer instead of a real queue. But you can see the details in
> the code.
> 
> Can you please have a look and give it a test in your setup?
I'll try and look at this as soon as I can... I'm sharing the test
setup and I might not be able to get access again for a day or two.
Thanks,
Tom
> Thanks,
> 
> 	Joerg
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu