> On Jun 15, 2021, at 4:25 AM, Robin Murphy <robin.mur...@arm.com> wrote:
> 
> On 2021-06-07 19:25, Nadav Amit wrote:
>> From: Nadav Amit <na...@vmware.com>
>> On virtual machines, software must flush the IOTLB after each page table
>> entry update.
>> The iommu_map_sg() code iterates through the given scatter-gather list
>> and invokes iommu_map() for each element, which calls into the vendor
>> IOMMU driver through an iommu_ops callback. As a result, a single sg
>> mapping may lead to multiple IOTLB flushes.
>> Fix this by adding an amd_iommu_iotlb_sync_map() callback and flushing
>> once at that point, after all the sg mappings have been set.
>> This follows and was inspired by commit 933fcd01e97e2
>> ("iommu/vt-d: Add iotlb_sync_map callback").
>> Cc: Joerg Roedel <j...@8bytes.org>
>> Cc: Will Deacon <w...@kernel.org>
>> Cc: Jiajun Cao <caojia...@vmware.com>
>> Cc: Robin Murphy <robin.mur...@arm.com>
>> Cc: Lu Baolu <baolu...@linux.intel.com>
>> Cc: iommu@lists.linux-foundation.org
>> Cc: linux-ker...@vger.kernel.org
>> Signed-off-by: Nadav Amit <na...@vmware.com>
>> ---
>>  drivers/iommu/amd/iommu.c | 15 ++++++++++++---
>>  1 file changed, 12 insertions(+), 3 deletions(-)
>> diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
>> index 128f2e889ced..dd23566f1db8 100644
>> --- a/drivers/iommu/amd/iommu.c
>> +++ b/drivers/iommu/amd/iommu.c
>> @@ -2027,6 +2027,16 @@ static int amd_iommu_attach_device(struct iommu_domain *dom,
>>      return ret;
>>  }
>>
>> +static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
>> +                                 unsigned long iova, size_t size)
>> +{
>> +    struct protection_domain *domain = to_pdomain(dom);
>> +    struct io_pgtable_ops *ops = &domain->iop.iop.ops;
>> +
>> +    if (ops->map)
> 
> Not too critical since you're only moving existing code around, but is 
> ops->map ever not set? Either way the check ends up looking rather 
> out-of-place here :/
> 
> It's not very clear what the original intent was - I do wonder whether it's 
> supposed to be related to PAGE_MODE_NONE, but given that amd_iommu_map() has 
> an explicit check and errors out early in that case, we'd never get here 
> anyway. Possibly something to come back and clean up later?
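
To recap the flow the patch description refers to, this is roughly what the
generic scatter-gather path looks like (a simplified sketch, not the actual
drivers/iommu/iommu.c code; the helper name sketch_map_sg is made up):

	/*
	 * Each scatterlist element is mapped through iommu_map(), which ends
	 * up in the vendor driver. With an iotlb_sync_map callback present,
	 * the IOTLB flush can be issued once for the whole mapped range
	 * instead of once per element.
	 */
	static size_t sketch_map_sg(struct iommu_domain *domain,
				    unsigned long iova, struct scatterlist *sg,
				    unsigned int nents, int prot)
	{
		const struct iommu_ops *ops = domain->ops;
		struct scatterlist *s;
		size_t mapped = 0;
		unsigned int i;

		for_each_sg(sg, s, nents, i) {
			if (iommu_map(domain, iova + mapped, sg_phys(s),
				      s->length, prot))
				goto out_err;
			mapped += s->length;
		}

		/* One flush for the whole list, not one per element. */
		if (ops->iotlb_sync_map)
			ops->iotlb_sync_map(domain, iova, mapped);

		return mapped;

	out_err:
		iommu_unmap(domain, iova, mapped);
		return 0;
	}

With iotlb_sync_map wired up, the flush happens once per sg list rather than
once per mapped element, which is what this patch adds for AMD.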

[ +Suravee ]

According to what I see in the git log, the checks for ops->map (as well as
ops->unmap) were introduced relatively recently by Suravee [1] in preparation
for the AMD IOMMU v2 page tables [2]. Since I do not know what his plans are,
I prefer not to touch this code.

[1] https://lore.kernel.org/linux-iommu/20200923101442.73157-13-suravee.suthikulpa...@amd.com/
[2] https://lore.kernel.org/linux-iommu/20200923101442.73157-1-suravee.suthikulpa...@amd.com/
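
For anyone following the ops->map question: the reason the check looks out of
place, as Robin notes, is that the map path already bails out before any
mapping is established when the io-pgtable ops are not usable, so sync_map
should only run after a successful ops->map call. Roughly (a simplified
sketch, not verbatim drivers/iommu/amd/iommu.c; prot conversion omitted):

	/*
	 * Simplified shape of the AMD map callback: with PAGE_MODE_NONE (or
	 * a missing ops->map) the call fails early, so no mapping is created
	 * and iotlb_sync_map has nothing meaningful to flush anyway.
	 */
	static int sketch_amd_iommu_map(struct iommu_domain *dom,
					unsigned long iova, phys_addr_t paddr,
					size_t page_size, int prot, gfp_t gfp)
	{
		struct protection_domain *domain = to_pdomain(dom);
		struct io_pgtable_ops *ops = &domain->iop.iop.ops;

		if (domain->iop.mode == PAGE_MODE_NONE)
			return -EINVAL;

		if (!ops->map)
			return -EINVAL;

		return ops->map(ops, iova, paddr, page_size, prot, gfp);
	}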