> On Jun 15, 2021, at 3:29 AM, Will Deacon <w...@kernel.org> wrote:
> 
> On Fri, Jun 11, 2021 at 09:50:31AM -0700, Nadav Amit wrote:
>> 
>> 
>>> On Jun 11, 2021, at 6:57 AM, Will Deacon <w...@kernel.org> wrote:
>>> 
>>> On Mon, Jun 07, 2021 at 11:25:39AM -0700, Nadav Amit wrote:
>>>> From: Nadav Amit <na...@vmware.com>
>>>> 
>>>> Refactor iommu_iotlb_gather_add_page() and factor out the logic that
>>>> detects whether IOTLB gather range and a new range are disjoint. To be
>>>> used by the next patch that implements different gathering logic for
>>>> AMD.
>>>> 
>>>> Cc: Joerg Roedel <j...@8bytes.org>
>>>> Cc: Will Deacon <w...@kernel.org>
>>>> Cc: Jiajun Cao <caojia...@vmware.com>
>>>> Cc: Robin Murphy <robin.mur...@arm.com>
>>>> Cc: Lu Baolu <baolu...@linux.intel.com>
>>>> Cc: iommu@lists.linux-foundation.org
>>>> Cc: linux-ker...@vger.kernel.org
>>>> Signed-off-by: Nadav Amit <na...@vmware.com>
>>>> ---
>>>> include/linux/iommu.h | 41 +++++++++++++++++++++++++++++++++--------
>>>> 1 file changed, 33 insertions(+), 8 deletions(-)
>>> 
>>> [...]
>>> 
>>>> diff --git a/include/linux/iommu.h b/include/linux/iommu.h
>>>> index f254c62f3720..b5a2bfc68fb0 100644
>>>> --- a/include/linux/iommu.h
>>>> +++ b/include/linux/iommu.h
>>>> @@ -497,6 +497,28 @@ static inline void iommu_iotlb_sync(struct iommu_domain *domain,
>>>>    iommu_iotlb_gather_init(iotlb_gather);
>>>> }
>>>> 
>>>> +/**
>>>> + * iommu_iotlb_gather_is_disjoint - Checks whether a new range is disjoint
>>>> + *
>>>> + * @gather: TLB gather data
>>>> + * @iova: start of page to invalidate
>>>> + * @size: size of page to invalidate
>>>> + *
>>>> + * Helper for IOMMU drivers to check whether a new range is and the gathered
>>>> + * range are disjoint.
>>> 
>>> I can't quite parse this. Delete the "is"?
>> 
>> Indeed. Will do (I mean I will do ;-) )
>> 
>>> 
>>>>   For many IOMMUs, flushing the IOMMU in this case is
>>>> + * better than merging the two, which might lead to unnecessary invalidations.
>>>> + */
>>>> +static inline
>>>> +bool iommu_iotlb_gather_is_disjoint(struct iommu_iotlb_gather *gather,
>>>> +                              unsigned long iova, size_t size)
>>>> +{
>>>> +  unsigned long start = iova, end = start + size - 1;
>>>> +
>>>> +  return gather->end != 0 &&
>>>> +          (end + 1 < gather->start || start > gather->end + 1);
>>>> +}
>>>> +
>>>> +
>>>> /**
>>>> * iommu_iotlb_gather_add_range - Gather for address-based TLB invalidation
>>>> * @gather: TLB gather data
>>>> @@ -533,20 +555,16 @@ static inline void iommu_iotlb_gather_add_page(struct iommu_domain *domain,
>>>>                                           struct iommu_iotlb_gather *gather,
>>>>                                           unsigned long iova, size_t size)
>>>> {
>>>> -  unsigned long start = iova, end = start + size - 1;
>>>> -
>>>>    /*
>>>>     * If the new page is disjoint from the current range or is mapped at
>>>>     * a different granularity, then sync the TLB so that the gather
>>>>     * structure can be rewritten.
>>>>     */
>>>> -  if (gather->pgsize != size ||
>>>> -      end + 1 < gather->start || start > gather->end + 1) {
>>>> -          if (gather->pgsize)
>>>> -                  iommu_iotlb_sync(domain, gather);
>>>> -          gather->pgsize = size;
>>>> -  }
>>>> +  if ((gather->pgsize && gather->pgsize != size) ||
>>>> +      iommu_iotlb_gather_is_disjoint(gather, iova, size))
>>>> +          iommu_iotlb_sync(domain, gather);
>>>> 
>>>> +  gather->pgsize = size;
>>> 
>>> Why have you made this unconditional? I think it's ok, but just not sure
>>> if it's necessary or not.
>> 
>> In regard to gather->pgsize, this function had (and has) an
>> invariant, in which gather->pgsize always represents the flushing
>> granularity of its range. Arguably, “size" should never be
>> zero, but let's assume for the sake of discussion that it might.
>> 
>> If “size” equals “gather->pgsize”, then the assignment in
>> question has no impact.
>> 
>> Otherwise, if “size” is non-zero, then iommu_iotlb_sync() would
>> initialize the size and range (see iommu_iotlb_gather_init()),
>> and the invariant is kept.
>> 
>> Otherwise, “size” is zero, and “gather” already holds a range,
>> so gather->pgsize is non-zero and
>> (gather->pgsize && gather->pgsize != size) is true. Therefore,
>> again, iommu_iotlb_sync() would be called and initialize the
>> size.
>> 
>> I think that this change makes the code much simpler to read.
>> It probably has no performance impact, as “gather” is most likely
>> already in the cache and is accessed again shortly afterwards anyway.
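
(One sub-case worth spelling out, which the text above glosses over: if the
gather is still empty, then gather->pgsize and gather->end are both zero,
neither half of the condition fires, and the assignment simply records the
first page size, so the invariant holds there as well. A rough userspace
sketch of the whole flow is at the end of this mail.)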
> 
> Thanks. I was just interested in whether it had a functional impact (I don't
> think it does) or whether it was just cleanup.
> 
> With the updated comment:
> 
> Acked-by: Will Deacon <w...@kernel.org>

Thanks. I will add the explanation to the commit log, but not to the code in 
order not to inflate it too much.
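
Since the explanation is going into the commit log rather than the code, here
is a rough userspace sketch of the resulting flow, in case it helps review.
The struct and helpers below are simplified stand-ins, not the definitions
from include/linux/iommu.h, and the "sync" here just reinitializes the gather
the way iommu_iotlb_sync() does after flushing:

#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in for struct iommu_iotlb_gather. */
struct gather { unsigned long start, end; size_t pgsize; };

static void gather_init(struct gather *g)
{
	g->start = ULONG_MAX;
	g->end = 0;
	g->pgsize = 0;
}

/* Same check as iommu_iotlb_gather_is_disjoint() in the patch. */
static bool gather_is_disjoint(struct gather *g, unsigned long iova, size_t size)
{
	unsigned long start = iova, end = start + size - 1;

	return g->end != 0 &&
	       (end + 1 < g->start || start > g->end + 1);
}

/* Grow the gathered range to cover [iova, iova + size - 1]. */
static void gather_add_range(struct gather *g, unsigned long iova, size_t size)
{
	unsigned long end = iova + size - 1;

	if (g->start > iova)
		g->start = iova;
	if (g->end < end)
		g->end = end;
}

/* Same shape as the refactored iommu_iotlb_gather_add_page(). */
static void gather_add_page(struct gather *g, unsigned long iova, size_t size)
{
	if ((g->pgsize && g->pgsize != size) ||
	    gather_is_disjoint(g, iova, size))
		gather_init(g);		/* stands in for iommu_iotlb_sync() */

	g->pgsize = size;		/* unconditional, as in the patch */
	gather_add_range(g, iova, size);
}

int main(void)
{
	struct gather g;

	gather_init(&g);
	gather_add_page(&g, 0x1000, 0x1000);	/* empty gather: pgsize recorded */
	gather_add_page(&g, 0x2000, 0x1000);	/* adjacent, same size: merged */
	assert(g.start == 0x1000 && g.end == 0x2fff && g.pgsize == 0x1000);

	gather_add_page(&g, 0x10000, 0x1000);	/* disjoint: "synced", restarted */
	assert(g.start == 0x10000 && g.pgsize == 0x1000);

	gather_add_page(&g, 0x200000, 0x200000); /* different size: "synced" */
	assert(g.pgsize == 0x200000);		/* invariant holds in every case */
	return 0;
}

For concreteness, with 4KiB pages and a gathered range of [0x1000, 0x1fff], a
new page at 0x2000 is not disjoint (0x2000 > gather->end + 1 and
0x2fff + 1 < gather->start are both false), so the ranges merge, whereas a new
page at 0x3000 is disjoint (0x3000 > 0x2000) and the gather is flushed first.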
