>-----Original Message-----
>From: CLEMENT MATHIEU--DRIF <clement.mathieu--d...@eviden.com>
>Subject: Re: [PATCH rfcv2 09/17] intel_iommu: Flush stage-1 cache in iotlb invalidation
>
>Hi Zhenzhong
>
>On 22/05/2024 08:23, Zhenzhong Duan wrote:
>>
>>
>> According to spec, Page-Selective-within-Domain Invalidation (11b):
>>
>> 1. IOTLB entries caching second-stage mappings (PGTT=010b) or pass-through
>> (PGTT=100b) mappings associated with the specified domain-id and the
>> input-address range are invalidated.
>> 2. IOTLB entries caching first-stage (PGTT=001b) or nested (PGTT=011b)
>> mappings associated with the specified domain-id are invalidated.
>>
>> So per the spec definition, Page-Selective-within-Domain Invalidation
>> needs to flush cached first-stage and nested IOTLB entries as well.
>>
>> We don't support nested yet and pass-through mappings are never cached,
>> so the IOTLB cache holds only first-stage and second-stage mappings.
>>
>> Add a pgtt field in VTDIOTLBEntry to mark the PGTT type of the mapping and
>> invalidate entries based on PGTT type.
>>
>> Signed-off-by: Zhenzhong Duan <zhenzhong.d...@intel.com>
>> ---
>>   include/hw/i386/intel_iommu.h |  1 +
>>   hw/i386/intel_iommu.c         | 20 +++++++++++++++++---
>>   2 files changed, 18 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/hw/i386/intel_iommu.h b/include/hw/i386/intel_iommu.h
>> index 011f374883..b0d5b5a5be 100644
>> --- a/include/hw/i386/intel_iommu.h
>> +++ b/include/hw/i386/intel_iommu.h
>> @@ -156,6 +156,7 @@ struct VTDIOTLBEntry {
>>       uint64_t pte;
>>       uint64_t mask;
>>       uint8_t access_flags;
>> +    uint8_t pgtt;
>>   };
>>
>>   /* VT-d Source-ID Qualifier types */
>> diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
>> index 0801112e2e..0078bad9d4 100644
>> --- a/hw/i386/intel_iommu.c
>> +++ b/hw/i386/intel_iommu.c
>> @@ -287,9 +287,21 @@ static gboolean vtd_hash_remove_by_page(gpointer key, gpointer value,
>>       VTDIOTLBPageInvInfo *info = (VTDIOTLBPageInvInfo *)user_data;
>>       uint64_t gfn = (info->addr >> VTD_PAGE_SHIFT_4K) & info->mask;
>>       uint64_t gfn_tlb = (info->addr & entry->mask) >> VTD_PAGE_SHIFT_4K;
>> -    return (entry->domain_id == info->domain_id) &&
>> -            (((entry->gfn & info->mask) == gfn) ||
>> -             (entry->gfn == gfn_tlb));
>> +
>> +    if (entry->domain_id != info->domain_id) {
>> +        return false;
>> +    }
>> +
>> +    /*
>> +     * According to spec, IOTLB entries caching first-stage (PGTT=001b) or
>> +     * nested (PGTT=011b) mapping associated with specified domain-id are
>> +     * invalidated. Nested isn't supported yet, so only need to check 001b.
>> +     */
>> +    if (entry->pgtt == VTD_SM_PASID_ENTRY_FLT) {
>> +        return true;
>> +    }
>> +
>> +    return (entry->gfn & info->mask) == gfn || entry->gfn == gfn_tlb;
>>   }
>>
>>   /* Reset all the gen of VTDAddressSpace to zero and set the gen of
>> @@ -382,6 +394,8 @@ static void vtd_update_iotlb(IntelIOMMUState *s, uint16_t source_id,
>>       entry->access_flags = access_flags;
>>       entry->mask = vtd_slpt_level_page_mask(level);
>>       entry->pasid = pasid;
>> +    entry->pgtt = s->scalable_modern ? VTD_SM_PASID_ENTRY_FLT
>> +                                     : VTD_SM_PASID_ENTRY_SLT;
>What about passing pgtt as a parameter so that the translation type
>detection is done only once (in vtd_do_iommu_translate)?

Good idea, will do.
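
Just to confirm we are thinking of the same shape, here is a rough, untested
sketch of the idea as standalone toy code (not against the real tree; the
struct, function names and PGTT values below are simplified stand-ins for the
QEMU ones):

/*
 * Toy model of the refactoring: the translate path determines the PGTT
 * once and passes it down, so the update helper no longer looks at
 * s->scalable_modern itself.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-in encodings only; the real macros live in intel_iommu_internal.h. */
#define FAKE_PGTT_FLT 0x1   /* first-stage (001b)  */
#define FAKE_PGTT_SLT 0x2   /* second-stage (010b) */

typedef struct {
    uint64_t gfn;
    uint16_t domain_id;
    uint8_t pgtt;           /* new field, as in the patch */
} FakeIOTLBEntry;

/* The update helper just stores whatever PGTT the caller decided on. */
static void fake_update_iotlb(FakeIOTLBEntry *entry, uint64_t gfn,
                              uint16_t domain_id, uint8_t pgtt)
{
    entry->gfn = gfn;
    entry->domain_id = domain_id;
    entry->pgtt = pgtt;
}

/* Translation-type detection happens once, in the translate path. */
static void fake_do_translate(FakeIOTLBEntry *entry, bool scalable_modern)
{
    uint8_t pgtt = scalable_modern ? FAKE_PGTT_FLT : FAKE_PGTT_SLT;

    fake_update_iotlb(entry, 0x1234, 1, pgtt);
}

int main(void)
{
    FakeIOTLBEntry e;

    fake_do_translate(&e, true);
    printf("pgtt=%u\n", e.pgtt);
    return 0;
}

In the real patch that would mean vtd_update_iotlb() gains a pgtt argument
and vtd_do_iommu_translate() (or whichever caller already knows the
translation type) fills it in.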

Thanks
Zhenzhong

>>
>>       key->gfn = gfn;
>>       key->sid = source_id;
>> --
>> 2.34.1
>>