Hi Eric,

On 2021/3/19 21:15, Auger Eric wrote:
> Hi Keqian,
> 
> On 3/2/21 9:35 AM, Keqian Zhu wrote:
>> Hi Eric,
>>
>> On 2021/2/24 4:56, Eric Auger wrote:
>>> On attach_pasid_table() we program STE S1 related info set
>>> by the guest into the actual physical STEs. At minimum
>>> we need to program the context descriptor GPA and compute
>>> whether the stage1 is translated/bypassed or aborted.
>>>
>>> On detach, the stage 1 config is unset and the abort flag is
>>> unset.
>>>
>>> Signed-off-by: Eric Auger <eric.au...@redhat.com>
>>>
>> [...]
>>
>>> +
>>> +           /*
>>> +            * we currently support a single CD so s1fmt and s1dss
>>> +            * fields are also ignored
>>> +            */
>>> +           if (cfg->pasid_bits)
>>> +                   goto out;
>>> +
>>> +           smmu_domain->s1_cfg.cdcfg.cdtab_dma = cfg->base_ptr;
>> Only the "cdtab_dma" field of "cdcfg" is set, so we are not able to locate
>> a specific CD using arm_smmu_get_cd_ptr().
>>
>> Maybe we'd better use a specialized function to fill the other fields of
>> "cdcfg", or add a sanity check in arm_smmu_get_cd_ptr() to prevent calling
>> it under nested mode?
>>
>> As we currently only call arm_smmu_get_cd_ptr() during finalise_s1(), no
>> problem shows up. Just a suggestion ;-)
> 
> Forgive me for the delay. Yes, I can indeed make sure that code is not
> called in nested mode. Could you please detail why you would need to
> call arm_smmu_get_cd_ptr()?
I accidentally called this function in nested mode while verifying the SMMU
MPAM feature. :)

Yes, in nested mode the context descriptor table is owned by the guest, so the
hypervisor does not need to care about its contents.
Maybe we'd better add an explicit comment to arm_smmu_get_cd_ptr() so that
callers pay attention to this? :)
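To make the idea concrete, here is a rough sketch of what such a comment plus
sanity check could look like. This is only an illustration on my side, not a
tested change against your series; the function signature matches the current
driver, and the WARN_ON placement is my assumption:

	/*
	 * Hypothetical sketch: document and guard against CD table access
	 * in nested mode.
	 */
	static __le64 *arm_smmu_get_cd_ptr(struct arm_smmu_domain *smmu_domain,
					   u32 ssid)
	{
		/*
		 * In nested mode the CD table is owned by the guest:
		 * s1_cfg.cdcfg.cdtab_dma holds a guest PA programmed by
		 * attach_pasid_table(), so the host must not walk it.
		 */
		if (WARN_ON(smmu_domain->stage == ARM_SMMU_DOMAIN_NESTED))
			return NULL;

		/* ... existing CD table walk unchanged ... */
	}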

Thanks,
Keqian

> 
> Thanks
> 
> Eric
>>
>> Thanks,
>> Keqian
>>
>>
>>> +           smmu_domain->s1_cfg.set = true;
>>> +           smmu_domain->abort = false;
>>> +           break;
>>> +   default:
>>> +           goto out;
>>> +   }
>>> +   spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>>> +   list_for_each_entry(master, &smmu_domain->devices, domain_head)
>>> +           arm_smmu_install_ste_for_dev(master);
>>> +   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>> +   ret = 0;
>>> +out:
>>> +   mutex_unlock(&smmu_domain->init_mutex);
>>> +   return ret;
>>> +}
>>> +
>>> +static void arm_smmu_detach_pasid_table(struct iommu_domain *domain)
>>> +{
>>> +   struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
>>> +   struct arm_smmu_master *master;
>>> +   unsigned long flags;
>>> +
>>> +   mutex_lock(&smmu_domain->init_mutex);
>>> +
>>> +   if (smmu_domain->stage != ARM_SMMU_DOMAIN_NESTED)
>>> +           goto unlock;
>>> +
>>> +   smmu_domain->s1_cfg.set = false;
>>> +   smmu_domain->abort = false;
>>> +
>>> +   spin_lock_irqsave(&smmu_domain->devices_lock, flags);
>>> +   list_for_each_entry(master, &smmu_domain->devices, domain_head)
>>> +           arm_smmu_install_ste_for_dev(master);
>>> +   spin_unlock_irqrestore(&smmu_domain->devices_lock, flags);
>>> +
>>> +unlock:
>>> +   mutex_unlock(&smmu_domain->init_mutex);
>>> +}
>>> +
>>>  static bool arm_smmu_dev_has_feature(struct device *dev,
>>>                                  enum iommu_dev_features feat)
>>>  {
>>> @@ -2939,6 +3026,8 @@ static struct iommu_ops arm_smmu_ops = {
>>>     .of_xlate               = arm_smmu_of_xlate,
>>>     .get_resv_regions       = arm_smmu_get_resv_regions,
>>>     .put_resv_regions       = generic_iommu_put_resv_regions,
>>> +   .attach_pasid_table     = arm_smmu_attach_pasid_table,
>>> +   .detach_pasid_table     = arm_smmu_detach_pasid_table,
>>>     .dev_has_feat           = arm_smmu_dev_has_feature,
>>>     .dev_feat_enabled       = arm_smmu_dev_feature_enabled,
>>>     .dev_enable_feat        = arm_smmu_dev_enable_feature,
>>>
>>
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
