>>> +    for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
>>> +            irq_ret = nvidia_smmu_global_fault_inst(irq, smmu, inst);
>>> +            if (irq_ret == IRQ_HANDLED)
>>> +                    return irq_ret;
>>
>> Any chance there could be more than one SMMU faulting by the time we 
>> service the interrupt?

>It certainly seems plausible if the interconnect is automatically 
>load-balancing requests across the SMMU instances - say a driver bug caused a 
>buffer to be unmapped too early, there could be many in-flight accesses to 
>parts of that buffer that aren't all taking the same path and thus could now 
>fault in parallel.
>[ And anyone inclined to nitpick global vs. context faults, s/unmap a 
>buffer/tear down a domain/ ;) ]
>Either way I think it would be easier to reason about if we just handled these 
>like a typical shared interrupt and always checked all the instances.

Agreed, checking all the instances every time would be the better approach.
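Something along these lines (untested sketch; it assumes 'ret' is declared as an
irqreturn_t initialized to IRQ_NONE at the top of the handler and reuses the
nvidia_smmu_global_fault_inst() helper from this patch) would latch the result
instead of returning on the first hit:

	for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
		irqreturn_t irq_ret;

		irq_ret = nvidia_smmu_global_fault_inst(irq, smmu, inst);
		/* Don't return early: another instance may be faulting too. */
		if (irq_ret == IRQ_HANDLED)
			ret = IRQ_HANDLED;
	}

	return ret;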

>>> +            for (idx = 0; idx < smmu->num_context_banks; idx++) {
>>> +                    irq_ret = nvidia_smmu_context_fault_bank(irq, smmu,
>>> +                                                             idx, inst);
>>> +
>>> +                    if (irq_ret == IRQ_HANDLED)
>>> +                            return irq_ret;
>>
>> Any reason why we don't check all banks?

>As above, we certainly shouldn't bail out without checking the bank for the 
>offending domain across all of its instances, and I guess the way this works 
>means that we would have to iterate all the banks to achieve that.

With the shared IRQ line, context fault identification is already not optimal.
Reading all the context banks every time adds MMIO read overhead, but it should
not hurt real use cases, since these faults only occur when there are bugs.
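
For the context fault path, the same "check everything" approach could look
roughly like this (untested fragment; it assumes the
nvidia_smmu_context_fault_bank() helper from this patch and a 'ret' accumulator
initialized to IRQ_NONE):

	for (inst = 0; inst < nvidia_smmu->num_inst; inst++) {
		/*
		 * The line is shared, so scan every context bank of every
		 * instance and latch IRQ_HANDLED rather than bailing out on
		 * the first bank that reports a fault.
		 */
		for (idx = 0; idx < smmu->num_context_banks; idx++) {
			irqreturn_t irq_ret;

			irq_ret = nvidia_smmu_context_fault_bank(irq, smmu,
								 idx, inst);
			if (irq_ret == IRQ_HANDLED)
				ret = IRQ_HANDLED;
		}
	}

	return ret;

The extra MMIO reads only matter in the fault path, so the cost should be
acceptable.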

-KR