Joerg,

On 1/17/19 3:44 PM, Suravee Suthikulpanit wrote:
> Joerg,
> 
> On 1/17/19 12:08 AM, j...@8bytes.org wrote:
>> On Wed, Jan 16, 2019 at 02:08:55PM +0000, Suthikulpanit, Suravee wrote:
>>> Actually, I am not sure how we would be missing the flush on the last 
>>> device.
>>> In my test, I am seeing the flush command being issued correctly during
>>> vfio_unmap_unpin(), which is after all devices are detached.
>>> Although, I might be missing your point here. Could you please elaborate?
>>
>> Okay, you are right, the patch effectively adds an unconditional flush of
>> the domain on all IOMMUs when the last device is detached. So it is
>> correct in that regard. But that code path is also used in the
>> iommu_unmap() path.
>>
>> The problem now is that, with your change, we send flush commands to all
>> IOMMUs in the unmap path even when no device is attached to the domain.
>> This will hurt performance there, no?
>>
>> Regards,
>>
>>     Joerg
>>
> 
> Sounds like we need a way to track state of each IOMMU for a domain.
> What if we define the following:
> 
>    enum IOMMU_DOMAIN_STATES {
>        DOMAIN_FREE     = -1,
>        DOMAIN_DETACHED =  0,
>        DOMAIN_ATTACHED =  1,  /* values >= 1 mean attached */
>    };
> 
> We should be able to use the dev_iommu[] array to help track the state:
>      - In amd_iommu_domain_alloc(), we initialize the array to DOMAIN_FREE.
>      - In do_attach(), we change to DOMAIN_ATTACHED, or increment the count
>        if it is already in the DOMAIN_ATTACHED state.
>      - In do_detach(), we change to DOMAIN_DETACHED.
> 
> Then, in __domain_flush_pages(), we issue the command only when
> dev_iommu[] >= 0, i.e. we skip IOMMUs where the domain was never attached.
> This should preserve the previous behavior and only add the flushing
> condition for the specific IOMMU in the detached state. Please let me know
> what you think.
> 
> Regards,
> Suravee
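
For illustration, the state tracking proposed above could be sketched roughly
as below. This is only a sketch, not actual driver code: names such as
MAX_IOMMUS, domain_init(), and need_flush() are placeholders, and do_detach()
is interpreted here as decrementing a per-IOMMU device count rather than
unconditionally setting DOMAIN_DETACHED.

```c
#include <assert.h>

#define MAX_IOMMUS 32	/* placeholder bound for the sketch */

/*
 * Hypothetical per-domain state, per the proposal:
 *   dev_iommu[i] == -1  ->  DOMAIN_FREE (domain never attached on IOMMU i)
 *   dev_iommu[i] ==  0  ->  DOMAIN_DETACHED (last device was detached)
 *   dev_iommu[i] >=  1  ->  DOMAIN_ATTACHED (count of attached devices)
 */
enum iommu_domain_state {
	DOMAIN_FREE     = -1,
	DOMAIN_DETACHED =  0,
	DOMAIN_ATTACHED =  1,	/* values >= 1 mean attached */
};

struct protection_domain {
	int dev_iommu[MAX_IOMMUS];
};

/* amd_iommu_domain_alloc() would initialize every slot to DOMAIN_FREE. */
static void domain_init(struct protection_domain *dom)
{
	for (int i = 0; i < MAX_IOMMUS; i++)
		dom->dev_iommu[i] = DOMAIN_FREE;
}

/* do_attach(): move to DOMAIN_ATTACHED, or bump the count if already there. */
static void do_attach(struct protection_domain *dom, int iommu_idx)
{
	if (dom->dev_iommu[iommu_idx] < DOMAIN_ATTACHED)
		dom->dev_iommu[iommu_idx] = DOMAIN_ATTACHED;
	else
		dom->dev_iommu[iommu_idx]++;
}

/* do_detach(): drop the count; the last detach lands on DOMAIN_DETACHED. */
static void do_detach(struct protection_domain *dom, int iommu_idx)
{
	if (dom->dev_iommu[iommu_idx] > DOMAIN_DETACHED)
		dom->dev_iommu[iommu_idx]--;
}

/*
 * __domain_flush_pages() would then issue the flush command only when
 * dev_iommu[] >= 0, skipping IOMMUs the domain never touched while still
 * flushing an IOMMU whose last device was just detached.
 */
static int need_flush(const struct protection_domain *dom, int iommu_idx)
{
	return dom->dev_iommu[iommu_idx] >= DOMAIN_DETACHED;
}
```

With this scheme the unmap path on a device-less domain only flushes IOMMUs
that actually held the domain at some point, which should address the
performance concern raised above.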

By the way, I just sent v2 of this patch, which might make this clearer.

Regards,
Suravee
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu
