Hi Marc,

On 7/23/19 2:43 PM, Marc Zyngier wrote:
> On 23/07/2019 13:25, Auger Eric wrote:
>> Hi Marc,
>>
>> On 7/22/19 12:54 PM, Marc Zyngier wrote:
>>> Hi Eric,
>>>
>>> On 01/07/2019 13:38, Auger Eric wrote:
>>>> Hi Marc,
>>>>
>>>> On 6/11/19 7:03 PM, Marc Zyngier wrote:
>>>>> The LPI translation cache needs to be discarded when an ITS command
>>>>> may affect the translation of an LPI (DISCARD and MAPD with V=0) or
>>>>> the routing of an LPI to a redistributor with disabled LPIs (MOVI,
>>>>> MOVALL).
>>>>>
>>>>> We decide to perform a full invalidation of the cache, irrespective
>>>>> of the LPI that is affected. Commands are supposed to be rare enough
>>>>> that it doesn't matter.
>>>>>
>>>>> Signed-off-by: Marc Zyngier <marc.zyng...@arm.com>
>>>>> ---
>>>>>  virt/kvm/arm/vgic/vgic-its.c | 8 ++++++++
>>>>>  1 file changed, 8 insertions(+)
>>>>>
>>>>> diff --git a/virt/kvm/arm/vgic/vgic-its.c b/virt/kvm/arm/vgic/vgic-its.c
>>>>> index 9b6b66204b97..5254bb762e1b 100644
>>>>> --- a/virt/kvm/arm/vgic/vgic-its.c
>>>>> +++ b/virt/kvm/arm/vgic/vgic-its.c
>>>>> @@ -733,6 +733,8 @@ static int vgic_its_cmd_handle_discard(struct kvm *kvm, struct vgic_its *its,
>>>>>            * don't bother here since we clear the ITTE anyway and the
>>>>>            * pending state is a property of the ITTE struct.
>>>>>            */
>>>>> +         vgic_its_invalidate_cache(kvm);
>>>>> +
>>>>>           its_free_ite(kvm, ite);
>>>>>           return 0;
>>>>>   }
>>>>> @@ -768,6 +770,8 @@ static int vgic_its_cmd_handle_movi(struct kvm *kvm, struct vgic_its *its,
>>>>>   ite->collection = collection;
>>>>>   vcpu = kvm_get_vcpu(kvm, collection->target_addr);
>>>>>  
>>>>> + vgic_its_invalidate_cache(kvm);
>>>>> +
>>>>>   return update_affinity(ite->irq, vcpu);
>>>>>  }
>>>>>  
>>>>> @@ -996,6 +1000,8 @@ static void vgic_its_free_device(struct kvm *kvm, struct its_device *device)
>>>>>   list_for_each_entry_safe(ite, temp, &device->itt_head, ite_list)
>>>>>           its_free_ite(kvm, ite);
>>>>>  
>>>>> + vgic_its_invalidate_cache(kvm);
>>>>> +
>>>>>   list_del(&device->dev_list);
>>>>>   kfree(device);
>>>>>  }
>>>>> @@ -1249,6 +1255,8 @@ static int vgic_its_cmd_handle_movall(struct kvm *kvm, struct vgic_its *its,
>>>>>           vgic_put_irq(kvm, irq);
>>>>>   }
>>>>>  
>>>>> + vgic_its_invalidate_cache(kvm);
>>>> All the commands are executed with the its_lock held. Now we don't take
>>>> it anymore on the fast cache injection path. Don't we have a window
>>>> where the move has been applied at table level and the cache is not yet
>>>> invalidated? Same question for vgic_its_free_device().
>>>
>>> There is definitely a race, but that race is invisible from the guest's
>>> perspective. The guest can only assume that the command has taken effect
>>> once a SYNC command has been executed, and it cannot observe that the
>>> SYNC command has been executed before we have invalidated the cache.
>>>
>>> Does this answer your question?
>>
>> OK, makes sense. Thank you for the clarification.
>>
>> Another question: don't we need to invalidate the cache on MAPC V=0 as
>> well? Removing the mapping of the collection results in interrupts
>> belonging to that collection being ignored. If we don't flush the
>> cache, won't the pending bit still end up being set?
> 
> Yup, that's a good point. I think I had that at some point, and ended up 
> dropping it, probably missing the point that the interrupt would be made 
> pending.
> 
> I'll add this:
> 
> @@ -1218,6 +1218,7 @@ static int vgic_its_cmd_handle_mapc(struct kvm *kvm, struct vgic_its *its,
>  
>       if (!valid) {
>               vgic_its_free_collection(its, coll_id);
> +             vgic_its_invalidate_cache(kvm);
>       } else {
>               collection = find_collection(its, coll_id);
>  
Yep, with that change, feel free to add my R-b:
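
For the archives, here is a minimal sketch of what the invalidation
helper called by these hunks looks like, if I read the earlier patches
in this series correctly (the entry layout, the lpi_translation_cache
list head, and __vgic_put_lpi_locked() are my reading of those patches,
not something this patch defines; treat it as illustrative):

static void vgic_its_invalidate_cache(struct kvm *kvm)
{
	struct vgic_dist *dist = &kvm->arch.vgic;
	struct vgic_translation_cache_entry *cte;
	unsigned long flags;

	/* Serialize against the lookup on the injection fast path */
	raw_spin_lock_irqsave(&dist->lpi_list_lock, flags);

	list_for_each_entry(cte, &dist->lpi_translation_cache, entry) {
		/*
		 * Used entries are kept at the head of the list, so
		 * an empty entry means nothing valid lives past this
		 * point.
		 */
		if (!cte->irq)
			break;

		/* Drop the reference the cache held on this LPI */
		__vgic_put_lpi_locked(kvm, cte->irq);
		cte->irq = NULL;
	}

	raw_spin_unlock_irqrestore(&dist->lpi_list_lock, flags);
}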

Reviewed-by: Eric Auger <eric.au...@redhat.com>

Thanks

Eric
> 
> Thanks,
> 
>       M.
> 