On 13/07/2017 04:53, peng.h...@zte.com.cn wrote:
> > I think what you're seeing is a race like this:
> > 
> >     VCPU 0                           VCPU 1
> >     [qemu] kvm_get_mp_state
> >       [kvm] kvm_apic_accept_events
> >                                     __apic_accept_irq
> >                                     set KVM_APIC_SIPI
> >     [qemu] kvm_get_vcpu_events
>
> I suspect that the sipi_vector is sometimes lost when hot-plugging a CPU.
> 
> VCPU0                               VCPU1 (hotplug)
> [kvm] apic_send_ipi
>   [kvm] __apic_accept_irq
>     [kvm] set vcpu1.sipi_vector      
>     [kvm] set KVM_APIC_SIPI
> [kvm] wakeup vcpu1 thread           [qemu] kvm_put_vcpu_events
>                                        [kvm] set vcpu1.sipi_vector=0
>                                        [kvm] kvm_apic_accept_events
>                                        [kvm] kvm_vcpu_deliver_sipi_vector(sipi_vector=0)
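
For reference, the kernel-side logic involved is roughly the following
(abridged paraphrase of arch/x86/kvm/x86.c and arch/x86/kvm/lapic.c, not
verbatim):

/* KVM_SET_VCPU_EVENTS: when userspace passes the flag, the in-kernel
 * APIC's copy of the vector is overwritten unconditionally... */
if (events->flags & KVM_VCPUEVENT_VALID_SIPI_VECTOR &&
    lapic_in_kernel(vcpu))
        vcpu->arch.apic->sipi_vector = events->sipi_vector;

/* ...and kvm_apic_accept_events later delivers whatever is stored there
 * once it sees the pending SIPI: */
if (test_bit(KVM_APIC_SIPI, &apic->pending_events) &&
    vcpu->arch.mp_state == KVM_MP_STATE_INIT_RECEIVED) {
        kvm_vcpu_deliver_sipi_vector(vcpu, apic->sipi_vector);
        vcpu->arch.mp_state = KVM_MP_STATE_RUNNABLE;
}

So if QEMU writes back a stale sipi_vector between the two steps on VCPU1,
the SIPI ends up being delivered with vector 0.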

So I suggest the following changes in QEMU:

- reorder the calls: kvm_get_vcpu_events first, then kvm_get_mp_state, then
the others.  This is just to be safe and ensures that a KVM_APIC_SIPI event
is not lost.

- don't set KVM_VCPUEVENT_VALID_SIPI_VECTOR unless the mp_state is
KVM_MP_STATE_SIPI_RECEIVED (which will only happen for old kernels).

- call kvm_put_mp_state after kvm_put_vcpu_events, so that KVM_APIC_SIPI
is only set after the sipi_vector is in place (a rough sketch of all three
changes follows).
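
Roughly something like this against target/i386/kvm.c (untested sketch;
only the lines that would change are shown and variable names are
approximate):

/* kvm_put_vcpu_events(): only advertise a SIPI vector when the vCPU
 * really is in SIPI_RECEIVED state, i.e. on old kernels that do not
 * handle INIT/SIPI in the kernel. */
events.sipi_vector = env->sipi_vector;
events.flags = 0;
if (level >= KVM_PUT_RESET_STATE) {
    events.flags |= KVM_VCPUEVENT_VALID_NMI_PENDING;
    if (env->mp_state == KVM_MP_STATE_SIPI_RECEIVED) {
        events.flags |= KVM_VCPUEVENT_VALID_SIPI_VECTOR;
    }
}

/* kvm_arch_get_registers(): read events before mp_state so that a SIPI
 * arriving in between is not lost. */
ret = kvm_get_vcpu_events(cpu);
if (ret < 0) {
    return ret;
}
ret = kvm_get_mp_state(cpu);
if (ret < 0) {
    return ret;
}

/* kvm_arch_put_registers(): write events (and thus sipi_vector) before
 * mp_state, so that KVM_APIC_SIPI only becomes pending once the vector
 * is already in place. */
ret = kvm_put_vcpu_events(cpu, level);
if (ret < 0) {
    return ret;
}
ret = kvm_put_mp_state(cpu);
if (ret < 0) {
    return ret;
}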

Thanks,

Paolo
