> Still, I think we have dynamic polling to mitigate this overhead;
> how was it behaving?

Correctly: the polling stopped as soon as the benchmark ended. :)

> I noticed a questionable decision in growing the window:
> we know how long the polling should have been (block_ns), but we do not
> use that information to set the next halt_poll_ns.
> 
> Has something like this been tried?
> 
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index f0fe9d02f6bb..d8dbf50957fc 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2193,7 +2193,7 @@ void kvm_vcpu_block(struct kvm_vcpu *vcpu)
>               /* we had a short halt and our poll time is too small */
>               else if (vcpu->halt_poll_ns < halt_poll_ns &&
>                       block_ns < halt_poll_ns)
> -                     grow_halt_poll_ns(vcpu);
> +                     vcpu->halt_poll_ns = block_ns /* + x ? */;

IIUC the idea was to grow more slowly than jumping straight from,
say, 10 ns to 150 ns.  Taking block_ns into account might also be
useful, but it shouldn't matter much, since the shrinking is very
aggressive.
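
If it helps to see the two policies side by side, here is a minimal
userspace sketch (the 10 us starting window, the doubling grow
factor, and the 300 us wakeup interval follow the scenario discussed
in this thread; the names and constants are hypothetical, not the
kernel's):

#include <stdio.h>

#define GROW_START_NS	10000	/* 10 us starting window */
#define GROW_FACTOR	2

/* Current behaviour: grow geometrically, ignoring block_ns. */
static unsigned long grow_geometric(unsigned long halt_poll_ns)
{
	return halt_poll_ns ? halt_poll_ns * GROW_FACTOR : GROW_START_NS;
}

/* Proposed behaviour: jump straight to the observed block time. */
static unsigned long grow_to_block(unsigned long block_ns)
{
	return block_ns /* + x ? */;
}

int main(void)
{
	unsigned long ns = 0;
	const unsigned long block_ns = 300000;	/* 300 us wakeups */

	while (ns < block_ns) {
		ns = grow_geometric(ns);
		printf("geometric: window now %lu ns\n", ns);
	}
	printf("block_ns:  window jumps to %lu ns\n",
	       grow_to_block(block_ns));
	return 0;
}

Six doublings versus one assignment: that is the difference the
quoted diff is after, though as said above the aggressive shrink
limits how much it matters in practice.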

Paolo

>       } else
>               vcpu->halt_poll_ns = 0;
>  
> 
> It would avoid a case where several halts in a row are interrupted
> after 300 us: on the first one we'd schedule out after 10 us, then
> after 20, 40, 80, and 160 us, and only have a successful poll at
> 320 us.  All of that polling is wasted if the window is reset at
> any point before then (a quick tally follows below the quote).
> 
> (I really don't like benchmarking ...)
> 
> Thanks.
> 
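
For concreteness, a quick tally of the polling cost in the quoted
scenario (the 300 us interval, the 10 us starting window, and the
doubling are taken from the text above; this standalone program is
hypothetical):

#include <stdio.h>

int main(void)
{
	/* Doubling windows tried before the first one long enough to
	 * cover the 300 us halt; each timed-out poll burns the whole
	 * window before we schedule out anyway. */
	unsigned long window_us, wasted_us = 0;

	for (window_us = 10; window_us < 300; window_us *= 2) {
		wasted_us += window_us;
		printf("polled %3lu us in vain, %lu us wasted so far\n",
		       window_us, wasted_us);
	}
	printf("first successful poll with a %lu us window\n", window_us);
	return 0;
}

That comes to 310 us of polling burned before the 320 us window
finally covers the halt, and a reset anywhere along the way discards
the progress and restarts the whole sequence.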
