Dong, Eddie wrote:
>> What about preemption:
>>
>> - vcpu executes lapic code in qemu process context
>
> I don't understand. The LAPIC is in the kernel, how can qemu access
> it? If you mean qemu is calling the APIC KVM syscall, then it already
> disables preemption and takes kvm->lock.
>

I meant qemu executes the KVM_VCPU_RUN ioctl.  kvm->lock does not 
disable preemption (it is a mutex).

Do we really take kvm->lock for local accesses?  That's a significant 
problem, much more than the timer.

>> - process is preempted
>> - timer fires, touches lapic code
>
> See above.
>
>> Furthermore, I question the necessity of this.  Taking a spinlock is a
>> couple dozen cycles on modern processors.  Entering the guest is a
>> couple thousand.  So what are we saving?
>
> I am stuck, can you explain more? We didn't enter the guest more than before.
>

What I'm saying is that the performance improvement is negligible, 
because the VT switch dominates the time.  Taking the lock is fast in 
comparison.

However I do like the simplification that removing the lock brings.

>> (migrating the timer is a good thing though).
>>
>> A different approach might be to wake up the vcpu like we do for irr,
>> with kvm_vcpu_kick(), and let the timer code be handled in process
>> context, so no locks need be taken.
>>
>
> This can be done too, but will that be much more efficient than timer
> migration? Usually the guest APIC timer is around 1ms, which means we
> have to VM Exit the guest at 1000 Hz, with a cost of thousands of
> cycles each time. The process migration, in contrast, can only happen
> at the scheduler quantum (10ms?) and highly depends on the scheduler
> policy. I guess the cost of hrtimer migration won't exceed 1000 cycles
> on a modern processor. And since the migration cost today already
> exceeds hundreds of cycles, adding this one won't change much. So I
> think the cost of using hrtimer migration is much cheaper than
> kvm_vcpu_kick(), but I may be missing something.
>

I meant in addition to timer migration (I really like the timer 
migration part -- it's much more important than lock removal for 
performance). kvm_vcpu_kick() is needed to wake up from halt, or if we 
have races between the timer and task migration.

I'm thinking about something like the remote tlb flush logic:

- hrtimer callback sets a bit in vcpu->requests and calls kvm_vcpu_kick().
   - if the hrtimer and the vcpu are co-located, kvm_vcpu_kick() does nothing
   - if the guest is running on another cpu, it sends an IPI
   - if the vcpu is halted, it wakes it up
- when we reenter the guest, we examine vcpu->requests; if KVM_REQ_LAPIC is
set, we know the timer expired and we run the lapic code (where we can
examine the pending count)
- we also need to update the hlt sleep loop to look at vcpu->requests

So, to summarize:
- timer migration always helps, regardless of what we do
- lock removal helps, IMO, not performance, but stability and maintainability
- kvm is preemptible except in vcpu_load() and guest entry, so if we do 
lock removal we need to do everything in task context (nothing in 
interrupts except raising bits like irr)


Opinions?


_______________________________________________
kvm-devel mailing list
kvm-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/kvm-devel
