On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:
> Avi Kivity wrote:
>> On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
>>>> What was the performance hit?  What was your I/O setup (image
>>>> format, using aio?)
>>>
>>> The issue only happens when the vcpu count is over-committed (e.g.
>>> vcpu/pcpu > 2) and the physical cpus are saturated.  For example, when
>>> running webbench in a Windows guest under these conditions, its
>>> performance drops by 80%.  In our experiment we are using an image
>>> file through virtio, and I think aio should be used by default as well.

>> Is this on a machine that does pause-loop exits?  The current handling
>> of PLE is very suboptimal.  With proper directed yield we should be
>> much better there.

>> Without PLE, we need paravirtualized spinlocks, no way around it.
>
> PLE can eliminate the issue to some extent, and a pv solution should be
> helpful as well.  But for Windows guests running on machines without
> PLE, we still need to enhance the host side to resolve the issue.

Well, was this on a machine with PLE or without PLE?
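
For readers following the thread, here is a minimal sketch of the
pv-spinlock idea being discussed: the guest spins a bounded number of
iterations, then hypercalls into the host to halt the vcpu until the lock
is released, instead of burning its whole timeslice.  The pv_wait() /
pv_kick_next() hypercall wrappers and the SPIN_THRESHOLD value are
illustrative assumptions, not the actual Linux/KVM interface.

/*
 * Sketch of a paravirtualized ticket spinlock: spin briefly, then ask
 * the hypervisor to halt this vcpu until the holder releases the lock.
 */

#define SPIN_THRESHOLD 1024

struct pv_spinlock {
        unsigned int head;      /* ticket currently being served */
        unsigned int tail;      /* next ticket to hand out */
};

/* Hypothetical hypercalls: block until the lock's head reaches our
 * ticket; wake the vcpu waiting on the next ticket. */
extern void pv_wait(unsigned int *addr, unsigned int val);
extern void pv_kick_next(struct pv_spinlock *lock);

void pv_spin_lock(struct pv_spinlock *lock)
{
        unsigned int ticket =
                __atomic_fetch_add(&lock->tail, 1, __ATOMIC_RELAXED);

        for (;;) {
                unsigned int loops;

                for (loops = 0; loops < SPIN_THRESHOLD; loops++) {
                        if (__atomic_load_n(&lock->head,
                                            __ATOMIC_ACQUIRE) == ticket)
                                return;  /* our turn; lock acquired */
                        __builtin_ia32_pause();
                }
                /*
                 * Still contended after SPIN_THRESHOLD iterations: the
                 * holder's vcpu is probably preempted, so sleep in the
                 * host until the unlocker kicks us (spurious wakeups
                 * are handled by re-entering the spin loop).
                 */
                pv_wait(&lock->head, ticket);
        }
}

void pv_spin_unlock(struct pv_spinlock *lock)
{
        __atomic_fetch_add(&lock->head, 1, __ATOMIC_RELEASE);
        pv_kick_next(lock);     /* wake whoever holds the next ticket */
}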

>> Spin loops need to be addressed first; they are known to kill
>> performance in overcommit situations.
>
> Even in the overcommit case, if the vcpu threads of one qemu are not
> scheduled or pulled to the same logical processor, the performance drop
> is tolerable, as in Xen's case today.  But KVM suffers an additional
> performance loss, because the host's scheduler actively pulls these
> vcpu threads together.


Can you quantify this loss?  Give examples of what happens?
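
To make the spin-loop problem concrete: when a lock holder's vcpu is
preempted, every waiter burns its whole timeslice spinning, which is the
overcommit collapse described above.  With PLE hardware, the host can
detect the spin and perform a directed yield to a sibling vcpu that can
actually make progress (likely the lock holder).  A rough sketch follows;
the vcpu/vm structs and the yield_to()/yield() scheduler hooks are
simplified stand-ins, not the real kvm code.

struct task;                            /* host thread backing a vcpu */
extern int yield_to(struct task *t);    /* hypothetical scheduler hooks */
extern void yield(void);

struct vcpu {
        struct task *task;
        int running;                    /* currently on a physical cpu? */
};

struct vm {
        struct vcpu *vcpus;
        int nr_vcpus;
        int last_boosted;               /* round-robin starting point */
};

/*
 * Called when a guest spins long enough to trigger a PLE exit.  The
 * spinning vcpu is almost certainly waiting on a lock whose holder was
 * preempted, so donate the remaining timeslice to a preempted sibling
 * vcpu of the same guest instead of spinning uselessly.
 */
void handle_pause_loop_exit(struct vm *vm, struct vcpu *me)
{
        int i;

        for (i = 0; i < vm->nr_vcpus; i++) {
                int idx = (vm->last_boosted + 1 + i) % vm->nr_vcpus;
                struct vcpu *v = &vm->vcpus[idx];

                if (v == me || v->running)
                        continue;       /* skip self and running vcpus */

                if (yield_to(v->task)) {
                        vm->last_boosted = idx;
                        return;         /* timeslice donated to v */
                }
        }
        yield();        /* no candidate found; plain yield as fallback */
}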


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.
