Hi Lidong,

What do you mean by 'send the same traffic load between...'?

See if my understanding is correct:
You have two identical physical machines (CPU/Memory/NIC...), one (A) runs RHEL6
Beta 2 (2.6.32-60) and the other one (B) runs RHEL6 (2.6.32-71).
Each machine booted 5 identical VMs, and the VMs on machine A (pool A) were
paired up with the VMs on machine B (pool B).  Sending packets between the two
VM pools yielded a 20% host CPU utilization difference.

Did you test bidirectional traffic, i.e. first pool A sends and pool B receives,
then vice versa?

Regards,

HUANG, Zhiteng



-----Original Message-----
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
lidong chen
Sent: Tuesday, November 23, 2010 10:14 AM
To: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
kvm@vger.kernel.org
Subject: Performance test result between per-vhost kthread disable and enable

I tested the performance with the per-vhost kthread disabled and enabled.

Test method:
Send the same traffic load with the per-vhost kthread disabled and with it
enabled, and compare the CPU usage of the host OS.
I ran five VMs on KVM; each of them has five NICs.
The vhost version we used with the per-vhost kthread disabled is RHEL6 Beta 2
(2.6.32-60).
The vhost version we used with the per-vhost kthread enabled is RHEL6 (2.6.32-71).

Test result:
With the per-vhost kthread disabled, the CPU usage of the host OS is 110%.
With the per-vhost kthread enabled, the CPU usage of the host OS is 130%.

In 2.6.32-60, the whole system has only one vhost kthread.
[r...@rhel6-kvm1 ~]# ps -ef | grep vhost
root       973     2  0 Nov22 ?        00:00:00 [vhost]

In 2.6.32-71, the whole system has 25 vhost kthreads (5 VMs x 5 NICs, one per
NIC interface).
[r...@kvm-4slot ~]# ps -ef | grep vhost-
root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
...

Code difference:
In 2.6.32-60, the vhost kthread is created once for the whole system, in
vhost_init():
vhost_workqueue = create_singlethread_workqueue("vhost");

In 2.6.32-71, a kthread is created for each NIC interface, in
vhost_dev_set_owner():
dev->wq = create_singlethread_workqueue(vhost_name);
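
For reference, a combined sketch of the two approaches (kernel-style code based
only on the lines quoted above; the stripped-down struct, names and error
handling are my assumptions, not the exact RHEL6 source):

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/sched.h>

/* stripped-down device struct, only the field used here */
struct vhost_dev {
	struct workqueue_struct *wq;
	/* ... other vhost state ... */
};

/* 2.6.32-60 style: one global workqueue, created at module init and
 * shared by every vhost device, so the whole host has a single
 * [vhost] kthread. */
static struct workqueue_struct *vhost_workqueue;

static int __init vhost_init(void)
{
	vhost_workqueue = create_singlethread_workqueue("vhost");
	if (!vhost_workqueue)
		return -ENOMEM;
	return 0;
}

/* 2.6.32-71 style: one workqueue (and therefore one kthread) per
 * device, created when the owning process attaches; judging from the
 * ps output above the kthread is named vhost-<pid of the owner>, so
 * 5 VMs x 5 NICs show up as 25 vhost-* kthreads. */
static long vhost_dev_set_owner(struct vhost_dev *dev)
{
	char vhost_name[TASK_COMM_LEN];

	snprintf(vhost_name, sizeof(vhost_name), "vhost-%d", current->pid);
	dev->wq = create_singlethread_workqueue(vhost_name);
	if (!dev->wq)
		return -ENOMEM;
	return 0;
}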

Conclusion:
With the per-vhost kthread enabled, the system can achieve more throughput, but
when handling the same traffic load it uses more CPU.

In my application scenario, CPU resources are more important, and one kthread
is enough to handle the traffic load.

So I think we should add a parameter to control this:
for a CPU-bound system, the parameter disables the per-vhost kthread;
for an I/O-bound system, the parameter enables the per-vhost kthread.
The default value of the parameter is enabled.

If my opinion is right, I will provide a patch for this.
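
If it helps the discussion, here is a rough sketch of what such a module
parameter could look like (the parameter name per_vhost_kthread, its placement
in vhost_dev_set_owner() and the fallback to the global workqueue are my
assumptions, reusing the simplified definitions from the sketch above; this is
not the actual patch):

/* 1 = one worker kthread per vhost device (2.6.32-71 behaviour, better
 *     for I/O-bound hosts);
 * 0 = all devices share the single workqueue created in vhost_init()
 *     (2.6.32-60 behaviour, cheaper in CPU for CPU-bound hosts). */
static int per_vhost_kthread = 1;
module_param(per_vhost_kthread, int, 0444);
MODULE_PARM_DESC(per_vhost_kthread,
		 "Create one worker kthread per vhost device (default: 1)");

static long vhost_dev_set_owner(struct vhost_dev *dev)
{
	if (per_vhost_kthread) {
		char vhost_name[TASK_COMM_LEN];

		snprintf(vhost_name, sizeof(vhost_name), "vhost-%d",
			 current->pid);
		dev->wq = create_singlethread_workqueue(vhost_name);
		if (!dev->wq)
			return -ENOMEM;
	} else {
		/* reuse the global workqueue created at module init */
		dev->wq = vhost_workqueue;
	}
	return 0;
}

A CPU-bound host could then load the module with something like
'modprobe vhost_net per_vhost_kthread=0', while the default keeps the current
per-device behaviour.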
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
