Re: Performance test result between per-vhost kthread disable and enable

2010-12-09 Thread Michael S. Tsirkin
On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
 I tested the performance with the per-vhost kthread disabled and enabled.
 
 Test method:
 Send the same traffic load with the per-vhost kthread disabled and
 enabled, and compare the cpu usage of the host os.
 I run five VMs on KVM, each of them with five NICs.
 The vhost version with the per-vhost kthread disabled is rhel6
 beta 2 (2.6.32.60).
 The vhost version with the per-vhost kthread enabled is rhel6 (2.6.32-71).
 
 Test result:
 With the per-vhost kthread disabled, the cpu usage of the host os is 110%.
 With the per-vhost kthread enabled, the cpu usage of the host os is 130%.

Does it help if we schedule out the thread once we've passed
once over all vqs?

Something like this:

diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 1b0a20d..256e915 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -175,6 +175,7 @@ static int vhost_worker(void *data)
         struct vhost_dev *dev = data;
         struct vhost_work *work = NULL;
         unsigned uninitialized_var(seq);
+        int n = 0;
 
         use_mm(dev->mm);
 
@@ -206,9 +207,11 @@ static int vhost_worker(void *data)
                 if (work) {
                         __set_current_state(TASK_RUNNING);
                         work->fn(work);
-                } else
-                        schedule();
-
+                        if (likely(++n < dev->nvqs))
+                                continue;
+                }
+                schedule();
+                n = 0;
         }
         unuse_mm(dev->mm);
         return 0;


Re: Performance test result between per-vhost kthread disable and enable

2010-12-09 Thread Michael S. Tsirkin
On Thu, Dec 09, 2010 at 03:31:08PM +0200, Michael S. Tsirkin wrote:
 On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
  I tested the performance with the per-vhost kthread disabled and enabled.
  
  Test method:
  Send the same traffic load with the per-vhost kthread disabled and
  enabled, and compare the cpu usage of the host os.
  I run five VMs on KVM, each of them with five NICs.
  The vhost version with the per-vhost kthread disabled is rhel6
  beta 2 (2.6.32.60).
  The vhost version with the per-vhost kthread enabled is rhel6
  (2.6.32-71).
  
  Test result:
  With the per-vhost kthread disabled, the cpu usage of the host os is 110%.
  With the per-vhost kthread enabled, the cpu usage of the host os is 130%.
 
 Does it help if we schedule out the thread once we've passed
 once over all vqs?

Also, could you please check whether applying
"kvm: fast-path msi injection with irqfd"
makes any difference?

That relieves the pressure on the scheduler by
sending the interrupt directly from vhost without
involving yet another thread.
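
For reference, the irqfd path lets vhost kick the guest interrupt itself:
userspace creates an eventfd, registers it with KVM as an irqfd bound to the
MSI's GSI, and hands the same fd to vhost as the virtqueue's "call" fd, so
signalling the eventfd injects the interrupt without waking another thread.
A minimal userspace sketch, for illustration only (error handling omitted;
the function name, vq index and GSI are placeholders, and this is not the
code of the patch referred to above):

#include <string.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>
#include <linux/vhost.h>

/* Illustrative only: wire one eventfd both into KVM (as an irqfd for the
 * guest MSI) and into vhost (as the vring call fd), so the interrupt is
 * injected directly when vhost signals the eventfd. */
static void wire_irqfd(int vmfd, int vhostfd, unsigned int vq_index, unsigned int gsi)
{
        int efd = eventfd(0, EFD_NONBLOCK);

        struct kvm_irqfd irqfd;
        memset(&irqfd, 0, sizeof(irqfd));
        irqfd.fd = efd;        /* when this eventfd is signalled ...          */
        irqfd.gsi = gsi;       /* ... inject the interrupt routed to this GSI */
        ioctl(vmfd, KVM_IRQFD, &irqfd);

        struct vhost_vring_file call = { .index = vq_index, .fd = efd };
        ioctl(vhostfd, VHOST_SET_VRING_CALL, &call);  /* vhost signals efd on completion */
}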


Re: Performance test result between per-vhost kthread disable and enable

2010-11-24 Thread Michael S. Tsirkin
On Wed, Nov 24, 2010 at 02:49:26PM +0800, lidong chen wrote:
 After applying the patch and disabling CONFIG_SCHED_DEBUG, the result is worse:
 the cpu usage of the host os is 143%.

Interesting. What does perf top show?

-- 
MST


Re: Performance test result between per-vhost kthread disable and enable

2010-11-23 Thread Michael S. Tsirkin
On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
 I tested the performance with the per-vhost kthread disabled and enabled.
 
 Test method:
 Send the same traffic load with the per-vhost kthread disabled and
 enabled, and compare the cpu usage of the host os.
 I run five VMs on KVM, each of them with five NICs.
 The vhost version with the per-vhost kthread disabled is rhel6
 beta 2 (2.6.32.60).
 The vhost version with the per-vhost kthread enabled is rhel6 (2.6.32-71).

At this point, I'd suggest testing vhost-net on the upstream kernel,
not on rhel kernels. The change that introduced per-device threads is:
c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
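
For reference, that commit replaces the single shared vhost workqueue with one
kernel worker thread per vhost device (so one per virtio-net interface), named
after the owning qemu process, which is why ps shows groups of [vhost-<pid>]
threads below. A simplified sketch of the thread creation follows; the helper
name is made up, and the real code lives in vhost_dev_set_owner() in
drivers/vhost/vhost.c:

#include <linux/kthread.h>
#include <linux/err.h>
#include <linux/sched.h>

/* Simplified sketch of the per-device worker creation; not the literal
 * upstream code. */
static int vhost_attach_worker(struct vhost_dev *dev)
{
        struct task_struct *worker;

        /* one kthread per vhost device, named "vhost-<owner pid>" */
        worker = kthread_create(vhost_worker, dev, "vhost-%d", current->pid);
        if (IS_ERR(worker))
                return PTR_ERR(worker);

        dev->worker = worker;
        wake_up_process(worker);        /* start the vhost_worker() loop */
        return 0;
}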

 Test result:
 With the per-vhost kthread disabled, the cpu usage of the host os is 110%.
 With the per-vhost kthread enabled, the cpu usage of the host os is 130%.

Is CONFIG_SCHED_DEBUG set? We are stressing the scheduler a lot with
vhost-net.

 In 2.6.32.60, the whole system has only one vhost kthread.
 [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
 root       973     2  0 Nov22 ?        00:00:00 [vhost]
 
 In 2.6.32.71, the whole system has 25 vhost kthreads.
 [r...@kvm-4slot ~]# ps -ef | grep vhost-
 root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]
 
 root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]
 
 root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
 ...
 
 Code difference:
 In 2.6.32.60, in the function vhost_init, the kthread for vhost is created:
 vhost_workqueue = create_singlethread_workqueue("vhost");
 
 In 2.6.32.71, in the function vhost_dev_set_owner, a kthread is created for
 each nic interface:
 dev->wq = create_singlethread_workqueue(vhost_name);
 
 Conclusion:
 With the per-vhost kthread enabled, the system can achieve more throughput,
 but handling the same traffic load with the per-vhost kthread enabled wastes
 more cpu resource.
 
 In my application scenario, cpu resource is more important, and one
 kthread to deal with the traffic load is enough.
 
 So I think we should add a param to control this:
 for a CPU-bound system, the param disables the per-vhost kthread;
 for an I/O-bound system, it enables the per-vhost kthread.
 The default value of the param is enable.
 
 If my opinion is right, I will send a patch for this.
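
A knob along the lines proposed above could look like the following minimal
sketch; the parameter name, default and wiring are hypothetical and for
illustration only, since no such patch exists in this thread:

#include <linux/module.h>
#include <linux/moduleparam.h>

/* Hypothetical module parameter, illustration only. */
static bool per_vhost_thread = true;   /* default: one worker kthread per device */
module_param(per_vhost_thread, bool, 0444);
MODULE_PARM_DESC(per_vhost_thread,
                 "Use one worker kthread per vhost device instead of a single shared worker");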

Let's try to figure out what the issue is, first.

-- 
MST


Re: Performance test result between per-vhost kthread disable and enable

2010-11-23 Thread lidong chen
At this point, I'd suggest testing vhost-net on the upstream kernel,
not on rhel kernels. The change that introduced per-device threads is:
c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
I will try this tomorrow.

Is CONFIG_SCHED_DEBUG set?
yes. CONFIG_SCHED_DEBUG=y.

2010/11/23 Michael S. Tsirkin m...@redhat.com:
 On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
 I test the performance between per-vhost kthread disable and enable.

 Test method:
 Send the same traffic load between per-vhost kthread disable and
 enable, and compare the cpu rate of host os.
 I run five vm on kvm, each of them have five nic.
 the vhost version which per-vhost kthread disable we used is rhel6
 beta 2(2.6.32.60).
 the vhost version which per-vhost kthread enable we used is rhel6 
 (2.6.32-71).

 At this point, I'd suggest testing vhost-net on the upstream kernel,
 not on rhel kernels. The change that introduced per-device threads is:
 c23f3445e68e1db0e74099f264bc5ff5d55ebdeb

 Test result:
 with per-vhost kthread disable, the cpu rate of host os is 110%.
 with per-vhost kthread enable, the cpu rate of host os is 130%.

 Is CONFIG_SCHED_DEBUG set? We are stressing the scheduler a lot with
 vhost-net.

 In 2.6.32.60,the whole system only have a kthread.
 [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
 root       973     2  0 Nov22 ?        00:00:00 [vhost]

 In 2.6.32.71,the whole system have 25 kthread.
 [r...@kvm-4slot ~]# ps -ef | grep vhost-
 root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

 root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

 root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
 ...

 Code difference:
 In 2.6.32.60,in function vhost_init, create the kthread for vhost.
 vhost_workqueue = create_singlethread_workqueue("vhost");

 In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for
 each nic interface.
 dev->wq = create_singlethread_workqueue(vhost_name);

 Conclusion:
 with per-vhost kthread enable, the system can more throughput.
 but deal the same traffic load with per-vhost kthread enable, it waste
 more cpu resource.

 In my application scene, the cpu resource is more important, and one
 kthread for deal with traffic load is enough.

 So i think we should add a param to control this.
 for the CPU-bound system, this param disable per-vhost kthread.
 for the I/O-bound system, this param enable per-vhost kthread.
 the default value of this param is enable.

 If my opinion is right, i will give a patch for this.

 Let's try to figure out what the issue is, first.

 --
 MST



Re: Performance test result between per-vhost kthread disable and enable

2010-11-23 Thread Michael S. Tsirkin
On Tue, Nov 23, 2010 at 09:23:41PM +0800, lidong chen wrote:
 At this point, I'd suggest testing vhost-net on the upstream kernel,
 not on rhel kernels. The change that introduced per-device threads is:
 c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
 i will try this tomorrow.
 
 Is CONFIG_SCHED_DEBUG set?
 yes. CONFIG_SCHED_DEBUG=y.

Disable it. Either debug scheduler or perf-test it :)

 2010/11/23 Michael S. Tsirkin m...@redhat.com:
  On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:
  I test the performance between per-vhost kthread disable and enable.
 
  Test method:
  Send the same traffic load between per-vhost kthread disable and
  enable, and compare the cpu rate of host os.
  I run five vm on kvm, each of them have five nic.
  the vhost version which per-vhost kthread disable we used is rhel6
  beta 2(2.6.32.60).
  the vhost version which per-vhost kthread enable we used is rhel6 
  (2.6.32-71).
 
  At this point, I'd suggest testing vhost-net on the upstream kernel,
  not on rhel kernels. The change that introduced per-device threads is:
  c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
 
  Test result:
  with per-vhost kthread disable, the cpu rate of host os is 110%.
  with per-vhost kthread enable, the cpu rate of host os is 130%.
 
  Is CONFIG_SCHED_DEBUG set? We are stressing the scheduler a lot with
  vhost-net.
 
  In 2.6.32.60,the whole system only have a kthread.
  [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
  root       973     2  0 Nov22 ?        00:00:00 [vhost]
 
  In 2.6.32.71,the whole system have 25 kthread.
  [r...@kvm-4slot ~]# ps -ef | grep vhost-
  root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
  root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
  root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
  root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
  root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]
 
  root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
  root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
  root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
  root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
  root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]
 
  root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
  root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
  root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
  root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
  root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
  ...
 
  Code difference:
  In 2.6.32.60,in function vhost_init, create the kthread for vhost.
  vhost_workqueue = create_singlethread_workqueue("vhost");
 
  In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for
  each nic interface.
  dev->wq = create_singlethread_workqueue(vhost_name);
 
  Conclusion:
  with per-vhost kthread enable, the system can more throughput.
  but deal the same traffic load with per-vhost kthread enable, it waste
  more cpu resource.
 
  In my application scene, the cpu resource is more important, and one
  kthread for deal with traffic load is enough.
 
  So i think we should add a param to control this.
  for the CPU-bound system, this param disable per-vhost kthread.
  for the I/O-bound system, this param enable per-vhost kthread.
  the default value of this param is enable.
 
  If my opinion is right, i will give a patch for this.
 
  Let's try to figure out what the issue is, first.
 
  --
  MST
 


Re: Performance test result between per-vhost kthread disable and enable

2010-11-23 Thread Sridhar Samudrala

On 11/23/2010 5:41 AM, Michael S. Tsirkin wrote:

On Tue, Nov 23, 2010 at 09:23:41PM +0800, lidong chen wrote:

At this point, I'd suggest testing vhost-net on the upstream kernel,
not on rhel kernels. The change that introduced per-device threads is:
c23f3445e68e1db0e74099f264bc5ff5d55ebdeb
i will try this tomorrow.

Is CONFIG_SCHED_DEBUG set?
yes. CONFIG_SCHED_DEBUG=y.

Disable it. Either debug scheduler or perf-test it :)

Another debug option to disable, when using the old rhel6 kernels, is
CONFIG_WORKQUEUE_TRACER, if it is set.

-Sridhar


2010/11/23 Michael S. Tsirkin m...@redhat.com:

On Tue, Nov 23, 2010 at 10:13:43AM +0800, lidong chen wrote:

I test the performance between per-vhost kthread disable and enable.

Test method:
Send the same traffic load between per-vhost kthread disable and
enable, and compare the cpu rate of host os.
I run five vm on kvm, each of them have five nic.
the vhost version which per-vhost kthread disable we used is rhel6
beta 2(2.6.32.60).
the vhost version which per-vhost kthread enable we used is rhel6 (2.6.32-71).

At this point, I'd suggest testing vhost-net on the upstream kernel,
not on rhel kernels. The change that introduced per-device threads is:
c23f3445e68e1db0e74099f264bc5ff5d55ebdeb


Test result:
with per-vhost kthread disable, the cpu rate of host os is 110%.
with per-vhost kthread enable, the cpu rate of host os is 130%.

Is CONFIG_SCHED_DEBUG set? We are stressing the scheduler a lot with
vhost-net.


In 2.6.32.60,the whole system only have a kthread.
[r...@rhel6-kvm1 ~]# ps -ef | grep vhost
root   973 2  0 Nov22 ?00:00:00 [vhost]

In 2.6.32.71,the whole system have 25 kthread.
[r...@kvm-4slot ~]# ps -ef | grep vhost-
root 12896 2  0 10:26 ?00:00:00 [vhost-12842]
root 12897 2  0 10:26 ?00:00:00 [vhost-12842]
root 12898 2  0 10:26 ?00:00:00 [vhost-12842]
root 12899 2  0 10:26 ?00:00:00 [vhost-12842]
root 12900 2  0 10:26 ?00:00:00 [vhost-12842]

root 13022 2  0 10:26 ?00:00:00 [vhost-12981]
root 13023 2  0 10:26 ?00:00:00 [vhost-12981]
root 13024 2  0 10:26 ?00:00:00 [vhost-12981]
root 13025 2  0 10:26 ?00:00:00 [vhost-12981]
root 13026 2  0 10:26 ?00:00:00 [vhost-12981]

root 13146 2  0 10:26 ?00:00:00 [vhost-13088]
root 13147 2  0 10:26 ?00:00:00 [vhost-13088]
root 13148 2  0 10:26 ?00:00:00 [vhost-13088]
root 13149 2  0 10:26 ?00:00:00 [vhost-13088]
root 13150 2  0 10:26 ?00:00:00 [vhost-13088]
...

Code difference:
In 2.6.32.60,in function vhost_init, create the kthread for vhost.
vhost_workqueue = create_singlethread_workqueue("vhost");

In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for
each nic interface.
dev->wq = create_singlethread_workqueue(vhost_name);

Conclusion:
with per-vhost kthread enable, the system can more throughput.
but deal the same traffic load with per-vhost kthread enable, it waste
more cpu resource.

In my application scene, the cpu resource is more important, and one
kthread for deal with traffic load is enough.

So i think we should add a param to control this.
for the CPU-bound system, this param disable per-vhost kthread.
for the I/O-bound system, this param enable per-vhost kthread.
the default value of this param is enable.

If my opinion is right, i will give a patch for this.

Let's try to figure out what the issue is, first.

--
MST






RE: Performance test result between per-vhost kthread disable and enable

2010-11-22 Thread Huang, Zhiteng
Hi Lidong,

What do you mean by 'send the same traffic load between...' ? 

See if my understanding is correct:
You have two identical physical machines (CPU/Memory/NIC...); one (A) runs RHEL6
Beta 2 (2.6.32-60) and the other (B) runs RHEL6 (2.6.32-71).
Each machine booted 5 identical VMs, and the VMs on machine A (pool A) paired
up with the VMs on machine B (pool B).  Sending packets between the two VM pools
yielded a 20% utilization difference.

Did you test bi-directional traffic, i.e. first pool A sends and pool B receives,
then vice versa?

Regards,

HUANG, Zhiteng



-Original Message-
From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf Of 
lidong chen
Sent: Tuesday, November 23, 2010 10:14 AM
To: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
kvm@vger.kernel.org
Subject: Performance test result between per-vhost kthread disable and enable

I tested the performance with the per-vhost kthread disabled and enabled.

Test method:
Send the same traffic load with the per-vhost kthread disabled and enabled, and
compare the cpu usage of the host os.
I run five VMs on KVM, each of them with five NICs.
The vhost version with the per-vhost kthread disabled is rhel6 beta 2 (2.6.32.60).
The vhost version with the per-vhost kthread enabled is rhel6 (2.6.32-71).

Test result:
With the per-vhost kthread disabled, the cpu usage of the host os is 110%.
With the per-vhost kthread enabled, the cpu usage of the host os is 130%.

In 2.6.32.60, the whole system has only one vhost kthread.
[r...@rhel6-kvm1 ~]# ps -ef | grep vhost
root       973     2  0 Nov22 ?        00:00:00 [vhost]

In 2.6.32.71, the whole system has 25 vhost kthreads.
[r...@kvm-4slot ~]# ps -ef | grep vhost-
root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
...

Code difference:
In 2.6.32.60, in the function vhost_init, the kthread for vhost is created:
vhost_workqueue = create_singlethread_workqueue("vhost");

In 2.6.32.71, in the function vhost_dev_set_owner, a kthread is created for each nic
interface:
dev->wq = create_singlethread_workqueue(vhost_name);

Conclusion:
With the per-vhost kthread enabled, the system can achieve more throughput,
but handling the same traffic load with the per-vhost kthread enabled wastes more cpu
resource.

In my application scenario, cpu resource is more important, and one kthread
to deal with the traffic load is enough.

So I think we should add a param to control this:
for a CPU-bound system, the param disables the per-vhost kthread;
for an I/O-bound system, it enables the per-vhost kthread.
The default value of the param is enable.

If my opinion is right, I will send a patch for this.


Re: Performance test result between per-vhost kthread disable and enable

2010-11-22 Thread lidong chen
I used a special tool that can send and receive packets in parallel.
I set the tool to generate the same traffic load,
then I used the tool to test the different versions of kvm.



2010/11/23 Huang, Zhiteng zhiteng.hu...@intel.com:
 Hi Lidong,

 What do you mean by 'send the same traffic load between...' ?

 See if my understanding is correct:
 You have two identical physical machines (CPU/Memory/NIC...), one(A) runs 
 RHEL6 Beta2(2.6.32-60) and the other one (B) runs RHEL6 (2.6.32-71).
 Each machine booted 5 identical VMs and then VMs on machine A (pool A) paired 
 up with VMs on machine B (pool B).  Sending packets between two VM pools 
 yielded 20% utilization difference.

 Did you test bi-direction traffic, i.e. first pool A sends and pool B 
 receives then vice versa?

 Regards,

 HUANG, Zhiteng



 -Original Message-
 From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On Behalf 
 Of lidong chen
 Sent: Tuesday, November 23, 2010 10:14 AM
 To: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
 kvm@vger.kernel.org
 Subject: Performance test result between per-vhost kthread disable and enable

 I test the performance between per-vhost kthread disable and enable.

 Test method:
 Send the same traffic load between per-vhost kthread disable and enable, and 
 compare the cpu rate of host os.
 I run five vm on kvm, each of them have five nic.
 the vhost version which per-vhost kthread disable we used is rhel6 beta 
 2(2.6.32.60).
 the vhost version which per-vhost kthread enable we used is rhel6 (2.6.32-71).

 Test result:
 with per-vhost kthread disable, the cpu rate of host os is 110%.
 with per-vhost kthread enable, the cpu rate of host os is 130%.

 In 2.6.32.60,the whole system only have a kthread.
 [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
 root       973     2  0 Nov22 ?        00:00:00 [vhost]

 In 2.6.32.71,the whole system have 25 kthread.
 [r...@kvm-4slot ~]# ps -ef | grep vhost-
 root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

 root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

 root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
 ...

 Code difference:
 In 2.6.32.60,in function vhost_init, create the kthread for vhost.
 vhost_workqueue = create_singlethread_workqueue("vhost");

 In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for each nic 
 interface.
 dev->wq = create_singlethread_workqueue(vhost_name);

 Conclusion:
 with per-vhost kthread enable, the system can more throughput.
 but deal the same traffic load with per-vhost kthread enable, it waste more 
 cpu resource.

 In my application scene, the cpu resource is more important, and one kthread 
 for deal with traffic load is enough.

 So i think we should add a param to control this.
 for the CPU-bound system, this param disable per-vhost kthread.
 for the I/O-bound system, this param enable per-vhost kthread.
 the default value of this param is enable.

 If my opinion is right, i will give a patch for this.


RE: Performance test result between per-vhost kthread disable and enable

2010-11-22 Thread Huang, Zhiteng
By "same traffic load", do you mean the same number of packets, traffic with the same
bandwidth, or something else?

Regards,

HUANG, Zhiteng



-Original Message-
From: lidong chen [mailto:chen.lidong.ker...@gmail.com] 
Sent: Tuesday, November 23, 2010 2:53 PM
To: Huang, Zhiteng
Cc: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
kvm@vger.kernel.org
Subject: Re: Performance test result between per-vhost kthread disable and 
enable

I used a special tool, this tool can send and receive packets parallelly.
I set the tool to use the same traffic load.
then i use the tool to test different version of kvm.



2010/11/23 Huang, Zhiteng zhiteng.hu...@intel.com:
 Hi Lidong,

 What do you mean by 'send the same traffic load between...' ?

 See if my understanding is correct:
 You have two identical physical machines (CPU/Memory/NIC...), one(A) runs 
 RHEL6 Beta2(2.6.32-60) and the other one (B) runs RHEL6 (2.6.32-71).
 Each machine booted 5 identical VMs and then VMs on machine A (pool A) paired 
 up with VMs on machine B (pool B).  Sending packets between two VM pools 
 yielded 20% utilization difference.

 Did you test bi-direction traffic, i.e. first pool A sends and pool B 
 receives then vice versa?

 Regards,

 HUANG, Zhiteng



 -Original Message-
 From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On 
 Behalf Of lidong chen
 Sent: Tuesday, November 23, 2010 10:14 AM
 To: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
 kvm@vger.kernel.org
 Subject: Performance test result between per-vhost kthread disable and 
 enable

 I test the performance between per-vhost kthread disable and enable.

 Test method:
 Send the same traffic load between per-vhost kthread disable and enable, and 
 compare the cpu rate of host os.
 I run five vm on kvm, each of them have five nic.
 the vhost version which per-vhost kthread disable we used is rhel6 beta 
 2(2.6.32.60).
 the vhost version which per-vhost kthread enable we used is rhel6 (2.6.32-71).

 Test result:
 with per-vhost kthread disable, the cpu rate of host os is 110%.
 with per-vhost kthread enable, the cpu rate of host os is 130%.

 In 2.6.32.60,the whole system only have a kthread.
 [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
 root       973     2  0 Nov22 ?        00:00:00 [vhost]

 In 2.6.32.71,the whole system have 25 kthread.
 [r...@kvm-4slot ~]# ps -ef | grep vhost-
 root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

 root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

 root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
 ...

 Code difference:
 In 2.6.32.60,in function vhost_init, create the kthread for vhost.
 vhost_workqueue = create_singlethread_workqueue("vhost");

 In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for each nic 
 interface.
 dev->wq = create_singlethread_workqueue(vhost_name);

 Conclusion:
 with per-vhost kthread enable, the system can more throughput.
 but deal the same traffic load with per-vhost kthread enable, it waste more 
 cpu resource.

 In my application scene, the cpu resource is more important, and one kthread 
 for deal with traffic load is enough.

 So i think we should add a param to control this.
 for the CPU-bound system, this param disable per-vhost kthread.
 for the I/O-bound system, this param enable per-vhost kthread.
 the default value of this param is enable.

 If my opinion is right, i will give a patch for this.


Re: Performance test result between per-vhost kthread disable and enable

2010-11-22 Thread lidong chen
Traffic with the same bandwidth;
for example, both at 1000 packets per second.


2010/11/23 Huang, Zhiteng zhiteng.hu...@intel.com:
 By same traffic load, do you mean same amount of packets or traffic with same 
 bandwidth or anything else?

 Regards,

 HUANG, Zhiteng



 -Original Message-
 From: lidong chen [mailto:chen.lidong.ker...@gmail.com]
 Sent: Tuesday, November 23, 2010 2:53 PM
 To: Huang, Zhiteng
 Cc: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity; 
 kvm@vger.kernel.org
 Subject: Re: Performance test result between per-vhost kthread disable and 
 enable

 I used a special tool, this tool can send and receive packets parallelly.
 I set the tool to use the same traffic load.
 then i use the tool to test different version of kvm.



 2010/11/23 Huang, Zhiteng zhiteng.hu...@intel.com:
 Hi Lidong,

 What do you mean by 'send the same traffic load between...' ?

 See if my understanding is correct:
 You have two identical physical machines (CPU/Memory/NIC...), one(A) runs 
 RHEL6 Beta2(2.6.32-60) and the other one (B) runs RHEL6 (2.6.32-71).
 Each machine booted 5 identical VMs and then VMs on machine A (pool A) 
 paired up with VMs on machine B (pool B).  Sending packets between two VM 
 pools yielded 20% utilization difference.

 Did you test bi-direction traffic, i.e. first pool A sends and pool B 
 receives then vice versa?

 Regards,

 HUANG, Zhiteng



 -Original Message-
 From: kvm-ow...@vger.kernel.org [mailto:kvm-ow...@vger.kernel.org] On
 Behalf Of lidong chen
 Sent: Tuesday, November 23, 2010 10:14 AM
 To: t...@kernel.org; s...@us.ibm.com; m...@redhat.com; Avi Kivity;
 kvm@vger.kernel.org
 Subject: Performance test result between per-vhost kthread disable and
 enable

 I test the performance between per-vhost kthread disable and enable.

 Test method:
 Send the same traffic load between per-vhost kthread disable and enable, and 
 compare the cpu rate of host os.
 I run five vm on kvm, each of them have five nic.
 the vhost version which per-vhost kthread disable we used is rhel6 beta 
 2(2.6.32.60).
 the vhost version which per-vhost kthread enable we used is rhel6 
 (2.6.32-71).

 Test result:
 with per-vhost kthread disable, the cpu rate of host os is 110%.
 with per-vhost kthread enable, the cpu rate of host os is 130%.

 In 2.6.32.60,the whole system only have a kthread.
 [r...@rhel6-kvm1 ~]# ps -ef | grep vhost
 root       973     2  0 Nov22 ?        00:00:00 [vhost]

 In 2.6.32.71,the whole system have 25 kthread.
 [r...@kvm-4slot ~]# ps -ef | grep vhost-
 root     12896     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12897     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12898     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12899     2  0 10:26 ?        00:00:00 [vhost-12842]
 root     12900     2  0 10:26 ?        00:00:00 [vhost-12842]

 root     13022     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13023     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13024     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13025     2  0 10:26 ?        00:00:00 [vhost-12981]
 root     13026     2  0 10:26 ?        00:00:00 [vhost-12981]

 root     13146     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13147     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13148     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13149     2  0 10:26 ?        00:00:00 [vhost-13088]
 root     13150     2  0 10:26 ?        00:00:00 [vhost-13088]
 ...

 Code difference:
 In 2.6.32.60,in function vhost_init, create the kthread for vhost.
 vhost_workqueue = create_singlethread_workqueue("vhost");

 In 2.6.32.71,in function vhost_dev_set_owner, create the kthread for each 
 nic interface.
 dev->wq = create_singlethread_workqueue(vhost_name);

 Conclusion:
 with per-vhost kthread enable, the system can more throughput.
 but deal the same traffic load with per-vhost kthread enable, it waste more 
 cpu resource.

 In my application scene, the cpu resource is more important, and one kthread 
 for deal with traffic load is enough.

 So i think we should add a param to control this.
 for the CPU-bound system, this param disable per-vhost kthread.
 for the I/O-bound system, this param enable per-vhost kthread.
 the default value of this param is enable.

 If my opinion is right, i will give a patch for this.