Fw: Benchmarking for vhost polling patch

2015-01-01 Thread Razya Ladelsky
Hi Michael,
Just a follow-up on the polling patch numbers.
Please let me know if you find these numbers convincing enough to continue
with submitting this patch.
Otherwise, we'll submit this patch as part of the larger Elvis patch set
rather than independently.
Thank you,
Razya 

- Forwarded by Razya Ladelsky/Haifa/IBM on 01/01/2015 09:37 AM -

From:   Razya Ladelsky/Haifa/IBM@IBMIL
To: m...@redhat.com
Cc: 
Date:   25/11/2014 02:43 PM
Subject: Re: Benchmarking for vhost polling patch
Sent by: kvm-ow...@vger.kernel.org



Hi Michael,

> Hi Razya,
> On the netperf benchmark, it looks like polling=10 gives a modest but
> measurable gain.  So from that perspective it might be worth it if it's
> not too much code, though we'll need to spend more time checking the
> macro effect - we barely moved the needle on the macro benchmark and
> that is suspicious.

I ran memcached with various values for the key & value size arguments, and
saw a bigger impact from polling than when I used the default values.
Here are the numbers:

key=250, value=2048

             TPS      net    vhost  vm     TPS/CPU  TPS/CPU
                      rate   util%  util%           change

polling=0    101540   103.0   46    100    695.47
polling=5    136747   123.0   83    100    747.25   0.074440609
polling=7    140722   125.7   84    100    764.79   0.099663658
polling=10   141719   126.3   87    100    757.85   0.089688003
polling=15   142430   127.1   90    100    749.63   0.077863015
polling=25   146347   128.7   95    100    750.49   0.079107993
polling=50   150882   131.1  100    100    754.41   0.084733701
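
To make the derived columns explicit (assuming they mean what the headers
suggest): TPS/CPU is TPS divided by the total CPU consumed, i.e. the sum of
the vhost and vm utilization columns, and change is the improvement relative
to the polling=0 baseline. The figures match up to rounding:

    TPS/CPU:  101540 / (46 + 100)  = 695.47   (polling=0)
              136747 / (83 + 100)  = 747.25   (polling=5)
    change:   747.25 / 695.47 - 1  = 0.0744   (polling=5)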

Macro benchmarks are less I/O intensive than the micro benchmark, which is
why we expect polling to have less impact than it does with netperf.
However, as shown above, we still achieved a 10% TPS/CPU improvement with
the polling patch.
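
For reference, the mechanism being benchmarked is the vhost worker
busy-polling its virtqueues for a bounded interval instead of going idle
and waiting for a guest kick; the polling=N values above set that budget
(in whatever units the patch's parameter uses). A minimal sketch of the
idea, with hypothetical helpers marked in the comments (the real patch
hooks into the vhost work loop in drivers/vhost/vhost.c):

#include <linux/ktime.h>
#include "vhost.h"

/* Busy-poll one virtqueue for up to poll_us microseconds (sketch only). */
static void vhost_poll_vq(struct vhost_virtqueue *vq, unsigned int poll_us)
{
	ktime_t deadline = ktime_add_us(ktime_get(), poll_us);

	vhost_disable_notify(vq->dev, vq);	/* suppress guest kicks */
	for (;;) {
		if (vq_has_new_buffers(vq)) {	/* hypothetical helper */
			handle_vq_work(vq);	/* hypothetical helper */
			/* new work arrived: extend the polling window */
			deadline = ktime_add_us(ktime_get(), poll_us);
		} else if (ktime_after(ktime_get(), deadline)) {
			break;			/* budget spent, go idle */
		}
		cpu_relax();
	}
	vhost_enable_notify(vq->dev, vq);	/* resume kick-based wakeups */
}

A busy queue keeps refreshing the deadline, so the worker keeps spinning
(hence 100% vhost util at polling=50 above), while an idle queue falls back
to notifications after at most poll_us.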

> Is there a chance you are actually trading latency for throughput?
> do you observe any effect on latency?

No, we did not observe any effect on latency.

> How about trying some other benchmark, e.g. NFS?
> 

I tried, but NFS didn't produce enough I/O (vhost utilization was at most 15%).

> 
> Also, I am wondering:
> 
> since vhost thread is polling in kernel anyway, shouldn't
> we try and poll the host NIC?
> that would likely reduce at least the latency significantly,
> won't it?
> 

Yes, it could be a great addition at some point, but it needs a thorough
investigation. In any case, it's not part of this patch...
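
For reference, not something this patch touches: the host kernel does have
opt-in busy polling of the NIC on the socket receive path, via the
net.core.busy_poll / net.core.busy_read sysctls (and the SO_BUSY_POLL
socket option), e.g.:

    # poll the device queue for up to 50us instead of sleeping
    sysctl -w net.core.busy_read=50
    sysctl -w net.core.busy_poll=50

How that composes with vhost-side polling is part of the investigation
that would be needed.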

Thanks,
Razya



[PULL] vhost: cleanups and fixes

2015-01-01 Thread Michael S. Tsirkin
The following changes since commit b7392d2247cfe6771f95d256374f1a8e6a6f48d6:

  Linux 3.19-rc2 (2014-12-28 16:49:37 -0800)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost.git tags/for_linus

for you to fetch changes up to 5d9a07b0de512b77bf28d2401e5fe3351f00a240:

  vhost: relax used address alignment (2014-12-29 10:55:06 +0200)


vhost: virtio 1.0 bugfix

There's a single functional change here, fixing a vhost bug where vhost
initialization fails because the used ring alignment check is too strict.
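
Roughly, the relaxation has this shape (a reconstruction from the changelog,
not the literal diff): the used ring address was previously required to be
aligned to the size of a used-ring element, while virtio 1.0 mandates only
4-byte alignment, now spelled out by the constants the first patch adds to
include/uapi/linux/virtio_ring.h:

/* Alignment constants per the virtio 1.0 spec (uapi virtio_ring.h): */
#define VRING_AVAIL_ALIGN_SIZE	2
#define VRING_USED_ALIGN_SIZE	4
#define VRING_DESC_ALIGN_SIZE	16

/* Before (reconstruction): 8-byte alignment, stricter than the spec */
if (a.used_user_addr & (sizeof(struct vring_used_elem) - 1))
	return -EINVAL;

/* After (reconstruction): only the spec-mandated alignment is enforced */
if (a.used_user_addr & (VRING_USED_ALIGN_SIZE - 1))
	return -EINVAL;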

Cc: Rusty Russell 
Signed-off-by: Michael S. Tsirkin 


Michael S. Tsirkin (2):
  virtio_ring: document alignment requirements
  vhost: relax used address alignment

 include/uapi/linux/virtio_ring.h |  7 +++
 drivers/vhost/vhost.c            | 10 +++---
 2 files changed, 14 insertions(+), 3 deletions(-)