lat_rpc performance issue in kvm?

2014-05-03 Thread Xuekun Hu
Hi, All

I'm using lat_rpc (one of the workloads in LMbench) to measure the
inter-process communication latency between two processes (a
client/server program). In a Linux guest on KVM, if I bind the client
and server apps to separate cores, the latency is much worse than when
I bind them to the same core. The number of events causing VM exits is
roughly the same in the two cases, which suggests the performance drop
is not caused by interaction with the VMM. On the host, by contrast,
the latency difference between the two cases is small.
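
For reference, one way to compare the exit counts on the host (a sketch,
not necessarily the exact commands I ran, and assuming a perf build that
supports "perf kvm stat"; <qemu-pid> is a placeholder for the guest's
qemu process):

  # record guest exits for ~10s, then summarize them by exit reason
  perf kvm stat record -p <qemu-pid> sleep 10
  perf kvm stat report --event=vmexit
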
I used the isolcpus boot option for both host and guest, and pinned each
vCPU to its own pCPU, with all the pCPUs belonging to the same socket.
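
Roughly, the pinning looks like this (a sketch; the domain name and CPU
numbers are just examples, not my exact values):

  # host (and guest) kernel cmdline: isolcpus=1,2
  virsh vcpupin guest1 0 1    # vCPU0 -> pCPU1
  virsh vcpupin guest1 1 2    # vCPU1 -> pCPU2, same socket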

The data is listed below. Does anyone have any idea why?

LMbench server: taskset -c 2 ./lat_rpc -s localhost

                                                               host    vm
taskset -c 2 ./lat_rpc -p tcp localhost (bound to same core)   19ms    18ms
taskset -c 1 ./lat_rpc -p tcp localhost (bound to diff core)   21ms    48ms

The system is an Intel Sandy Bridge processor running the
3.11.10-301.fc20.x86_64 Linux kernel.

Any suggestions or comments would be really appreciated.

Thx, Xuekun





Re: virtio + vhost-net performance issue - preadv ?

2012-12-07 Thread David Cruz
So far: I gave this another try.

After correcting permissions...

When you create a VM (using qemu-kvm 1.1 or 1.2, with a modern
libvirtd) you get this:

 qemu-kvm: virtio_pci_set_host_notifier_internal: unable to init event notifier:
vhost VQ 0 notifier binding failed: 38
qemu-kvm: unable to start vhost net: 38: falling back on userspace virtio


This seems related to ioeventfd, which is present in the Red Hat 6.1 kernel but not in Red Hat 5.x.
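
For reference, error 38 is ENOSYS ("function not implemented"), which fits
the host kernel simply lacking the needed support. A quick way to confirm
the errno name, assuming kernel headers are installed:

  grep -w ENOSYS /usr/include/asm-generic/errno.h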

Using the elrepo kernel with vhost_net module for these tests.

So even if you change the network driver to vhost in the XML, it
falls back to userspace qemu.


Setting ioeventfd=off in the network XML block has no effect either.

And, that's all for these tests.

David

2012/11/14 Ben Clay rbc...@ncsu.edu:
 I have a working copy of libvirt 0.10.2 + qemu 1.2 installed on a vanilla
 up-to-date (2.6.32-279.9.1) CentOS 6 host, and get very good VM - VM
 network performance (both running on the same host) using virtio.  I have
 cgroups set to cap the VMs at 10Gbps and iperf shows I'm getting exactly
 10Gbps.

 I copied these VMs to a CentOS 5 host and installed libvirt 1.0 + qemu 1.2.
 However, the best performance I can get in between the VMs (again running on
 the same host) is ~2Gbps.  In both cases, this is over a bridged interface
 with static IPs assigned to each VM.  I've also tried virtual networking
 with NAT or routing, yielding the same results.

 I figured it was due to vhost-net missing on the older CentOS 5 kernel, so I
 installed 2.6.39-4.2 from ELRepo and got the /dev/vhost-net device and vhost
 processes associated with each VM:

 ]$ lsmod | grep vhost
 vhost_net  28446  2
 tun23888  7 vhost_net

 ]$ ps aux | grep vhost-
 root  9628  0.0  0.0  0 0 ?S17:57   0:00
 [vhost-9626]
 root  9671  0.0  0.0  0 0 ?S17:57   0:00
 [vhost-9670]

 ]$ ls /dev/vhost-net -al
 crw--- 1 root root 10, 58 Nov 13 15:19 /dev/vhost-net

 After installing the new kernel, I also tried rebuilding libvirt and qemu,
 to no avail.  I also disabled cgroups, just in case it was getting in the
 way, as well as iptables.  I can see the virtio_net module loaded inside the
 guest, and using virtio raises my performance from 400Mbps to 2Gbps, so it
 does make some improvement.

 The only differences between the two physical hosts that I can find are:

 - qemu on the CentOS 5 host builds without preadv support - would this make
 such a huge performance difference?  CentOS5 only comes with an old version
 of glibc, which is missing preadv
 - qemu on the CentOS 5 host builds without PIE
 - libvirt 1.0 was required on the CentOS 5 host, since 0.10.2 had a build
 bug. This shouldn't matter I don't think.
 - I haven't tried rebuilding the VMs from scratch on the CentOS5 host, which
 I guess is worth a try.

 The qemu process is being started with virtio + vhost:

 /usr/bin/qemu-system-x86_64 -name vmname -S -M pc-1.2 -enable-kvm -m 4096
 -smp 8,sockets=8,cores=1,threads=1 -uuid 212915ed-a34a-4d6d-68f5-2216083a7693
 -no-user-config -nodefaults
 -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 -drive file=/mnt/vmname/disk.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=16,id=hostnet0,vhost=on,vhostfd=18
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,bus=pci.0,addr=0x3
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -vga cirrus
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

 The relevant part of my libvirt config, of which I've tried omitting the
 target, alias and address elements with no difference in performance:

 <interface type='bridge'>
   <mac address='00:11:22:33:44:55'/>
   <source bridge='br0'/>
   <target dev='vnet0'/>
   <model type='virtio'/>
   <alias name='net0'/>
   <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
 </interface>

 Is there something else which could be getting in the way here?

 Thanks!

 Ben Clay
 rbc...@ncsu.edu





Re: Performance issue

2012-11-29 Thread Vadim Rozenfeld
On Wednesday, November 28, 2012 09:09:29 PM George-Cristian Bîrzan wrote:
 On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com 
wrote:
  On Tuesday, November 27, 2012 11:13:12 PM George-Cristian Bîrzan wrote:
  On Tue, Nov 27, 2012 at 10:38 PM, Vadim Rozenfeld vroze...@redhat.com
  
  wrote:
   I have some code which do both reference time and invariant TSC but it
   will not work after migration. I will send it later today.
  
  Do you mean migrating guests? This is not an issue for us.
  
  OK, but don't say I didn't warn you :)
  
  There are two patches, one for kvm and another one for qemu.
  you will probably need to rebase them.
  Add hv_tsc cpu parameter to activate this feature.
  you will probably need to deactivate hpet by adding -no-hpet
  parameter as well.
 
 I've also added +hv_relaxed since then, but this is the command I'm

I would suggest activating relaxed timing for all W2K8R2/Win7 guests.

 using now and there's no change:
 
 /usr/bin/qemu-kvm -name b691546e-79f8-49c6-a293-81067503a6ad -S -M pc-1.2
 -enable-kvm -m 16384 -smp 9,sockets=1,cores=9,threads=1
 -uuid b691546e-79f8-49c6-a293-81067503a6ad -no-user-config -nodefaults
 -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/b691546e-79f8-49c6-a293-81067503a6ad.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-hpet -no-shutdown
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 -drive file=/var/lib/libvirt/images/dis-magnetics-2-223101/d8b233c6-8424-4de9-ae3c-7c9a60288514,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback,aio=native
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2e:fb:a2:36:be,bus=pci.0,addr=0x3
 -netdev tap,fd=40,id=hostnet1,vhost=on,vhostfd=41
 -device virtio-net-pci,netdev=hostnet1,id=net1,mac=22:94:44:5a:cb:24,bus=pci.0,addr=0x4
 -vnc 127.0.0.1:0,password -vga cirrus
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -cpu host,hv_tsc
 
 I compiled qemu-1.2.0-24 after applying your patch, used the head for
 KVM, and I see no difference. I've tried setting windows'
 useplatformclock on and off, no change either.
 
 
 Other than that, was looking into a profiling trace of the software
 running and a lot of time (60%?) is spent calling two functions from
 hal.dll, HalpGetPmTimerSleepModePerfCounter when I disable HPET, and
 HalpHPETProgramRolloverTimer which do point at something related to
 the timers.
 
It means that hyper-v time stamp source was not activated.
 Any other thing I can try?
 
 
 --
 George-Cristian Bîrzan


Re: Performance issue

2012-11-29 Thread George-Cristian Bîrzan
On Thu, Nov 29, 2012 at 1:56 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
 I've also added +hv_relaxed since then, but this is the command I'm

 I would suggest activating relaxed timing for all W2K8R2/Win7 guests.

Is there any place I can read up on the downsides of this for Linux,
or is it Just Better?

 Other than that, was looking into a profiling trace of the software
 running and a lot of time (60%?) is spent calling two functions from
 hal.dll, HalpGetPmTimerSleepModePerfCounter when I disable HPET, and
 HalpHPETProgramRolloverTimer which do point at something related to
 the timers.

 It means that hyper-v time stamp source was not activated.

I recompiled the whole kernel with your patch, and while I cannot
test at 70Mbps right now, a 20Mbps test stream seems to do better. Also,
I no longer see either of those functions, which used to account for
~60% of the time spent by the program. I'm waiting for the customer to
come back and start the 'real' stream, but in my tests the time spent
in hal.dll is now an order of magnitude smaller.

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-29 Thread Gleb Natapov
On Thu, Nov 29, 2012 at 03:45:52PM +0200, George-Cristian Bîrzan wrote:
 On Thu, Nov 29, 2012 at 1:56 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
  I've also added +hv_relaxed since then, but this is the command I'm
 
  I would suggest activating relaxed timing for all W2K8R2/Win7 guests.
 
 Is there any place I can read up on the downsides of this for Linux,
 or is Just Better?
 
You shouldn't use the Hyper-V flags for Linux guests. In theory Linux should
just ignore them; in practice there may be bugs that prevent Linux
from detecting that it runs as a guest, which would disable those optimizations.
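
For example, a quick way to see whether a Linux guest still detected that
it runs under KVM (a sketch):

  dmesg | grep -i kvm-clock
  lscpu | grep -i hypervisor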

  Other than that, was looking into a profiling trace of the software
  running and a lot of time (60%?) is spent calling two functions from
  hal.dll, HalpGetPmTimerSleepModePerfCounter when I disable HPET, and
  HalpHPETProgramRolloverTimer which do point at something related to
  the timers.
 
  It means that hyper-v time stamp source was not activated.
 
 I recompiled the whole kernel, with your patch, and while I cannot
 check at 70Mbps now, a test stream of 20 seems to do better. Also, now
 I don't see any of those functions, which used to account ~60% of the
 time spent by the program. I'm waiting for the customer to come back
 and start the 'real' stream, but from my tests, time spent in hal.dll
 is now an order of magnitude smaller.
 
 --
 George-Cristian Bîrzan

--
Gleb.


Re: Performance issue

2012-11-29 Thread Vadim Rozenfeld
On Thursday, November 29, 2012 03:56:10 PM Gleb Natapov wrote:
 On Thu, Nov 29, 2012 at 03:45:52PM +0200, George-Cristian Bîrzan wrote:
  On Thu, Nov 29, 2012 at 1:56 PM, Vadim Rozenfeld vroze...@redhat.com 
wrote:
   I've also added +hv_relaxed since then, but this is the command I'm
   
   I would suggest activating relaxed timing for all W2K8R2/Win7 guests.
  
  Is there any place I can read up on the downsides of this for Linux,
  or is Just Better?
 
 You shouldn't use hyper-v flags for Linux guests. In theory Linux should
 just ignore them, in practice there may be bugs that will prevent Linux
 from detecting that it runs as a guest and disable optimizations.
 
As Gleb said, the Hyper-V flags are relevant to Windows guests only.
IIRC spinlocks and vapic should work for Vista and higher; relaxed timing and
the partition reference time work for Win7/W2K8R2.
   Other than that, was looking into a profiling trace of the software
   
   running and a lot of time (60%?) is spent calling two functions from
   hal.dll, HalpGetPmTimerSleepModePerfCounter when I disable HPET, and
   HalpHPETProgramRolloverTimer which do point at something related to
   the timers.
   
   It means that hyper-v time stamp source was not activated.
  
  I recompiled the whole kernel, with your patch, and while I cannot
  check at 70Mbps now, a test stream of 20 seems to do better. Also, now
  I don't see any of those functions, which used to account ~60% of the
  time spent by the program. I'm waiting for the customer to come back
  and start the 'real' stream, but from my tests, time spent in hal.dll
  is now an order of magnitude smaller.
  
  --
  George-Cristian Bîrzan
 
 --
   Gleb.


Re: Performance issue

2012-11-28 Thread Vadim Rozenfeld
On Tuesday, November 27, 2012 11:13:12 PM George-Cristian Bîrzan wrote:
 On Tue, Nov 27, 2012 at 10:38 PM, Vadim Rozenfeld vroze...@redhat.com 
wrote:
  I have some code which do both reference time and invariant TSC but it
  will not work after migration. I will send it later today.
 
 Do you mean migrating guests? This is not an issue for us.
OK, but don't say I didn't warn you :)

There are two patches, one for kvm and another one for qemu;
you will probably need to rebase them.
Add the hv_tsc cpu parameter to activate this feature.
You will probably need to deactivate HPET by adding the -no-hpet
parameter as well.
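
In other words, something like this on the qemu command line (a sketch of
just the relevant options):

  qemu-kvm ... -cpu host,hv_tsc -no-hpet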

best regards,
Vadim.

 
 Also, it would be much appreciated!
 
 --
 George-Cristian Bîrzan
diff --git a/arch/x86/include/asm/hyperv.h b/arch/x86/include/asm/hyperv.h
index b80420b..9c5ffef 100644
--- a/arch/x86/include/asm/hyperv.h
+++ b/arch/x86/include/asm/hyperv.h
@@ -136,6 +136,9 @@
 /* MSR used to read the per-partition time reference counter */
 #define HV_X64_MSR_TIME_REF_COUNT		0x40000020
 
+/* A partition's reference time stamp counter (TSC) page */
+#define HV_X64_MSR_REFERENCE_TSC		0x40000021
+
 /* Define the virtual APIC registers */
 #define HV_X64_MSR_EOI				0x40000070
 #define HV_X64_MSR_ICR				0x40000071
@@ -179,6 +182,10 @@
 #define HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_MASK	\
 		(~((1ull << HV_X64_MSR_APIC_ASSIST_PAGE_ADDRESS_SHIFT) - 1))
 
+#define HV_X64_MSR_TSC_REFERENCE_ENABLE			0x00000001
+#define HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT		12
+
+
 #define HV_PROCESSOR_POWER_STATE_C0		0
 #define HV_PROCESSOR_POWER_STATE_C1		1
 #define HV_PROCESSOR_POWER_STATE_C2		2
@@ -191,4 +198,11 @@
 #define HV_STATUS_INVALID_ALIGNMENT		4
 #define HV_STATUS_INSUFFICIENT_BUFFERS		19
 
+typedef struct _HV_REFERENCE_TSC_PAGE {
+	uint32_t TscSequence;
+	uint32_t Rserved1;
+	uint64_t TscScale;
+	int64_t  TscOffset;
+} HV_REFERENCE_TSC_PAGE, *PHV_REFERENCE_TSC_PAGE;
+
 #endif
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b2e11f4..63ee09e 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -565,6 +565,8 @@ struct kvm_arch {
 	/* fields used by HYPER-V emulation */
 	u64 hv_guest_os_id;
 	u64 hv_hypercall;
+	u64 hv_ref_count;
+	u64 hv_tsc_page;
 
 	#ifdef CONFIG_KVM_MMU_AUDIT
 	int audit_point;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 4f76417..4538295 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -813,7 +813,7 @@ EXPORT_SYMBOL_GPL(kvm_rdpmc);
 static u32 msrs_to_save[] = {
 	MSR_KVM_SYSTEM_TIME, MSR_KVM_WALL_CLOCK,
 	MSR_KVM_SYSTEM_TIME_NEW, MSR_KVM_WALL_CLOCK_NEW,
-	HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL,
+	HV_X64_MSR_GUEST_OS_ID, HV_X64_MSR_HYPERCALL, HV_X64_MSR_REFERENCE_TSC,
 	HV_X64_MSR_APIC_ASSIST_PAGE, MSR_KVM_ASYNC_PF_EN, MSR_KVM_STEAL_TIME,
 	MSR_KVM_PV_EOI_EN,
 	MSR_IA32_SYSENTER_CS, MSR_IA32_SYSENTER_ESP, MSR_IA32_SYSENTER_EIP,
@@ -1428,6 +1428,8 @@ static bool kvm_hv_msr_partition_wide(u32 msr)
 	switch (msr) {
 	case HV_X64_MSR_GUEST_OS_ID:
 	case HV_X64_MSR_HYPERCALL:
+	case HV_X64_MSR_TIME_REF_COUNT:
+	case HV_X64_MSR_REFERENCE_TSC:
 		r = true;
 		break;
 	}
@@ -1438,6 +1440,7 @@ static bool kvm_hv_msr_partition_wide(u32 msr)
 static int set_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 {
 	struct kvm *kvm = vcpu->kvm;
+	unsigned long addr;
 
 	switch (msr) {
 	case HV_X64_MSR_GUEST_OS_ID:
@@ -1467,6 +1470,27 @@ static int set_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 data)
 		if (__copy_to_user((void __user *)addr, instructions, 4))
 			return 1;
 		kvm->arch.hv_hypercall = data;
+		kvm->arch.hv_ref_count = get_kernel_ns();
+		break;
+	}
+	case HV_X64_MSR_REFERENCE_TSC: {
+		HV_REFERENCE_TSC_PAGE tsc_ref;
+		tsc_ref.TscSequence =
+			boot_cpu_has(X86_FEATURE_CONSTANT_TSC) ? 1 : 0;
+		tsc_ref.TscScale =
+			((1LL << 32) / vcpu->arch.virtual_tsc_khz) << 32;
+		tsc_ref.TscOffset = 0;
+		if (!(data & HV_X64_MSR_TSC_REFERENCE_ENABLE)) {
+			kvm->arch.hv_tsc_page = data;
+			break;
+		}
+		addr = gfn_to_hva(vcpu->kvm, data >>
+			HV_X64_MSR_TSC_REFERENCE_ADDRESS_SHIFT);
+		if (kvm_is_error_hva(addr))
+			return 1;
+		if (__copy_to_user((void __user *)addr, &tsc_ref, sizeof(tsc_ref)))
+			return 1;
+		kvm->arch.hv_tsc_page = data;
 		break;
 	}
 	default:
@@ -1881,6 +1905,13 @@ static int get_msr_hyperv_pw(struct kvm_vcpu *vcpu, u32 msr, u64 *pdata)
 	case HV_X64_MSR_HYPERCALL:
 		data = kvm->arch.hv_hypercall;
 		break;
+	case HV_X64_MSR_TIME_REF_COUNT:
+		data = get_kernel_ns() - kvm->arch.hv_ref_count;
+		do_div(data, 100);
+		break;
+	case HV_X64_MSR_REFERENCE_TSC:
+		data = kvm->arch.hv_tsc_page;
+		break;
 	default:
 		vcpu_unimpl(vcpu, "Hyper-V unhandled rdmsr: 0x%x\n", msr);
 		return 1;
diff --git a/target-i386/cpu.c b/target-i386/cpu.c
index f3708e6..ad77b72 100644
--- a/target-i386/cpu.c
+++ b/target-i386/cpu.c
@@ -1250,6 +1250,8 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 hyperv_enable_relaxed_timing(true);

Re: Performance issue

2012-11-28 Thread George-Cristian Bîrzan
On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
 On Tuesday, November 27, 2012 11:13:12 PM George-Cristian Bîrzan wrote:
 On Tue, Nov 27, 2012 at 10:38 PM, Vadim Rozenfeld vroze...@redhat.com
 wrote:
  I have some code which do both reference time and invariant TSC but it
  will not work after migration. I will send it later today.

 Do you mean migrating guests? This is not an issue for us.
 OK, but don't say I didn't warn you :)

 There are two patches, one for kvm and another one for qemu.
 you will probably need to rebase them.
 Add hv_tsc cpu parameter to activate this feature.
 you will probably need to deactivate hpet by adding -no-hpet
 parameter as well.

I've also added +hv_relaxed since then, but this is the command I'm
using now and there's no change:

/usr/bin/qemu-kvm -name b691546e-79f8-49c6-a293-81067503a6ad -S -M
pc-1.2 -enable-kvm -m 16384 -smp 9,sockets=1,cores=9,threads=1 -uuid
b691546e-79f8-49c6-a293-81067503a6ad -no-user-config -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/b691546e-79f8-49c6-a293-81067503a6ad.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-hpet -no-shutdown -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/lib/libvirt/images/dis-magnetics-2-223101/d8b233c6-8424-4de9-ae3c-7c9a60288514,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback,aio=native
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2e:fb:a2:36:be,bus=pci.0,addr=0x3
-netdev tap,fd=40,id=hostnet1,vhost=on,vhostfd=41 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=22:94:44:5a:cb:24,bus=pci.0,addr=0x4
-vnc 127.0.0.1:0,password -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -cpu host,hv_tsc

I compiled qemu-1.2.0-24 after applying your patch, used the head for
KVM, and I see no difference. I've tried setting windows'
useplatformclock on and off, no change either.


Other than that, I was looking at a profiling trace of the running
software, and a lot of time (60%?) is spent in two functions from
hal.dll: HalpGetPmTimerSleepModePerfCounter when I disable HPET, and
HalpHPETProgramRolloverTimer otherwise, which does point at something
related to the timers.

Any other thing I can try?


--
George-Cristian Bîrzan


Re: Performance issue

2012-11-28 Thread George-Cristian Bîrzan
On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
 There are two patches, one for kvm and another one for qemu.

I just realised this: was I supposed to use qemu, or qemu-kvm? I used qemu.

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-28 Thread Gleb Natapov
On Wed, Nov 28, 2012 at 09:18:38PM +0200, George-Cristian Bîrzan wrote:
 On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
  There are two patches, one for kvm and another one for qemu.
 
 I just realised this. I was supposed to use qemu, or qemu-kvm? I used qemu
 
It does not matter, but you also need to recompile the kernel with the first patch.

--
Gleb.


Re: Performance issue

2012-11-28 Thread George-Cristian Bîrzan
On Wed, Nov 28, 2012 at 9:56 PM, Gleb Natapov g...@redhat.com wrote:
 On Wed, Nov 28, 2012 at 09:18:38PM +0200, George-Cristian Bîrzan wrote:
 On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
  There are two patches, one for kvm and another one for qemu.

 I just realised this. I was supposed to use qemu, or qemu-kvm? I used qemu

 Does not matter, but you need to also recompile kernel with the first patch.

Do I have to recompile the kernel, or just the module? I followed the
instructions at
http://www.linux-kvm.org/page/Code#building_an_external_module_with_older_kernels
but I guess I can do the whole kernel, if it might help.

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-28 Thread Gleb Natapov
On Wed, Nov 28, 2012 at 10:01:04PM +0200, George-Cristian Bîrzan wrote:
 On Wed, Nov 28, 2012 at 9:56 PM, Gleb Natapov g...@redhat.com wrote:
  On Wed, Nov 28, 2012 at 09:18:38PM +0200, George-Cristian Bîrzan wrote:
  On Wed, Nov 28, 2012 at 1:39 PM, Vadim Rozenfeld vroze...@redhat.com 
  wrote:
   There are two patches, one for kvm and another one for qemu.
 
  I just realised this. I was supposed to use qemu, or qemu-kvm? I used qemu
 
  Does not matter, but you need to also recompile kernel with the first patch.
 
 Do I have to recompile the kernel, or just the module? I followed the
 instructions at
 http://www.linux-kvm.org/page/Code#building_an_external_module_with_older_kernels
 but I guess I can do the whole kernel, if it might help.
 
The module is enough, but kvm-kmod is not what you want. Just rebuild the
whole kernel if you do not know how to rebuild only the KVM module for your
distribution's kernel.
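
If you do know your way around, rebuilding just the module is roughly (a
sketch, assuming a configured source tree matching your running kernel,
with the first patch applied; this host is AMD, hence kvm_amd):

  # from the top of the patched, configured kernel source tree:
  make M=arch/x86/kvm modules
  sudo make M=arch/x86/kvm modules_install
  sudo depmod -a
  sudo modprobe -r kvm_amd kvm && sudo modprobe kvm_amd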

--
Gleb.


Re: Performance issue

2012-11-27 Thread Gleb Natapov
On Mon, Nov 26, 2012 at 09:31:19PM +0200, George-Cristian Bîrzan wrote:
 On Sun, Nov 25, 2012 at 6:17 PM, George-Cristian Bîrzan g...@birzan.org 
 wrote:
  On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com wrote:
  What Windows is this? Can you try changing -cpu host to -cpu
  host,+hv_relaxed?
 
  This is on Windows Server 2008 R2 (sorry, forgot to mention that I
  guess), and I can try it tomorrow (US time), as getting a stream my
  way depends on complicated stuff. I will though, and let you know how
  it goes.
 
 I changed that, no difference.
 
 
Heh, I forgot that the part that should make a difference is not yet
upstream :(

--
Gleb.


Re: Performance issue

2012-11-27 Thread George-Cristian Bîrzan
On Tue, Nov 27, 2012 at 2:20 PM, Gleb Natapov g...@redhat.com wrote:
 On Mon, Nov 26, 2012 at 09:31:19PM +0200, George-Cristian Bîrzan wrote:
 On Sun, Nov 25, 2012 at 6:17 PM, George-Cristian Bîrzan g...@birzan.org 
 wrote:
  On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com wrote:
  What Windows is this? Can you try changing -cpu host to -cpu
  host,+hv_relaxed?
 
  This is on Windows Server 2008 R2 (sorry, forgot to mention that I
  guess), and I can try it tomorrow (US time), as getting a stream my
  way depends on complicated stuff. I will though, and let you know how
  it goes.

 I changed that, no difference.


 Heh, I forgot that the part that should make difference is not yet
 upstream :(

We can try recompiling kvm/qemu with some patches, if that'd help. At
this point, anything is on the table except changing Windows and the
hardware :-)

Also, it might be that the software doing the actual work is not well
written, but even so...

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-27 Thread Gleb Natapov
On Tue, Nov 27, 2012 at 02:29:20PM +0200, George-Cristian Bîrzan wrote:
 On Tue, Nov 27, 2012 at 2:20 PM, Gleb Natapov g...@redhat.com wrote:
  On Mon, Nov 26, 2012 at 09:31:19PM +0200, George-Cristian Bîrzan wrote:
  On Sun, Nov 25, 2012 at 6:17 PM, George-Cristian Bîrzan g...@birzan.org 
  wrote:
   On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com wrote:
   What Windows is this? Can you try changing -cpu host to -cpu
   host,+hv_relaxed?
  
   This is on Windows Server 2008 R2 (sorry, forgot to mention that I
   guess), and I can try it tomorrow (US time), as getting a stream my
   way depends on complicated stuff. I will though, and let you know how
   it goes.
 
  I changed that, no difference.
 
 
  Heh, I forgot that the part that should make difference is not yet
  upstream :(
 
 We can try recompiling kvm/qemu with some patches, if that'd help. At
 this point, anything is on the table except changing Windows and the
 hardware :-)

Vadim, do you have Hyper-V reference timer patches for KVM to try?

 
 Also, it might be that the software doing the actual work is not well
 written, but even so...
 
 --
 George-Cristian Bîrzan

--
Gleb.


Re: Performance issue

2012-11-27 Thread Vadim Rozenfeld
On Tuesday, November 27, 2012 04:54:47 PM Gleb Natapov wrote:
 On Tue, Nov 27, 2012 at 02:29:20PM +0200, George-Cristian Bîrzan wrote:
  On Tue, Nov 27, 2012 at 2:20 PM, Gleb Natapov g...@redhat.com wrote:
   On Mon, Nov 26, 2012 at 09:31:19PM +0200, George-Cristian Bîrzan wrote:
   On Sun, Nov 25, 2012 at 6:17 PM, George-Cristian Bîrzan 
   g...@birzan.org 
wrote:
On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com 
wrote:
What Windows is this? Can you try changing -cpu host to -cpu
host,+hv_relaxed?

This is on Windows Server 2008 R2 (sorry, forgot to mention that I
guess), and I can try it tomorrow (US time), as getting a stream my
way depends on complicated stuff. I will though, and let you know
how it goes.
   
   I changed that, no difference.
   
   Heh, I forgot that the part that should make difference is not yet
   upstream :(
  
  We can try recompiling kvm/qemu with some patches, if that'd help. At
  this point, anything is on the table except changing Windows and the
  hardware :-)
 
 Vadim do you have Hyper-v reference timer patches for KVM to try?
I have some code which does both the reference time and invariant TSC, but it
will not work after migration. I will send it later today.
Vadim.
 
  Also, it might be that the software doing the actual work is not well
  written, but even so...
  
  --
  George-Cristian Bîrzan
 
 --
   Gleb.


Re: Performance issue

2012-11-27 Thread George-Cristian Bîrzan
On Tue, Nov 27, 2012 at 10:38 PM, Vadim Rozenfeld vroze...@redhat.com wrote:
 I have some code which do both reference time and invariant TSC but it
 will not work after migration. I will send it later today.

Do you mean migrating guests? This is not an issue for us.

Also, it would be much appreciated!

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-26 Thread George-Cristian Bîrzan
On Sun, Nov 25, 2012 at 6:17 PM, George-Cristian Bîrzan g...@birzan.org wrote:
 On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com wrote:
 What Windows is this? Can you try changing -cpu host to -cpu
 host,+hv_relaxed?

 This is on Windows Server 2008 R2 (sorry, forgot to mention that I
 guess), and I can try it tomorrow (US time), as getting a stream my
 way depends on complicated stuff. I will though, and let you know how
 it goes.

I changed that, no difference.


--
George-Cristian Bîrzan


Re: Performance issue

2012-11-25 Thread Gleb Natapov
On Thu, Nov 22, 2012 at 09:17:34PM +0200, George-Cristian Bîrzan wrote:
 I'm trying to understand a performance problem (50% degradation in the
 VM) that I'm experiencing some systems with qemu-kvm. Running Fedora
 with 3.5.3-1.fc17.x86_64 or 3.6.6-1.fc17.x86_64, qemu 1.0.1 or 1.2.1
 on AMD Opteron 6176 and 6174, and all of them behave identically.
 
 A Windows guest is receiving a UDP MPEG stream that is being processed
 by TSReader. The stream comes in at about 73Mbps, but the VM cannot
 process more than 43Mbps. It's not a networking issue, the packets
 reach the guest and with iperf we can easily do 80Mbps. Also, with
 iperf, it can receive the packets from the streamer (even though it
 doesn't detect things properly, but it was just a way to see ).
 
 However, on an identical host (a 6174 CPU, even), a Windows install
 has absolutely no problem processing the same stream.
 
What Windows is this? Can you try changing -cpu host to -cpu
host,+hv_relaxed?

--
Gleb.


Re: Performance issue

2012-11-25 Thread George-Cristian Bîrzan
On Sun, Nov 25, 2012 at 5:19 PM, Gleb Natapov g...@redhat.com wrote:
 What Windows is this? Can you try changing -cpu host to -cpu
 host,+hv_relaxed?

This is on Windows Server 2008 R2 (sorry, forgot to mention that I
guess), and I can try it tomorrow (US time), as getting a stream my
way depends on complicated stuff. I will though, and let you know how
it goes.

--
George-Cristian Bîrzan


Fwd: Performance issue

2012-11-23 Thread George-Cristian Bîrzan
On Fri, Nov 23, 2012 at 9:26 AM, Stefan Hajnoczi stefa...@gmail.com wrote:
 Hi George-Cristian,
 On IRC you mentioned you found a solution.  Any updates?  Are you still
 seeing the performance problem?

It wasn't a solution, I just thought I knew why. I was thinking the
73Mbps were coming in at 188 bytes per packet, which would probably have
been too many packets for the machine to handle. Turns out, the stream
is coming in at 1358-byte packets, which means I'm back to square one.
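
For reference, the packet rates implied by those two sizes (rough arithmetic):

  # 73 Mbit/s is roughly 9.1 MB/s
  echo $((73000000 / 8 / 188)) $((73000000 / 8 / 1358))
  # -> ~48500 packets/s at 188 bytes vs ~6700 packets/s at 1358 bytes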

Also, I just got in to work and will try to write my own program to
read the stream. The actual workload these VMs will have to run is
not as simple as just decoding the stream, they have to transcode it,
but I don't have access to the source to see exactly what it's doing
(same with TSReader, but at least that's not something in-house for
our customer.)

--
George-Cristian Bîrzan


Performance issue

2012-11-22 Thread George-Cristian Bîrzan
I'm trying to understand a performance problem (50% degradation in the
VM) that I'm experiencing on some systems with qemu-kvm. I'm running Fedora
with kernel 3.5.3-1.fc17.x86_64 or 3.6.6-1.fc17.x86_64 and qemu 1.0.1 or 1.2.1
on AMD Opteron 6176 and 6174 hosts, and all of them behave identically.

A Windows guest is receiving a UDP MPEG stream that is being processed
by TSReader. The stream comes in at about 73Mbps, but the VM cannot
process more than 43Mbps. It's not a networking issue: the packets
reach the guest, and with iperf we can easily do 80Mbps. Also, with
iperf the guest can receive the packets from the streamer (even though
iperf doesn't interpret them properly, it was just a way to see that
they arrive).

However, on an identical host (a 6174 CPU, even), a Windows install
has absolutely no problem processing the same stream.

This is the command we're using to start qemu-kvm:

/usr/bin/qemu-kvm -name b691546e-79f8-49c6-a293-81067503a6ad -S -M
pc-1.2 -cpu host -enable-kvm -m 16384 -smp
16,sockets=1,cores=16,threads=1 -uuid
b691546e-79f8-49c6-a293-81067503a6ad -no-user-config -nodefaults
-chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/b691546e-79f8-49c6-a293-81067503a6ad.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-drive 
file=/var/lib/libvirt/images/dis-magnetics-2-223101/d8b233c6-8424-4de9-ae3c-7c9a60288514,if=none,id=drive-virtio-disk0,format=qcow2,cache=writeback,aio=native
-device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=22:2e:fb:a2:36:be,bus=pci.0,addr=0x3
-netdev tap,fd=32,id=hostnet1,vhost=on,vhostfd=33 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=22:94:44:5a:cb:24,bus=pci.0,addr=0x4
-vnc 127.0.0.1:4,password -vga cirrus -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6

As a side note, the TSReader application only uses one thread for
decoding the stream and one for network IO, even though using more
threads would solve the problem.

I've tried a smaller guest with 5 cores, pinned all of them to CPUs 6
to 11 (all in one NUMA node), each vCPU to an individual CPU, and I've
tried enabling huge pages (the TLB thing)... and that's about it. I'm
completely stuck.
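
(For reference, that kind of pinning can be expressed via libvirt roughly as
follows; a sketch, the domain name and hugepage count are placeholders:)

  for v in 0 1 2 3 4; do virsh vcpupin <domain> $v $((6 + v)); done
  echo 4096 > /proc/sys/vm/nr_hugepages   # plus <hugepages/> under <memoryBacking> in the XML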

Is this 50% hit something that's considered 'okay', or am I doing
something wrong? And if the latter, what/how can I debug it?

--
George-Cristian Bîrzan


Re: Performance issue

2012-11-22 Thread Stefan Hajnoczi
On Thu, Nov 22, 2012 at 09:17:34PM +0200, George-Cristian Bîrzan wrote:
 I'm trying to understand a performance problem (50% degradation in the
 VM) that I'm experiencing some systems with qemu-kvm. Running Fedora
 with 3.5.3-1.fc17.x86_64 or 3.6.6-1.fc17.x86_64, qemu 1.0.1 or 1.2.1
 on AMD Opteron 6176 and 6174, and all of them behave identically.
 
 A Windows guest is receiving a UDP MPEG stream that is being processed
 by TSReader. The stream comes in at about 73Mbps, but the VM cannot
 process more than 43Mbps. It's not a networking issue, the packets
 reach the guest and with iperf we can easily do 80Mbps. Also, with
 iperf, it can receive the packets from the streamer (even though it
 doesn't detect things properly, but it was just a way to see ).

Hi George-Cristian,
On IRC you mentioned you found a solution.  Any updates?  Are you still
seeing the performance problem?

Stefan


RE: virtio + vhost-net performance issue - preadv ?

2012-11-15 Thread Ben Clay
David-

Thanks for the followup.  That is disappointing, and I wish I knew why the
performance is so poor.  With the kernel and qemu replaced, I don't know
where the limitation is - raising the MTU makes no difference, and I also
tried a few different kernels inside the guest.  The network stack can
natively support high throughput (~23Gbps on IP over Infiniband on these
nodes), so I'm at a loss.  Maybe it's the lack of preadv?

Ben Clay
rbc...@ncsu.edu


-Original Message-
From: David Cruz [mailto:david.c...@gigas.com] 
Sent: Wednesday, November 14, 2012 1:27 AM
To: Ben Clay
Subject: Re: virtio + vhost-net performance issue - preadv ?

Got the same results here last year when we were testing.

In the end, we use only CentOS 6. On top of that, we changed the kernel to
3.5.5 due to unstable Windows virtualization when running several Windows
Server guests on the same hypervisor.

2-4 GBit/s in Centos5 is acceptable. I think that's the max you can get in
that version.

David

2012/11/14 Ben Clay rbc...@ncsu.edu:
 I have a working copy of libvirt 0.10.2 + qemu 1.2 installed on a 
 vanilla up-to-date (2.6.32-279.9.1) CentOS 6 host, and get very good 
 VM - VM network performance (both running on the same host) using 
 virtio.  I have cgroups set to cap the VMs at 10Gbps and iperf shows 
 I'm getting exactly 10Gbps.

 I copied these VMs to a CentOS 5 host and installed libvirt 1.0 + qemu
1.2.
 However, the best performance I can get in between the VMs (again 
 running on the same host) is ~2Gbps.  In both cases, this is over a 
 bridged interface with static IPs assigned to each VM.  I've also 
 tried virtual networking with NAT or routing, yielding the same results.

 I figured it was due to vhost-net missing on the older CentOS 5 
 kernel, so I installed 2.6.39-4.2 from ELRepo and got the 
 /dev/vhost-net device and vhost processes associated with each VM:

 ]$ lsmod | grep vhost
 vhost_net  28446  2
 tun23888  7 vhost_net

 ]$ ps aux | grep vhost-
 root  9628  0.0  0.0  0 0 ?S17:57   0:00
 [vhost-9626]
 root  9671  0.0  0.0  0 0 ?S17:57   0:00
 [vhost-9670]

 ]$ ls /dev/vhost-net -al
 crw--- 1 root root 10, 58 Nov 13 15:19 /dev/vhost-net

 After installing the new kernel, I also tried rebuilding libvirt and 
 qemu, to no avail.  I also disabled cgroups, just in case it was 
 getting in the way, as well as iptables.  I can see the virtio_net 
 module loaded inside the guest, and using virtio raises my performance 
 from 400Mbps to 2Gbps, so it does make some improvement.

 The only differences between the two physical hosts that I can find are:

 - qemu on the CentOS 5 host builds without preadv support - would this 
 make such a huge performance difference?  CentOS5 only comes with an 
 old version of glibc, which is missing preadv
 - qemu on the CentOS 5 host builds without PIE
 - libvirt 1.0 was required on the CentOS 5 host, since 0.10.2 had a 
 build bug. This shouldn't matter I don't think.
 - I haven't tried rebuilding the VMs from scratch on the CentOS5 host, 
 which I guess is worth a try.

 The qemu process is being started with virtio + vhost:

 /usr/bin/qemu-system-x86_64 -name vmname -S -M pc-1.2 -enable-kvm -m 4096
 -smp 8,sockets=8,cores=1,threads=1 -uuid 212915ed-a34a-4d6d-68f5-2216083a7693
 -no-user-config -nodefaults
 -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
 -drive file=/mnt/vmname/disk.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=16,id=hostnet0,vhost=on,vhostfd=18
 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,bus=pci.0,addr=0x3
 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
 -device usb-tablet,id=input0 -vnc 127.0.0.1:1 -vga cirrus
 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

 The relevant part of my libvirt config, of which I've tried omitting 
 the target, alias and address elements with no difference in performance:

   <interface type='bridge'>
     <mac address='00:11:22:33:44:55'/>
     <source bridge='br0'/>
     <target dev='vnet0'/>
     <model type='virtio'/>
     <alias name='net0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>

 Is there something else which could be getting in the way here?

 Thanks!

 Ben Clay
 rbc...@ncsu.edu




virtio + vhost-net performance issue - preadv ?

2012-11-13 Thread Ben Clay
I have a working copy of libvirt 0.10.2 + qemu 1.2 installed on a vanilla
up-to-date (2.6.32-279.9.1) CentOS 6 host, and get very good VM - VM
network performance (both running on the same host) using virtio.  I have
cgroups set to cap the VMs at 10Gbps and iperf shows I'm getting exactly
10Gbps.
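
For reference, the kind of iperf run I mean (the guest IP and flags here
are just examples):

  # on VM1:
  iperf -s
  # on VM2:
  iperf -c 192.168.122.11 -t 30 -P 4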

I copied these VMs to a CentOS 5 host and installed libvirt 1.0 + qemu 1.2.
However, the best performance I can get in between the VMs (again running on
the same host) is ~2Gbps.  In both cases, this is over a bridged interface
with static IPs assigned to each VM.  I've also tried virtual networking
with NAT or routing, yielding the same results.

I figured it was due to vhost-net missing on the older CentOS 5 kernel, so I
installed 2.6.39-4.2 from ELRepo and got the /dev/vhost-net device and vhost
processes associated with each VM:

]$ lsmod | grep vhost
vhost_net  28446  2 
tun23888  7 vhost_net

]$ ps aux | grep vhost-
root  9628  0.0  0.0  0 0 ?S17:57   0:00
[vhost-9626]
root  9671  0.0  0.0  0 0 ?S17:57   0:00
[vhost-9670]

]$ ls /dev/vhost-net -al
crw--- 1 root root 10, 58 Nov 13 15:19 /dev/vhost-net

After installing the new kernel, I also tried rebuilding libvirt and qemu,
to no avail.  I also disabled cgroups, just in case it was getting in the
way, as well as iptables.  I can see the virtio_net module loaded inside the
guest, and using virtio raises my performance from 400Mbps to 2Gbps, so it
does make some improvement.
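
One more sanity check that the running guest really got a vhost backend
(a sketch; <qemu-pid> is a placeholder):

  ps -o cmd= -p <qemu-pid> | tr ' ' '\n' | grep vhost
  # expect something like: tap,fd=16,id=hostnet0,vhost=on,vhostfd=18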

The only differences between the two physical hosts that I can find are:

- qemu on the CentOS 5 host builds without preadv support - would this make
such a huge performance difference?  CentOS5 only comes with an old version
of glibc, which is missing preadv (a quick way to check the build is sketched
after this list)
- qemu on the CentOS 5 host builds without PIE
- libvirt 1.0 was required on the CentOS 5 host, since 0.10.2 had a build
bug. This shouldn't matter I don't think.
- I haven't tried rebuilding the VMs from scratch on the CentOS5 host, which
I guess is worth a try.
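
Regarding the preadv point above, this is roughly how one can confirm what
the qemu build detected (a sketch, run in the qemu build tree after
./configure; CONFIG_PREADV is the configure-time flag I mean):

  grep PREADV config-host.mak
  # CONFIG_PREADV=y means preadv/pwritev were found at configure time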

The qemu process is being started with virtio + vhost:

/usr/bin/qemu-system-x86_64 -name vmname -S -M pc-1.2 -enable-kvm -m 4096
-smp 8,sockets=8,cores=1,threads=1 -uuid 212915ed-a34a-4d6d-68f5-2216083a7693
-no-user-config -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-drive file=/mnt/vmname/disk.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-netdev tap,fd=16,id=hostnet0,vhost=on,vhostfd=18
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:11:22:33:44:55,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
-device usb-tablet,id=input0 -vnc 127.0.0.1:1 -vga cirrus
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

The relevant part of my libvirt config, of which I've tried omitting the
target, alias and address elements with no difference in performance:

   <interface type='bridge'>
     <mac address='00:11:22:33:44:55'/>
     <source bridge='br0'/>
     <target dev='vnet0'/>
     <model type='virtio'/>
     <alias name='net0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>

Is there something else which could be getting in the way here?

Thanks!

Ben Clay
rbc...@ncsu.edu





RE: VM performance issue in KVM guests.

2010-04-18 Thread Zhang, Xiantao
Srivatsa Vaddagiri wrote:
 On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
 On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
 
 Certainly that has even greater potential for Linux guests.  Note
 that we spin on mutexes now, so we need to prevent preemption while
 the lock owner is running.
 
 either that, or disable spinning on (para) virt kernels. Para virt
 kernels could possibly extend the thing by also checking to see if
 the owner's vcpu is running.
 
 I suspect we will need a combination of both approaches, given that
 we will not be able to avoid preempting guests in their critical
 section always (too long critical sections or real-time tasks wanting
 to preempt). Other idea is to gang-schedule VCPUs of the same guest
 as much as possible? 
Gang-scheduling may be the ideal solution to this issue, but it requires a lot of
changes to the host's scheduler to implement, and it may be hard to get upstream.
So can we figure out an easier (maybe not best) way to do this?
Xiantao


Re: VM performance issue in KVM guests.

2010-04-17 Thread Avi Kivity

On 04/16/2010 05:27 AM, Zhang, Xiantao wrote:




When vcpus are pinned to pcpus, there is a 50% chance that a guest's
vcpus will be co-scheduled and spinlocks will perform will.

When vcpus are not pinned, but affine wakeups are disabled, there is a
33% chance that vcpus will be co-scheduled.

When vcpus are not pinned and affine wakeups are enabled there is a 0%
chance that vcpus will be co-scheduled.

Keeping both vcpus on the same core actually makes sense since they
can communicate through the local cache faster than across cores.
What we need is to make sure that they don't spin.

Windows 2008 can report spinlock spinning through a hypercall.  Can
you hook to that interface and see if it happens regularly?
Altenatively use a PLE capable host and trace the kvm_vcpu_on_spin()
function.
 

We only tried windows 2003 for the experiments, and have no data related to 
windows 2008.
But maybe we can have  a try later.  Anyway, the key point is we have to 
enhance the scheduler to let it
Know which threads are vcpu threads to avoid perf loss in this case.
   


I have two worries about this approach:

1.  Affine wakeups were introduced for a reason; if we disable them
(even just for vcpus), we lose something.  Maybe we can tune the
mechanism not to fail, instead of disabling it.


2.  Affine wakeups are a scheduler-internal detail.  How do we explain
what such a knob does?  The scheduler may not have affine wakeups in a few
years, yet we'll have an ABI to disable them.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: VM performance issue in KVM guests.

2010-04-17 Thread Avi Kivity

On 04/15/2010 04:33 PM, Peter Zijlstra wrote:

On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
   

Certainly that has even greater potential for Linux guests.  Note that
we spin on mutexes now, so we need to prevent preemption while the lock
owner is running.
 

either that, or disable spinning on (para) virt kernels.


What would you do instead?

Note we can't disable spinning on Windows or pre 2.6.36 kernels.


Para virt
kernels could possibly extend the thing by also checking to see if the
owner's vcpu is running.
   


Certainly that's worth doing.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: VM performance issue in KVM guests.

2010-04-16 Thread Peter Zijlstra
On Thu, 2010-04-15 at 09:43 -0700, Srivatsa Vaddagiri wrote:
 On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
  On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
   
   Certainly that has even greater potential for Linux guests.  Note that 
   we spin on mutexes now, so we need to prevent preemption while the lock 
   owner is running. 
  
  either that, or disable spinning on (para) virt kernels. Para virt
  kernels could possibly extend the thing by also checking to see if the
  owner's vcpu is running.
 
 I suspect we will need a combination of both approaches, given that we will 
 not
 be able to avoid preempting guests in their critical section always (too long
 critical sections or real-time tasks wanting to preempt). Other idea is to
 gang-schedule VCPUs of the same guest as much as possible?

Except gang scheduling is a scalability nightmare waiting to happen. I
much prefer this hint thing.


Re: VM performance issue in KVM guests.

2010-04-15 Thread Avi Kivity

On 04/15/2010 07:58 AM, Srivatsa Vaddagiri wrote:
On Sun, Apr 11, 2010 at 11:40 PM, Avi Kivity a...@redhat.com wrote:


The current handing of PLE is very suboptimal.  With proper
directed yield we should be much better there.



Hi Avi,
  By directed yield, do you mean transfer the timeslice of 
one thread (which is contending for a lock) to another thread (which 
is holding a lock)?


It's a priority transfer (in CFS terms, vruntime) (we don't know who 
holds the lock, so we pick a co-vcpu at random).


If at that point in time, the lock-holder thread/VCPU is actually not 
running currently, ie it is at the back of the runqueue, would it help 
much? In such case, it will take time for the lock holder to run again 
and the default timeslice it would have got could have been sufficient 
to release the lock?


The idea is to increase the chances of the target vcpu to run, and to
decrease the chances of the spinner to run (hopefully they change places).




I am also working on a prototype for some other technique here - to
avoid preempting guest threads/VCPUs in the middle of their
(spin-lock) critical section. This requires the guest to hint the host
when it is in such a section. [1] has shown a 33% improvement on an
apache benchmark based on this idea.




Certainly that has even greater potential for Linux guests.  Note that 
we spin on mutexes now, so we need to prevent preemption while the lock 
owner is running.



--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: VM performance issue in KVM guests.

2010-04-15 Thread Peter Zijlstra
On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
 
 Certainly that has even greater potential for Linux guests.  Note that 
 we spin on mutexes now, so we need to prevent preemption while the lock 
 owner is running. 

either that, or disable spinning on (para) virt kernels. Para virt
kernels could possibly extend the thing by also checking to see if the
owner's vcpu is running.



Re: VM performance issue in KVM guests.

2010-04-15 Thread Srivatsa Vaddagiri
On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
 On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
  
  Certainly that has even greater potential for Linux guests.  Note that 
  we spin on mutexes now, so we need to prevent preemption while the lock 
  owner is running. 
 
 either that, or disable spinning on (para) virt kernels. Para virt
 kernels could possibly extend the thing by also checking to see if the
 owner's vcpu is running.

I suspect we will need a combination of both approaches, given that we will not
be able to always avoid preempting guests in their critical sections (too-long
critical sections, or real-time tasks wanting to preempt). Another idea is to
gang-schedule VCPUs of the same guest as much as possible?

- vatsa


RE: VM performance issue in KVM guests.

2010-04-15 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.
 
 
 Even in overcommit case, if vcpu threads of one qemu are not
 scheduled or pulled to the same logical processor, the performance
 drop is tolerant like Xen's case today. But for KVM, it has to
 suffer from additional performance loss, since host's scheduler
 actively pulls these vcpu threads together.
 
 
 
 Can you quantify this loss?  Give examples of what happens?
 
  For example, one machine is configured with 2 pCPUs and there are
  two Windows guests running on the machine, and each guest is
  configured with 2 vcpus and one webbench server runs in it.
  If we use the host's default scheduler, webbench's performance is very bad,
  but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we can
  see a 5-10X performance improvement with the same CPU utilization.
  In addition, we also see that kvm's perf scalability is impacted in
  large systems: for some performance experiments, kvm's perf begins
  to drop when vCPUs are overcommitted and pCPUs are saturated, but once
  the wake_up_affine feature is switched off in the scheduler, kvm's perf
  keeps rising in this case.
 
 
 Ok.  This is probably due to spinlock contention.

Yes, exactly. 

 When vcpus are pinned to pcpus, there is a 50% chance that a guest's
 vcpus will be co-scheduled and spinlocks will perform well.
 
 When vcpus are not pinned, but affine wakeups are disabled, there is a
 33% chance that vcpus will be co-scheduled.
 
 When vcpus are not pinned and affine wakeups are enabled there is a 0%
 chance that vcpus will be co-scheduled.
 
 Keeping both vcpus on the same core actually makes sense since they
 can communicate through the local cache faster than across cores. 
 What we need is to make sure that they don't spin.
 
 Windows 2008 can report spinlock spinning through a hypercall.  Can
 you hook to that interface and see if it happens regularly? 
 Alternatively, use a PLE capable host and trace the kvm_vcpu_on_spin()
 function. 
We only tried Windows 2003 in the experiments, and have no data for
Windows 2008, but maybe we can try it later. Anyway, the key point is that we
have to enhance the scheduler to let it know which threads are vcpu threads,
to avoid the performance loss in this case.
Xiantao


Re: VM performance issue in KVM guests.

2010-04-14 Thread Avi Kivity

On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:



Spin loops need to be addressed first, they are known to kill
performance in overcommit situations.

 

Even in the overcommit case, if the vcpu threads of one qemu are not
scheduled or pulled to the same logical processor, the performance
drop is tolerable, as in Xen's case today. But for KVM, it has to
suffer additional performance loss, since the host's scheduler
actively pulls these vcpu threads together.


   

Can you quantify this loss?  Give examples of what happens?
 

For example, one machine is configured with 2 pCPUs and there are two Windows
guests running on the machine; each guest is configured with 2 vcpus and runs
one webbench server.
If we use the host's default scheduler, webbench's performance is very bad, but
if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we see a 5-10X
performance improvement with the same CPU utilization.
In addition, kvm's scalability is also impacted on large systems: in some
performance experiments, kvm's performance begins to drop when vCPUs are
overcommitted and pCPUs are saturated, but once the wake_up_affine feature is
switched off in the scheduler, kvm's performance keeps rising in this case.
   


Ok.  This is probably due to spinlock contention.

When vcpus are pinned to pcpus, there is a 50% chance that a guest's 
vcpus will be co-scheduled and spinlocks will perform well.


When vcpus are not pinned, but affine wakeups are disabled, there is a 
33% chance that vcpus will be co-scheduled.


When vcpus are not pinned and affine wakeups are enabled there is a 0% 
chance that vcpus will be co-scheduled.


Keeping both vcpus on the same core actually makes sense since they can 
communicate through the local cache faster than across cores.  What we 
need is to make sure that they don't spin.


Windows 2008 can report spinlock spinning through a hypercall.  Can you 
hook to that interface and see if it happens regularly?  Alternatively, 
use a PLE capable host and trace the kvm_vcpu_on_spin() function.


--
error compiling committee.c: too many arguments to function



Re: VM performance issue in KVM guests.

2010-04-13 Thread Avi Kivity

On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:

Avi Kivity wrote:
   

On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
   

What was the performance hit?  What was your I/O setup (image
format, using aio?)

 

The issue only happens when the vcpu number is over-committed (e.g.
vcpu/pcpu > 2) and the physical cpus are saturated. For example, when running
webbench in a Windows OS in this case, its performance drops by 80%.
In our experiment, we are using an image file through virtio, and I
think aio should be used by default as well.

   

Is this on a machine that does pause-loop exits?  The current handling
of PLE is very suboptimal.  With proper directed yield we should be
much better there.

Without PLE, we need paravirtualized spinlocks, no way around it.
 

PLE can mitigate the issue to some extent, and a pv solution
should be helpful as well.  But for Windows guests running on machines without
PLE, we still need to enhance the host side to resolve the issue.
   


Well, was this on a machine with PLE or without PLE?


Spin loops need to be addressed first, they are known to kill
performance in overcommit situations.
 

Even in the overcommit case, if the vcpu threads of one qemu are not scheduled
or pulled to the same logical processor, the performance drop is tolerable, as
in Xen's case today. But for KVM, it has to suffer additional performance
loss, since the host's scheduler actively pulls these vcpu threads together.

   


Can you quantify this loss?  Give examples of what happens?


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



RE: VM performance issue in KVM guests.

2010-04-13 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:
 Avi Kivity wrote:
 
 On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
 
 What was the performance hit?  What was your I/O setup (image
 format, using aio?) 
 
 
 The issue only happens when the vcpu number is over-committed (e.g.
 vcpu/pcpu > 2) and the physical cpus are saturated. For example, when
 running webbench in a Windows OS in this case, its performance drops by
 80%. In our experiment, we are using an image file through virtio,
 and I think aio should be used by default as well.
 
 
 Is this on a machine that does pause-loop exits?  The current
 handling of PLE is very suboptimal.  With proper directed yield we
 should be much better there. 
 
 Without PLE, we need paravirtualized spinlocks, no way around it.
 
 PLE can mitigate the issue to some extent, and a pv
 solution should be helpful as well.  But for Windows guests running on
 machines without PLE, we still need to enhance the host side to resolve
 the issue.
 
 
 Well, was this on a machine with PLE or without PLE?

I am saying the machine has no PLE feature support. Even with PLE feature 
support, there is still performance loss due to PLE's cost. 
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.
 
 Even in the overcommit case, if the vcpu threads of one qemu are not
 scheduled or pulled to the same logical processor, the performance
 drop is tolerable, as in Xen's case today. But for KVM, it has to
 suffer additional performance loss, since the host's scheduler
 actively pulls these vcpu threads together.
 
 
 Can you quantify this loss?  Give examples of what happens?

For example, one machine is configured with 2 pCPUs and there are two Windows
guests running on the machine; each guest is configured with 2 vcpus and runs
one webbench server.
If we use the host's default scheduler, webbench's performance is very bad, but
if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we see a 5-10X
performance improvement with the same CPU utilization.
In addition, kvm's scalability is also impacted on large systems: in some
performance experiments, kvm's performance begins to drop when vCPUs are
overcommitted and pCPUs are saturated, but once the wake_up_affine feature is
switched off in the scheduler, kvm's performance keeps rising in this case.
Xiantao
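
For reference, the pinning described in the example above amounts to setting
the CPU affinity of each qemu vcpu thread on the host. A minimal sketch
(assuming you have already looked up the vcpu thread TIDs, e.g. under
/proc/<qemu-pid>/task/):

/*
 * Pin one qemu vcpu thread (identified by its host TID) to one physical CPU,
 * as in the vCPU0->pCPU0 / vCPU1->pCPU1 experiment described above.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>

static int pin_tid_to_cpu(pid_t tid, int cpu)
{
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    /* With a thread id, sched_setaffinity() pins just that thread. */
    return sched_setaffinity(tid, sizeof(set), &set);
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <vcpu-thread-tid> <pcpu>\n", argv[0]);
        return 1;
    }

    pid_t tid = (pid_t)atoi(argv[1]);
    int cpu = atoi(argv[2]);

    if (pin_tid_to_cpu(tid, cpu) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pinned tid %d to pcpu %d\n", (int)tid, cpu);
    return 0;
}

In practice the same pinning is usually done with taskset or virsh vcpupin;
the point is simply that each vcpu thread ends up with a dedicated pcpu.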



Re: VM performance issue in KVM guests.

2010-04-12 Thread Avi Kivity

On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:



What was the performance hit?  What was your I/O setup (image format,
using aio?)
 

The issue only happens when the vcpu number is over-committed (e.g. vcpu/pcpu > 2)
and the physical cpus are saturated. For example, when running webbench in a
Windows OS in this case, its performance drops by 80%.  In our experiment, we are
using an image file through virtio, and I think aio should be used by default as well.
   


Is this on a machine that does pause-loop exits?  The current handling of 
PLE is very suboptimal.  With proper directed yield we should be much 
better there.


Without PLE, we need paravirtualized spinlocks, no way around it.
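
For readers unfamiliar with the term: a directed yield means that on a
pause-loop exit the host does not simply yield the pcpu, but donates the time
slice to another vcpu of the same VM that is runnable yet not currently
running (most likely the preempted lock holder). A simplified sketch of the
idea, not the actual kvm_vcpu_on_spin() code:

/*
 * Simplified model of a directed yield on a pause-loop exit: instead of a
 * blind yield, pick a runnable-but-not-running vcpu of the same VM (likely
 * the preempted lock holder) and donate the time slice to it.
 */
#include <stdbool.h>
#include <stddef.h>

struct vcpu {
    int  id;
    bool runnable;   /* has work to do               */
    bool running;    /* currently on a physical CPU  */
};

struct vm {
    struct vcpu *vcpus;
    size_t       nr_vcpus;
};

/* Would boost/deschedule threads in a real hypervisor; stubbed here. */
static void yield_to(struct vcpu *target) { (void)target; }
static void plain_yield(void) { }

/* Called when a spinning vcpu triggers a PLE exit. */
static void on_pause_loop_exit(struct vm *vm, struct vcpu *spinner)
{
    size_t start = (size_t)spinner->id + 1;   /* rotate for fairness */

    for (size_t i = 0; i < vm->nr_vcpus; i++) {
        struct vcpu *cand = &vm->vcpus[(start + i) % vm->nr_vcpus];

        if (cand == spinner)
            continue;
        if (cand->runnable && !cand->running) {
            /* Probably preempted while holding the lock the spinner wants:
             * give it the pcpu so it can release the lock quickly. */
            yield_to(cand);
            return;
        }
    }
    plain_yield();   /* nobody obvious to help: fall back to a plain yield */
}

int main(void)
{
    struct vcpu v[2] = {
        { .id = 0, .runnable = true, .running = true  },  /* the spinner */
        { .id = 1, .runnable = true, .running = false },  /* preempted   */
    };
    struct vm guest = { .vcpus = v, .nr_vcpus = 2 };

    on_pause_loop_exit(&guest, &v[0]);   /* donates the slice to vcpu 1 */
    return 0;
}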


After analyzing the Linux scheduler, we found it is indeed caused
by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
SYNC_WAKEUPS etc. With these features on, the Linux scheduler often tries
to schedule the vcpu threads of one guest to the same logical
processor when vcpus are over-committed and the logical processors are
saturated. Once the vcpu threads of one VM are scheduled to the same
LP, system performance drops dramatically with some workloads (like
webbench running in a Windows OS).

   

Were the affine wakeups due to the kernel (emulated guest IPIs) or
qemu?
 

We have two basic guesses about the reason: one is wakeup affinity between vcpu
threads due to IPIs, and the other is wakeup affinity between io threads and vcpu
threads.
   


It would be good to find out.


Most likely it also hits non-virtualized loads as well.  If the
scheduler pulls two long-running threads to the same cpu, performance
will take a hit.
 

The hit only happens when the physical cpus are saturated. Scheduling
multiple non-virtualized threads of one process to the same processor can benefit
performance due to cache sharing or other affinities, but as you know it hurts
performance a lot to schedule two vcpu threads to the same processor, due to
mutual spin-locking in guests.
   


Spin loops need to be addressed first, they are known to kill 
performance in overcommit situations.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



RE: VM performance issue in KVM guests.

2010-04-12 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
 What was the performance hit?  What was your I/O setup (image
 format, using aio?) 
 
 The issue only happens when the vcpu number is over-committed (e.g.
 vcpu/pcpu > 2) and the physical cpus are saturated. For example, when running
 webbench in a Windows OS in this case, its performance drops by 80%.
 In our experiment, we are using an image file through virtio, and I
 think aio should be used by default as well.
 
 
 Is this on a machine that does pause-loop exits?  The current handling
 of PLE is very suboptimal.  With proper directed yield we should be
 much better there.
 
 Without PLE, we need paravirtualized spinlocks, no way around it.

PLE can mitigate the issue to some extent, and a pv solution
should be helpful as well.  But for Windows guests running on machines without
PLE, we still need to enhance the host side to resolve the issue.

 After analyzing the Linux scheduler, we found it is indeed caused
 by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
 SYNC_WAKEUPS etc. With these features on, the Linux scheduler often
 tries to schedule the vcpu threads of one guest to the same
 logical processor when vcpus are over-committed and the logical
 processors are saturated. Once the vcpu threads of one VM are
 scheduled to the same LP, system performance drops dramatically
 with some workloads (like webbench running in a Windows OS).
 
 The hit only happens when the physical cpus are saturated.
 Scheduling multiple non-virtualized threads of one process to the same
 processor can benefit performance due to cache sharing or other
 affinities, but as you know it hurts performance a lot to schedule
 two vcpu threads to the same processor, due to mutual spin-locking in
 guests.
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.

Even in the overcommit case, if the vcpu threads of one qemu are not scheduled
or pulled to the same logical processor, the performance drop is tolerable, as
in Xen's case today. But for KVM, it has to suffer additional performance
loss, since the host's scheduler actively pulls these vcpu threads together.
Xiantao 



RE: VM performance issue in KVM guests.

2010-04-11 Thread Zhang, Xiantao
Avi Kivity wrote:
 (copying lkml and some scheduler folk)
 
 On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:
 Hi, all
We are working on the scalability work for KVM guests, and found
one big issue in the Linux scheduler that may impact a guest's
 performance and scalability a lot for some special workloads running
 in a VM.  In the current Linux scheduler, there are some features to
 enhance application performance which are defined in the file
 kvm.git/kernel/sched_features.h. Certainly, they are mostly
 beneficial optimizations to improve the system's performance, but
 unluckily, some of them may hurt a VM's performance and scalability in
 the KVM case. We know that if two or more vcpus of one guest are
 scheduled to the same logical processor, the same CPU utilization may
 generate less valid output, due to mutual locking in the VM's OS, than
 when they are scheduled to different logical processors. And we also know
 that a VM's vcpus are emulated or executed through the threads of qemu for
 KVM.  If the vcpu threads of qemu are often pulled to the same
 logical processor by some features of the Linux scheduler, kvm
 guests' performance may be hurt a lot.  In our performance testing,
 the results also show this performance bottleneck due to this issue.
 
 What was the performance hit?  What was your I/O setup (image format,
 using aio?)

The issue only happens when the vcpu number is over-committed (e.g. vcpu/pcpu > 2)
and the physical cpus are saturated. For example, when running webbench in a
Windows OS in this case, its performance drops by 80%.  In our experiment, we are
using an image file through virtio, and I think aio should be used by default as
well.


 After analyzing the Linux scheduler, we found it is indeed caused
 by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
 SYNC_WAKEUPS etc. With these features on, the Linux scheduler often tries
 to schedule the vcpu threads of one guest to the same logical
 processor when vcpus are over-committed and the logical processors are
 saturated. Once the vcpu threads of one VM are scheduled to the same
 LP, system performance drops dramatically with some workloads (like
 webbench running in a Windows OS).
 
 
 Were the affine wakeups due to the kernel (emulated guest IPIs) or
 qemu? 

We have two basic guesses about the reason: one is wakeup affinity between vcpu
threads due to IPIs, and the other is wakeup affinity between io threads and vcpu
threads.

 To verify this finding, we also worked out a simple patch,
 attached to the mail, to dynamically switch off the two scheduler
 features mentioned above when the scheduler knows the tasks being scheduled
 are vcpu threads, and we found the whole system's performance
 and scalability are improved a lot.  Certainly, this patch is not
 good for upstream, but it can prompt us to think about how to optimize
 the Linux scheduler, and we also want to initiate a discussion about
 how to make Linux's scheduler more friendly to virtualization.
 Besides, this may not be only kvm's special issue; instead it
 should be a common issue for host-based VMs, and we also expect that
 we can have an elegant solution to thoroughly resolve the
 performance and scalability gap compared with hypervisor-based VMs.
 
 
 Most likely it also hits non-virtualized loads as well.  If the
 scheduler pulls two long-running threads to the same cpu, performance
 will take a hit.

The hit only happens when the physical cpus are saturated. Scheduling
multiple non-virtualized threads of one process to the same processor can benefit
performance due to cache sharing or other affinities, but as you know it hurts
performance a lot to schedule two vcpu threads to the same processor, due to
mutual spin-locking in guests.
Xiantao


VM performance issue in KVM guests.

2010-04-10 Thread Zhang, Xiantao
Hi, all  
  We are working on the scalability work for KVM guests, and found one big
issue in the Linux scheduler that may impact a guest's performance and
scalability a lot for some special workloads running in a VM.  In the current
Linux scheduler, there are some features to enhance application performance
which are defined in the file kvm.git/kernel/sched_features.h. Certainly, they
are mostly beneficial optimizations to improve the system's performance, but
unluckily, some of them may hurt a VM's performance and scalability in the KVM
case.
  We know that if two or more vcpus of one guest are scheduled to the same
logical processor, the same CPU utilization may generate less valid output, due
to mutual locking in the VM's OS, than when they are scheduled to different
logical processors. And we also know that a VM's vcpus are emulated or executed
through the threads of qemu for KVM.  If the vcpu threads of qemu are often
pulled to the same logical processor by some features of the Linux scheduler,
kvm guests' performance may be hurt a lot.  In our performance testing, the
results also show this performance bottleneck due to this issue. After
analyzing the Linux scheduler, we found it is indeed caused by known features
of the Linux scheduler, such as AFFINE_WAKEUPS, SYNC_WAKEUPS etc. With these
features on, the Linux scheduler often tries to schedule the vcpu threads of
one guest to the same logical processor when vcpus are over-committed and the
logical processors are saturated. Once the vcpu threads of one VM are scheduled
to the same LP, system performance drops dramatically with some workloads (like
webbench running in a Windows OS).
   To verify this finding, we also worked out a simple patch, attached to the
mail, to dynamically switch off the two scheduler features mentioned above when
the scheduler knows the tasks being scheduled are vcpu threads, and we found
the whole system's performance and scalability are improved a lot.  Certainly,
this patch is not good for upstream, but it can prompt us to think about how to
optimize the Linux scheduler, and we also want to initiate a discussion about
how to make Linux's scheduler more friendly to virtualization.  Besides, this
may not be only kvm's special issue; instead it should be a common issue for
host-based VMs, and we also expect that we can have an elegant solution to
thoroughly resolve the performance and scalability gap compared with
hypervisor-based VMs.
Any comments?
Thanks!
Xiantao

sheduler_issue_fix.patch
Description: sheduler_issue_fix.patch
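
The attachment is not reproduced here, but the idea it implements, as described
above, can be sketched in a standalone form: skip the affine/sync wakeup
heuristics when either task involved is a vcpu thread. The types and the
is_vcpu_thread() test below are simplified stand-ins, not the real scheduler
interfaces or the actual patch.

/*
 * Standalone illustration: when deciding whether to pull a woken task next
 * to its waker (affine/sync wakeup), skip that heuristic if either task is
 * a vcpu thread, so two vcpus of one guest are not packed onto one pcpu.
 */
#include <stdbool.h>

struct task {
    bool is_vcpu;    /* e.g. a qemu vcpu thread  */
    int  prev_cpu;   /* cpu the task last ran on */
};

static bool feature_affine_wakeups = true;   /* stands in for AFFINE_WAKEUPS */
static bool feature_sync_wakeups   = true;   /* stands in for SYNC_WAKEUPS   */

static bool is_vcpu_thread(const struct task *t)
{
    return t->is_vcpu;
}

/* Pick the cpu a woken task should run on. */
static int select_wakeup_cpu(const struct task *waker, int waker_cpu,
                             const struct task *wakee)
{
    bool want_affine = feature_affine_wakeups || feature_sync_wakeups;

    /* Core of the idea: vcpu threads spin on guest locks, so packing them
     * next to their waker costs far more than any cache-sharing benefit. */
    if (is_vcpu_thread(waker) || is_vcpu_thread(wakee))
        want_affine = false;

    if (want_affine)
        return waker_cpu;         /* pull the wakee next to the waker */
    return wakee->prev_cpu;       /* leave it where it last ran       */
}

int main(void)
{
    struct task io_thread = { .is_vcpu = false, .prev_cpu = 0 };
    struct task vcpu1     = { .is_vcpu = true,  .prev_cpu = 3 };

    /* An io thread waking a vcpu leaves the vcpu on cpu 3 instead of
     * pulling it onto the waker's cpu 0. */
    return select_wakeup_cpu(&io_thread, 0, &vcpu1) == 3 ? 0 : 1;
}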


Re: VM performance issue in KVM guests.

2010-04-10 Thread Avi Kivity

(copying lkml and some scheduler folk)

On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:

Hi, all
   We are working on the scalability work for KVM guests, and found one big
issue in the Linux scheduler that may impact a guest's performance and
scalability a lot for some special workloads running in a VM.  In the current
Linux scheduler, there are some features to enhance application performance
which are defined in the file kvm.git/kernel/sched_features.h. Certainly, they
are mostly beneficial optimizations to improve the system's performance, but
unluckily, some of them may hurt a VM's performance and scalability in the KVM
case.
   We know that if two or more vcpus of one guest are scheduled to the same
logical processor, the same CPU utilization may generate less valid output, due
to mutual locking in the VM's OS, than when they are scheduled to different
logical processors. And we also know that a VM's vcpus are emulated or executed
through the threads of qemu for KVM.  If the vcpu threads of qemu are often
pulled to the same logical processor by some features of the Linux scheduler,
kvm guests' performance may be hurt a lot.  In our performance testing, the
results also show this performance bottleneck due to this issue.


What was the performance hit?  What was your I/O setup (image format, 
using aio?)



After analyzing the Linux scheduler, we found it is indeed caused by known
features of the Linux scheduler, such as AFFINE_WAKEUPS, SYNC_WAKEUPS etc. With
these features on, the Linux scheduler often tries to schedule the vcpu threads
of one guest to the same logical processor when vcpus are over-committed and the
logical processors are saturated. Once the vcpu threads of one VM are scheduled
to the same LP, system performance drops dramatically with some workloads (like
webbench running in a Windows OS).
   


Were the affine wakeups due to the kernel (emulated guest IPIs) or qemu?


To verify this finding, we also worked out a simple patch, attached to the
mail, to dynamically switch off the two scheduler features mentioned above when
the scheduler knows the tasks being scheduled are vcpu threads, and we found
the whole system's performance and scalability are improved a lot.  Certainly,
this patch is not good for upstream, but it can prompt us to think about how to
optimize the Linux scheduler, and we also want to initiate a discussion about
how to make Linux's scheduler more friendly to virtualization.  Besides, this
may not be only kvm's special issue; instead it should be a common issue for
host-based VMs, and we also expect that we can have an elegant solution to
thoroughly resolve the performance and scalability gap compared with
hypervisor-based VMs.
   


Most likely it also hits non-virtualized loads as well.  If the 
scheduler pulls two long-running threads to the same cpu, performance 
will take a hit.



--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Strange performance issue wite kvm and XP guests

2009-02-11 Thread Jernej Azarija
Hello,

I'd like to discuss an issue I'm having with KVM on a Windows XP guest.

The hosting system is an `x86_64 Intel(R) Core(TM)2 Duo CPU T7300 @
2.00GHz GenuineIntel' machine running the latest (stable) kernel
release and KVM version 83.

The respective modules (kvm, kvm_intel) are loaded and in use. The
problem in question is performance. While the system boots as fast as
in native mode (for that matter, the installation was fast too), it
doesn't work OK for general tasks. The system lags very much. The
mouse pointer, for example, works only for a few moments and then
freezes. So does the keyboard. Everything stops periodically for about
5-15 seconds, after which it works normally for 10-30 seconds. At
times, the mouse pointer freezes at some location, but I can still
move a copy of the mouse pointer. I've tried tweaking various
settings (from CPU to memory) with no success. The IRC guys on #KVM
then suggested posting the issue here.

I believe it's hard to understand what's going on from my poor
description, so if you have any other questions about what I should
try/check, please tell me!

Best regards,

Jernej.


Re: virtio performance issue

2008-09-17 Thread Mark McLoughlin
On Tue, 2008-09-16 at 22:24 +0300, Ben-Ami Yassour wrote:
 On Tue, 2008-09-16 at 09:16 -0500, Anthony Liguori wrote:
  Ben-Ami Yassour wrote:
   I am running virtio with the latest KVM code, and see a significant
   performance issue.
  
   Ping to the host (or any other close machine) reports a 4ms delay.
 
  
  What kvm version and what host kernel version?
  
  It's very easy to mistakenly compile qemu without GSO support too.  You 
  have to make sure that the 2.6.27 if_tun.h is being included by QEMU.
 
 Is there an option to control GSO support? How?

GSO support is unconditionally enabled with model=virtio if
kvm-userspace is built with the correct kernel headers, the host kernel
supports tun/tap's IFF_VNET_HDR extension and if the guest supports GSO.
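
One way to check the host-kernel part of that list is to ask the tun driver
whether it advertises IFF_VNET_HDR, using the stock TUNGETFEATURES ioctl; a
small standalone probe might look like this (not from this thread, just an
illustration):

/*
 * Probe whether the host tun driver advertises IFF_VNET_HDR, one of the
 * prerequisites for GSO with model=virtio listed above.  Uses only the
 * stock TUNGETFEATURES ioctl from <linux/if_tun.h>.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/if_tun.h>

int main(void)
{
    unsigned int features = 0;
    int fd = open("/dev/net/tun", O_RDWR);

    if (fd < 0) {
        perror("open /dev/net/tun");
        return 1;
    }
    if (ioctl(fd, TUNGETFEATURES, &features) < 0) {
        /* Kernels that predate TUNGETFEATURES certainly lack IFF_VNET_HDR. */
        perror("TUNGETFEATURES");
        close(fd);
        return 1;
    }
    close(fd);

    printf("tun features: 0x%x, IFF_VNET_HDR %ssupported\n",
           features, (features & IFF_VNET_HDR) ? "" : "not ");
    return (features & IFF_VNET_HDR) ? 0 : 1;
}

Note that this only covers the host kernel; it says nothing about whether
kvm-userspace was built against headers that know about the flag.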

 I am using the kernel and userspace that I pulled from the kvm tree
 today.
 
 Based on your comment, we checked and the build of the userspace does
 not take if_tun.h from the kernel tree, it takes it from the system
 include files.
 The reason was that the file was not copied as part of the userspace
 build.
 
 To fix this we made the following change:
 diff --git a/kernel/Makefile b/kernel/Makefile
 index 3f5f6da..b81b098 100644
 --- a/kernel/Makefile
 +++ b/kernel/Makefile
 @@ -53,7 +53,7 @@ T = $(subst -sync,,$@)-tmp
  header-sync:
 rm -rf $T
 rsync -R \
 -$(LINUX)/./include/linux/kvm*.h \
 +$(LINUX)/./include/linux/*.h \
  $(LINUX)/./include/asm-*/kvm*.h \

Ouch, looks like we need a fix like this alright - maybe just copy
if_tun.h and virtio*.h ?

 Even with this change and compiling the userspace with the correct
 if_tun.h the results are the same, ping takes 4ms.

GSO shouldn't affect ping latency - it should only affect throughput.

I'd expect ping latency to be in the range of .15ms and .3ms since we
delay our reply for .15ms currently.

Is this a regression? Have you tried bisecting it?

Cheers,
Mark.



Re: virtio performance issue

2008-09-17 Thread Ben-Ami Yassour
On Wed, 2008-09-17 at 11:49 +0100, Mark McLoughlin wrote:
 On Tue, 2008-09-16 at 22:24 +0300, Ben-Ami Yassour wrote:
  On Tue, 2008-09-16 at 09:16 -0500, Anthony Liguori wrote:
   Ben-Ami Yassour wrote:
I am running virtio with the latest KVM code, and see a significant
performance issue.
   
Ping to the host (or any other close machine) reports a 4ms delay.
  
   
   What kvm version and what host kernel version?
   
   It's very easy to mistakenly compile qemu without GSO support too.  You 
   have to make sure that the 2.6.27 if_tun.h is being included by QEMU.
  
  Is there an option to control GSO support? How?
 
 GSO support is unconditionally enabled with model=virtio if
 kvm-userspace is built with the correct kernel headers, the host kernel
 supports tun/tap's IFF_VNET_HDR extension and if the guest supports GSO.
How can we verify that GSO is actually used?

 GSO shouldn't affect ping latency - it should only affect throughput.
 
 I'd expect ping latency to be in the range of .15ms and .3ms since we
 delay our reply for .15ms currently.
 
 Is this a regression? Have you tried bisecting it?
 

We are not sure yet what it is. We see very high variability in I/O
rates, and have not yet found a combination of version, environment and
parameters that shows good performance reliably. The head of the tree
does show *bad* performance reliably.
We are trying kvm-73 now.

Thanks,
Ben






virtio performance issue

2008-09-16 Thread Ben-Ami Yassour
I am running virtio with the latest KVM code, and see a significant
performance issue.

Ping to the host (or any other close machine) reports a 4ms delay.

In the same setup with an e1000 emulation (just changing model=virtio to
model=e1000 in the KVM command line), ping reports 0.177ms delay.

BTW, initially I saw that the throughput when using netperf is very low,
even from guest to host, even though the CPU utilization is low.

What might be the problem?

Thanks,
Ben



Re: virtio performance issue

2008-09-16 Thread Anthony Liguori

Ben-Ami Yassour wrote:

I am running virtio with the latest KVM code, and see a significant
performance issue.

Ping to the host (or any other close machine) reports a 4ms delay.
  


What kvm version and what host kernel version?

It's very easy to mistakenly compile qemu without GSO support too.  You 
have to make sure that the 2.6.27 if_tun.h is being included by QEMU.


Regards,

Anthony Liguori


In the same setup with an e1000 emulation (just changing model=virtio to
model=e1000 in the KVM command line), ping reports 0.177ms delay.

BTW, initially I saw that the throughput when using netperf is very low,
even from guest to host, even though the CPU utilization is low.

What might be the problem?

Thanks,
Ben

  




Re: virtio performance issue

2008-09-16 Thread Bernhard Schmidt
Ben-Ami Yassour [EMAIL PROTECTED] wrote:

Hello Ben,

 I am running virtio with the latest KVM code, and see a significant
 performance issue.

 Ping to the host (or any other close machine) reports a 4ms delay.

 In the same setup with an e1000 emulation (just changing model=virtio to
 model=e1000 in the KVM command line), ping reports 0.177ms delay.

 BTW, initially I saw that the throughput when using netperf is very low,
 even from guest to host, even though the CPU utilization is low.

 What might be the problem?

I had exactly the same issue (with a 2.6.26 kernel and kvm-70, though).
In an attempt to make the guest kernel (w/o modules) as small as
possible I had disabled ACPI in its config. Which works, but introduced
the very same 4ms delay you are seeing and made the VM clock go wild
(even worse with KVM_CLOCK). I did not see other speed issues, but I
have to admit I never benchmarked it.

After enabling ACPI the 4ms delay baseline disappeared and the clock in
the guest is now perfectly in sync with the host.

Regards,
Bernhard



Re: virtio performance issue

2008-09-16 Thread Ben-Ami Yassour
On Tue, 2008-09-16 at 09:16 -0500, Anthony Liguori wrote:
 Ben-Ami Yassour wrote:
  I am running virtio with the latest KVM code, and see a significant
  performance issue.
 
  Ping to the host (or any other close machine) reports a 4ms delay.

 
 What kvm version and what host kernel version?
 
 It's very easy to mistakenly compile qemu without GSO support too.  You 
 have to make sure that the 2.6.27 if_tun.h is being included by QEMU.

Is there an option to control GSO support? How?

I am using the kernel and userspace that I pulled from the kvm tree
today.

Based on your comment, we checked and the build of the userspace does
not take if_tun.h from the kernel tree, it takes it from the system
include files.
The reason was that the file was not copied as part of the userspace
build.

To fix this we made the following change:
diff --git a/kernel/Makefile b/kernel/Makefile
index 3f5f6da..b81b098 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -53,7 +53,7 @@ T = $(subst -sync,,$@)-tmp
 header-sync:
rm -rf $T
rsync -R \
-$(LINUX)/./include/linux/kvm*.h \
+$(LINUX)/./include/linux/*.h \
 $(LINUX)/./include/asm-*/kvm*.h \

Even with this change and compiling the userspace with the correct
if_tun.h the results are the same, ping takes 4ms.

What could be the reason?

Thanks,
Ben

