Re: KVM-Guest i/o performance!

2012-03-29 Thread Stefan Hajnoczi
the host has 8 GB RAM (more page cache). It's best to eliminate page cache if you are trying to look at actual I/O performance rather than overall system performance. Please post your kvm command-line so we can see the guest's configuration in detail. If you are interested in unde
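As the reply above notes, host page cache can mask actual disk performance. A minimal sketch of a cache-aware write benchmark, assuming a Linux host (the file path and sizes are arbitrary placeholders, not from the thread):

```shell
# Drop clean page caches first so subsequent reads come from disk, not RAM.
# Needs root; skipped silently if sudo is unavailable.
sync
echo 3 | sudo -n tee /proc/sys/vm/drop_caches >/dev/null 2>&1 || true

# Write 64 MiB and force it to stable storage (conv=fdatasync) so the
# throughput dd reports reflects disk I/O rather than buffered writes.
dd if=/dev/zero of=/var/tmp/iotest bs=1M count=64 conv=fdatasync
```

`dd` prints the effective throughput on completion; remove `/var/tmp/iotest` afterwards.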

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 21.02.2012 17:48, Vadim Rozenfeld wrote: - Original Message - From: "Peter Lieven" To: "Vadim Rozenfeld" Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, "Gleb Natapov" Sent: Tuesday, February 21, 2012 4:10:22 PM Subject: Re: win7 bad i/o performance,

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Vadim Rozenfeld
- Original Message - From: "Peter Lieven" To: "Vadim Rozenfeld" Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, "Gleb Natapov" Sent: Tuesday, February 21, 2012 4:10:22 PM Subject: Re: win7 bad i/o performance, high insn_emulation and exits On 21.02.201

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 21.02.2012 14:56, Vadim Rozenfeld wrote: - Original Message - From: "Peter Lieven" To: "Gleb Natapov" Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, vroze...@redhat.com Sent: Tuesday, February 21, 2012 2:05:25 PM Subject: Re: win7 bad i/o performance, high insn_

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Vadim Rozenfeld
- Original Message - From: "Peter Lieven" To: "Gleb Natapov" Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, vroze...@redhat.com Sent: Tuesday, February 21, 2012 2:05:25 PM Subject: Re: win7 bad i/o performance, high insn_emulation and exits On 21.02.2012 12:46

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 21.02.2012 12:46, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote: On 21.02.2012 12:00, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote: On 21.02.2012 11:56, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 11:50:47AM +0100, Pete

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Gleb Natapov
On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote: > On 21.02.2012 12:00, Gleb Natapov wrote: > >On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote: > >>On 21.02.2012 11:56, Gleb Natapov wrote: > >>>On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote: > >I hope it

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 21.02.2012 12:00, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote: On 21.02.2012 11:56, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote: I hope it will make Windows use TSC instead, but you can't be sure about anything with W

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Gleb Natapov
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote: > On 21.02.2012 11:56, Gleb Natapov wrote: > >On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote: > >>>I hope it will make Windows use TSC instead, but you can't be sure > >>>about anything with Windows :( > >>Whatever it does

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 21.02.2012 11:56, Gleb Natapov wrote: On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote: I hope it will make Windows use TSC instead, but you can't be sure about anything with Windows :( Whatever it does now it eats more CPU, has an almost equal number of exits, and throughput is abou

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Gleb Natapov
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote: > >I hope it will make Windows use TSC instead, but you can't be sure > >about anything with Windows :( > Whatever it does now it eats more CPU, has almost equal > number of exits, and throughput is about the same (15MB/s). > If pmtimer i

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-21 Thread Peter Lieven
On 20.02.2012 21:45, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 08:59:38PM +0100, Peter Lieven wrote: On 20.02.2012 20:04, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: Hi, I came a across an i

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Gleb Natapov
On Mon, Feb 20, 2012 at 08:59:38PM +0100, Peter Lieven wrote: > On 20.02.2012 20:04, Gleb Natapov wrote: > >On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote: > >>On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: > >>>Hi, > >>> > >>>I came across an issue with a Windows 7 (

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Gleb Natapov
On Mon, Feb 20, 2012 at 08:15:15PM +0100, Peter Lieven wrote: > On 20.02.2012 19:40, Gleb Natapov wrote: > >On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: > >>Hi, > >> > >>I came across an issue with a Windows 7 (32-bit) as well as with a > >>Windows 2008 R2 (64-bit) guest. > >> >

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Peter Lieven
On 20.02.2012 20:04, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: Hi, I came across an issue with a Windows 7 (32-bit) as well as with a Windows 2008 R2 (64-bit) guest. If I transfer a file fr

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Peter Lieven
On 20.02.2012 19:40, Gleb Natapov wrote: On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: Hi, I came across an issue with a Windows 7 (32-bit) as well as with a Windows 2008 R2 (64-bit) guest. If I transfer a file from the VM via CIFS or FTP to a remote machine, I get very poor

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Gleb Natapov
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote: > On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: > > Hi, > > > > I came across an issue with a Windows 7 (32-bit) as well as with a > > Windows 2008 R2 (64-bit) guest. > > > > If I transfer a file from the VM via CIFS

Re: win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Gleb Natapov
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote: > Hi, > > I came across an issue with a Windows 7 (32-bit) as well as with a > Windows 2008 R2 (64-bit) guest. > > If I transfer a file from the VM via CIFS or FTP to a remote machine, > I get very poor read performance (around 13MB/

win7 bad i/o performance, high insn_emulation and exits

2012-02-20 Thread Peter Lieven
Hi, I came across an issue with a Windows 7 (32-bit) as well as with a Windows 2008 R2 (64-bit) guest. If I transfer a file from the VM via CIFS or FTP to a remote machine, I get very poor read performance (around 13MB/s). The VM peaks at 100% CPU and I see a lot of insn_emulations and all k
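The insn_emulation and exit counters discussed in this thread come from the kernel's kvm tracepoints. A hedged sketch of how one might gather them while reproducing the slow transfer (both commands need root and a running KVM guest):

```shell
# One-shot dump of per-event counters (VM exits, insn_emulation, ...).
kvm_stat -1

# Alternatively, count the kvm tracepoints system-wide for 10 seconds:
perf stat -e 'kvm:*' -a sleep 10
```

A high insn_emulation count relative to useful work, as Peter reports, usually points at the guest hammering an emulated device such as the ACPI PM timer.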

Re: I/O Performance Tips

2010-12-09 Thread Stefan Hajnoczi
On Thu, Dec 9, 2010 at 12:52 PM, Sebastian Nickel - Hetzner Online AG wrote: > here is the qemu command line we are using (or which libvirt generates): > > /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp > 1,sockets=1,cores=1,threads=1 -name vm-933 -uuid > 0d737610-e59b-012d-f453-32287f7402ab -

Re: I/O Performance Tips

2010-12-09 Thread Sebastian Nickel - Hetzner Online AG
Hello, > I agree with Robert, the QEMU command-line used to launch these guests > would be useful (on host: ps aux | grep kvm). Have you explicitly set > a -drive cache= mode (like "none", "writeback", or "writethrough")? Here is the qemu command line we are using (or which libvirt generates):
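For reference, the cache mode asked about above is selected per disk on the qemu command line. A hypothetical sketch (the image path and other options are placeholders, not the poster's actual configuration):

```shell
# cache=none bypasses the host page cache (O_DIRECT) -- a common choice
# for servers; writethrough is safer but slower; writeback is fastest
# but risks guest filesystem damage if the host crashes.
qemu-system-x86_64 -enable-kvm -m 512 \
  -drive file=/var/lib/images/vm-933.img,if=virtio,format=raw,cache=none
```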

Re: I/O Performance Tips

2010-12-09 Thread Stefan Hajnoczi
On Thu, Dec 9, 2010 at 8:10 AM, Sebastian Nickel - Hetzner Online AG wrote: > Hello, > we have got some issues with I/O in our kvm environment. We are using > kernel version 2.6.32 (Ubuntu 10.04 LTS) to virtualise our hosts and we > are using ksm, too. Recently we noticed that sometimes the guest

Re: I/O Performance Tips

2010-12-09 Thread RW
We don't use Ubuntu (we use Gentoo/KVM 0.12.5) but we've had a similar problem with kernels <2.6.32-r11 (this is Gentoo specific and means update -r11 and not release candidate 11) and with the first three releases of 2.6.34 especially when NFS was involved. We're currently running 2.6.32-r1

I/O Performance Tips

2010-12-09 Thread Sebastian Nickel - Hetzner Online AG
Hello, we have got some issues with I/O in our kvm environment. We are using kernel version 2.6.32 (Ubuntu 10.04 LTS) to virtualise our hosts and we are using ksm, too. Recently we noticed that sometimes the guest systems (mainly OpenSuse guest systems) suddenly have a read only filesystem. After s

Re: I/O performance of VirtIO

2009-10-26 Thread Avi Kivity
On 10/26/2009 10:12 AM, Jan Kiszka wrote: No. Dedicated I/O threads provide parallelism. All latency needs is to have SIGIO sent on all file descriptors (or rather, in qemu-kvm with irqchip, to have all file descriptors in the poll() call). Jan, does slirp add new connections to the select set

Re: I/O performance of VirtIO

2009-10-26 Thread Jan Kiszka
Avi Kivity wrote: > On 10/23/2009 12:06 AM, Alexander Graf wrote: >> >> On 22.10.2009 at 18:29, Avi Kivity wrote: >> >>> On 10/13/2009 08:35 AM, Jan Kiszka wrote: It can be particularly slow if you use in-kernel irqchips and the default NIC emulation (up to 10 times slower), some effect

Re: I/O performance of VirtIO

2009-10-24 Thread Avi Kivity
On 10/23/2009 12:06 AM, Alexander Graf wrote: On 22.10.2009 at 18:29, Avi Kivity wrote: On 10/13/2009 08:35 AM, Jan Kiszka wrote: It can be particularly slow if you use in-kernel irqchips and the default NIC emulation (up to 10 times slower), some effect I always wanted to understand on a r

Re: I/O performance of VirtIO

2009-10-22 Thread Alexander Graf
On 22.10.2009 at 18:29, Avi Kivity wrote: On 10/13/2009 08:35 AM, Jan Kiszka wrote: It can be particularly slow if you use in-kernel irqchips and the default NIC emulation (up to 10 times slower), some effect I always wanted to understand on a rainy day. So, when you actually want -net user,

Re: I/O performance of VirtIO

2009-10-22 Thread Avi Kivity
On 10/13/2009 08:35 AM, Jan Kiszka wrote: It can be particularly slow if you use in-kernel irqchips and the default NIC emulation (up to 10 times slower), some effect I always wanted to understand on a rainy day. So, when you actually want -net user, try -no-kvm-irqchip. This might be due t

Re: I/O performance of VirtIO

2009-10-13 Thread Jan Kiszka
Michael Tokarev wrote: > René Pfeiffer wrote: >> Hello! >> >> I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I >> noticed that the I/O performance of an unattended stock Debian Lenny >> install dropped somehow. The test machines ran with k

Re: I/O performance of VirtIO

2009-10-12 Thread René Pfeiffer
On Oct 13, 2009 at 0145 +0400, Michael Tokarev appeared and said: > René Pfeiffer wrote: > >Hello! > > > >I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I > >noticed that the I/O performance of an unattended stock Debian Lenny > >install d

Re: I/O performance of VirtIO

2009-10-12 Thread Michael Tokarev
René Pfeiffer wrote: Hello! I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I noticed that the I/O performance of an unattended stock Debian Lenny install dropped somehow. The test machines ran with kvm-88 and 2.6.30.x before. The difference is very noticeable (went from

I/O performance of VirtIO

2009-10-12 Thread René Pfeiffer
Hello! I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I noticed that the I/O performance of an unattended stock Debian Lenny install dropped somehow. The test machines ran with kvm-88 and 2.6.30.x before. The difference is very noticeable (went from about 5 minutes up to 15

RE: Network I/O performance

2009-05-20 Thread Fischer, Anna
> Subject: Re: Network I/O performance > > Fischer, Anna wrote: > >> Subject: Re: Network I/O performance > >> > >> Fischer, Anna wrote: > >> > >>> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I > use > >>

tun/tap and Vlans (was: Re: Network I/O performance)

2009-05-19 Thread Lukas Kolbe
Hi all, On a sidenote: > > I have also realized that when using the tun/tap configuration with > > a bridge, packets are replicated on all tap devices when QEMU writes > > packets to the tun interface. I guess this is a limitation of > > tun/tap as it does not know to which tap device the packet

Re: Network I/O performance

2009-05-18 Thread Avi Kivity
Herbert Xu wrote: Yes, there's a known issue with UDP, where we don't report congestion and the queues start dropping packets. There's a patch for tun queued for the next merge window; you'll need a 2.6.31 host for that IIRC (Herbert?) It should be in 2.6.30 in fact. However, this i

Re: Network I/O performance

2009-05-18 Thread Herbert Xu
On Mon, May 18, 2009 at 12:14:34AM +0300, Avi Kivity wrote: > >> No, I am measuring UDP throughput performance. I have now tried using a >> different NIC model, and the e1000 model seems to achieve slightly >> better performance (CPU goes up to 110% only though). I have also been >> running virt

Re: Network I/O performance

2009-05-17 Thread Avi Kivity
Fischer, Anna wrote: Subject: Re: Network I/O performance Fischer, Anna wrote: I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the tun/tap device model and the Linux bridge kernel module to connect my VM to the network. I have 2 10G Intel 82598 network devices

RE: Network I/O performance

2009-05-13 Thread Fischer, Anna
> Subject: Re: Network I/O performance > > Fischer, Anna wrote: > > I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use > the tun/tap device model and the Linux bridge kernel module to connect > my VM to the network. I have 2 10G Intel 82598 network devi

Re: Network I/O performance

2009-05-13 Thread Avi Kivity
Fischer, Anna wrote: I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the tun/tap device model and the Linux bridge kernel module to connect my VM to the network. I have 2 10G Intel 82598 network devices (with the ixgbe driver) attached to my machine and I want to do packet

Network I/O performance

2009-05-11 Thread Fischer, Anna
I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the tun/tap device model and the Linux bridge kernel module to connect my VM to the network. I have 2 10G Intel 82598 network devices (with the ixgbe driver) attached to my machine and I want to do packet routing in my VM (the
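For context, the tun/tap-plus-bridge setup Anna describes typically looks roughly like this, era-appropriate tools included (interface names are hypothetical; all commands need root):

```shell
# Create a bridge, enslave the physical NIC, and add a tap for the guest.
brctl addbr br0
brctl addif br0 eth1          # physical 10G NIC (ixgbe)
tunctl -t tap0 -u root        # persistent tap device for qemu/kvm
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
```

The guest is then started with a `-net tap,ifname=tap0` style option so its traffic flows through the bridge; note the later reply in this thread about the bridge flooding frames to all ports until it learns the guest's MAC address.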

Re: Poor Write I/O Performance on KVM-79

2009-01-05 Thread Thomas Mueller
> if you use a qcow2 image with kvm-79 you have to use the cache=writeback > parameter to get some speed. ok, not writeback but cache=writethrough ... -Thomas -- To unsubscribe from this list: send the line "unsubscribe kvm" in the body of a message to majord...@vger.kernel.org More majordomo inf

Re: Poor Write I/O Performance on KVM-79

2009-01-05 Thread Thomas Mueller
On Sat, 03 Jan 2009 22:03:20 -0800, Alexander Atticus wrote: > Hello! > > I have been experimenting with KVM and have been experiencing poor write > I/O performance. I'm not sure whether I'm doing something wrong or if > this is just the current state of things. >

Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Rodrigo Campos
On Sun, Jan 4, 2009 at 5:48 PM, Avi Kivity wrote: > Rodrigo Campos wrote: >>> >>> qcow2 will surely lead to miserable performance. raw files are better. >>> best is to use lvm. >>> >>> >> >> What do you mean with best is to use lvm ? >> You just say to use raw images on an lvm partition because

Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Avi Kivity
Rodrigo Campos wrote: qcow2 will surely lead to miserable performance. raw files are better. best is to use lvm. What do you mean with best is to use lvm ? You just say to use raw images on an lvm partition because you can easily resize it ? Or somehow images only use the used space of

Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Florent
Rodrigo, KVM can use Logical Volume as a field of bytes to store the back-end of the guest HDD. Doing this, the overhead of the partition format is avoided at host side and things should be faster. - Florent
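To illustrate Florent's point, using a logical volume directly as the guest disk skips both the image-format layer and the host filesystem. A sketch with hypothetical volume and guest names:

```shell
# Carve a raw block device out of an existing volume group...
lvcreate -L 10G -n guest1-disk vg0

# ...and hand it to kvm as a raw drive: no qcow2 or host-filesystem
# overhead; resizing and snapshots are handled by LVM tools instead.
kvm -m 1024 -drive file=/dev/vg0/guest1-disk,format=raw,cache=none
```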

Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Rodrigo Campos
On Sun, Jan 4, 2009 at 11:24 AM, Avi Kivity wrote: > Alexander Atticus wrote: >> >> Hello! >> >> I have been experimenting with KVM and have been experiencing poor write >> I/O >> performance. I'm not sure whether I'm doing something wrong

Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Avi Kivity
Alexander Atticus wrote: Hello! I have been experimenting with KVM and have been experiencing poor write I/O performance. I'm not sure whether I'm doing something wrong or if this is just the current state of things. While writing to the local array on the node running the guests I

Poor Write I/O Performance on KVM-79

2009-01-03 Thread Alexander Atticus
Hello! I have been experimenting with KVM and have been experiencing poor write I/O performance. I'm not sure whether I'm doing something wrong or if this is just the current state of things. While writing to the local array on the node running the guests I get about 200MB/s from

Re: I/O performance

2008-08-19 Thread xming
No one has any idea? On Sun, Aug 10, 2008 at 8:21 PM, xming <[EMAIL PROTECTED]> wrote: > Hi all, > > After I migrated my home server from xen to kvm, I noticed very > bad I/O performance, > tried many combinations (versions/scsi-ide-virtio) and got the best by using > kernel

I/O performance

2008-08-10 Thread xming
Hi all, After I migrated my home server from xen to kvm, I noticed very bad I/O performance; tried many combinations (versions/scsi-ide-virtio) and got the best results by using kernel mod. 70 with userspace 69 and virtio. Now I have upgraded to 2.6.26.2 and kvm-72, and the performance has dropped again. SO