the host has 8 GB RAM (more page
cache). It's best to eliminate the page cache if you are trying to look at
actual I/O performance rather than overall system performance.
Please post your kvm command-line so we can see the guest's
configuration in detail.
If you are interested in unde
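On the host, the page-cache elimination suggested above is usually done in one of two ways; a sketch (the image path is a placeholder, and dropping caches requires root):

```shell
# Flush dirty pages, then drop the clean page cache (Linux, root required):
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

# Or bypass the cache for a single read with O_DIRECT,
# if the filesystem supports it:
dd if=/var/lib/images/guest.img of=/dev/null bs=1M iflag=direct
```

Repeating a benchmark without one of these steps mostly measures memory bandwidth, not the underlying storage.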
On 21.02.2012 17:48, Vadim Rozenfeld wrote:
- Original Message -
From: "Peter Lieven"
To: "Vadim Rozenfeld"
Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, "Gleb Natapov"
Sent: Tuesday, February 21, 2012 4:10:22 PM
Subject: Re: win7 bad i/o performance, high insn_emulation and exists
On 21.02.201
On 21.02.2012 14:56, Vadim Rozenfeld wrote:
- Original Message -
From: "Peter Lieven"
To: "Gleb Natapov"
Cc: qemu-de...@nongnu.org, kvm@vger.kernel.org, vroze...@redhat.com
Sent: Tuesday, February 21, 2012 2:05:25 PM
Subject: Re: win7 bad i/o performance, high insn_emulation and exists
On 21.02.2012 12:46
On 21.02.2012 12:46, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote:
On 21.02.2012 12:00, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Pete
On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote:
> On 21.02.2012 12:00, Gleb Natapov wrote:
> >On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
> >>On 21.02.2012 11:56, Gleb Natapov wrote:
> >>>On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
> >I hope it
On 21.02.2012 12:00, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
I hope it will make Windows use TSC instead, but you can't be sure
about anything with W
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
> On 21.02.2012 11:56, Gleb Natapov wrote:
> >On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
> >>>I hope it will make Windows use TSC instead, but you can't be sure
> >>>about anything with Windows :(
> >>Whatever it does
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
I hope it will make Windows use TSC instead, but you can't be sure
about anything with Windows :(
Whatever it does now, it eats more CPU, has an almost equal
number of exits, and throughput is about the same (15MB/s).
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
> >I hope it will make Windows use TSC instead, but you can't be sure
> >about anything with Windows :(
> Whatever it does now it eates more CPU has almost equal
> number of exits and throughput is about the same (15MB/s).
> If pmtimer i
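As an aside on steering the guest's clock source (not something verified in this thread): on Windows 7 / Server 2008 R2 the preference can be influenced from inside the guest with bcdedit; whether Windows then actually uses the TSC still varies by version.

```shell
rem Run in an elevated command prompt inside the Windows guest, then reboot.
rem Setting useplatformclock to false lets Windows prefer the TSC over the
rem (emulated, exit-heavy) platform timer; "true" forces the platform clock.
bcdedit /set useplatformclock false
```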
On 20.02.2012 21:45, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 08:59:38PM +0100, Peter Lieven wrote:
On 20.02.2012 20:04, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an i
On Mon, Feb 20, 2012 at 08:59:38PM +0100, Peter Lieven wrote:
> On 20.02.2012 20:04, Gleb Natapov wrote:
> >On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
> >>On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
> >>>Hi,
> >>>
> >>>I came across an issue with a Windows 7 (
On Mon, Feb 20, 2012 at 08:15:15PM +0100, Peter Lieven wrote:
> On 20.02.2012 19:40, Gleb Natapov wrote:
> >On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
> >>Hi,
> >>
> >>I came across an issue with a Windows 7 (32-bit) as well as with a
> >>Windows 2008 R2 (64-bit) guest.
> >>
>
On 20.02.2012 20:04, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file fr
On 20.02.2012 19:40, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file from the VM via CIFS or FTP to a remote machine,
I get very poor
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
> On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
> > Hi,
> >
> > I came across an issue with a Windows 7 (32-bit) as well as with a
> > Windows 2008 R2 (64-bit) guest.
> >
> > If I transfer a file from the VM via CIFS
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
> Hi,
>
> I came across an issue with a Windows 7 (32-bit) as well as with a
> Windows 2008 R2 (64-bit) guest.
>
> If I transfer a file from the VM via CIFS or FTP to a remote machine,
> I get very poor read performance (around 13MB/
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file from the VM via CIFS or FTP to a remote machine,
I get very poor read performance (around 13MB/s). The VM peaks at 100%
CPU and I see a lot of insn_emulations and all k
On Thu, Dec 9, 2010 at 12:52 PM, Sebastian Nickel - Hetzner Online AG
wrote:
> here is the qemu command line we are using (or which libvirt generates):
>
> /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp
> 1,sockets=1,cores=1,threads=1 -name vm-933 -uuid
> 0d737610-e59b-012d-f453-32287f7402ab -
Hello,
> I agree with Robert, the QEMU command-line used to launch these guests
> would be useful (on host: ps aux | grep kvm). Have you explicitly set
> a -drive cache= mode (like "none", "writeback", or "writethrough")?
here is the qemu command line we are using (or which libvirt generates):
On Thu, Dec 9, 2010 at 8:10 AM, Sebastian Nickel - Hetzner Online AG
wrote:
> Hello,
> we have got some issues with I/O in our kvm environment. We are using
> kernel version 2.6.32 (Ubuntu 10.04 LTS) to virtualise our hosts and we
> are using ksm, too. Recently we noticed that sometimes the guest
We don't use Ubuntu (we use Gentoo/KVM 0.12.5) but we've had a
similar problem with kernels <2.6.32-r11 (this is Gentoo specific and
means update -r11 and not release candidate 11) and with the first three
releases of 2.6.34 especially when NFS was involved. We're currently
running 2.6.32-r1
Hello,
we have got some issues with I/O in our kvm environment. We are using
kernel version 2.6.32 (Ubuntu 10.04 LTS) to virtualise our hosts and we
are using ksm, too. Recently we noticed that sometimes the guest systems
(mainly OpenSuse guest systems) suddenly have a read only filesystem.
After s
On 10/26/2009 10:12 AM, Jan Kiszka wrote:
No. Dedicated I/O threads provide parallelism. All latency needs is to
have SIGIO sent on all file descriptors (or rather, in qemu-kvm with
irqchip, to have all file descriptors in the poll() call).
Jan, does slirp add new connections to the select set
Avi Kivity wrote:
> On 10/23/2009 12:06 AM, Alexander Graf wrote:
>>
>> Am 22.10.2009 um 18:29 schrieb Avi Kivity :
>>
>>> On 10/13/2009 08:35 AM, Jan Kiszka wrote:
It can be particularly slow if you use in-kernel irqchips and the
default NIC emulation (up to 10 times slower), some effect
On 10/23/2009 12:06 AM, Alexander Graf wrote:
Am 22.10.2009 um 18:29 schrieb Avi Kivity :
On 10/13/2009 08:35 AM, Jan Kiszka wrote:
It can be particularly slow if you use in-kernel irqchips and the
default NIC emulation (up to 10 times slower), some effect I always
wanted to understand on a r
Am 22.10.2009 um 18:29 schrieb Avi Kivity :
On 10/13/2009 08:35 AM, Jan Kiszka wrote:
It can be particularly slow if you use in-kernel irqchips and the
default NIC emulation (up to 10 times slower), some effect I always
wanted to understand on a rainy day. So, when you actually want -net
user,
On 10/13/2009 08:35 AM, Jan Kiszka wrote:
It can be particularly slow if you use in-kernel irqchips and the
default NIC emulation (up to 10 times slower), some effect I always
wanted to understand on a rainy day. So, when you actually want -net
user, try -no-kvm-irqchip.
This might be due t
Michael Tokarev wrote:
> René Pfeiffer wrote:
>> Hello!
>>
>> I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I
>> noticed that the I/O performance of an unattended stock Debian Lenny
>> install dropped somehow. The test machines ran with k
On Oct 13, 2009 at 0145 +0400, Michael Tokarev appeared and said:
> René Pfeiffer wrote:
> >Hello!
> >
> >I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I
> >noticed that the I/O performance of an unattended stock Debian Lenny
> >install d
René Pfeiffer wrote:
Hello!
I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I
noticed that the I/O performance of an unattended stock Debian Lenny
install dropped somehow. The test machines ran with kvm-88 and 2.6.30.x
before. The difference is very noticeable (went from
Hello!
I just tested qemu-kvm-0.11.0 with the KVM module of kernel 2.6.31.1. I
noticed that the I/O performance of an unattended stock Debian Lenny
install dropped somehow. The test machines ran with kvm-88 and 2.6.30.x
before. The difference is very noticeable (went from about 5 minutes up
to 15
> Subject: Re: Network I/O performance
>
> Fischer, Anna wrote:
> >> Subject: Re: Network I/O performance
> >>
> >> Fischer, Anna wrote:
> >>
> >>> I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
> >>
Hi all,
On a sidenote:
> > I have also realized that when using the tun/tap configuration with
> > a bridge, packets are replicated on all tap devices when QEMU writes
> > packets to the tun interface. I guess this is a limitation of
> > tun/tap as it does not know to which tap device the packet
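That is standard learning-bridge behaviour: frames for a not-yet-learned destination MAC are flooded to every port, and the replication stops once the bridge has seen that MAC as a source. The learned-address table can be inspected on the host; the bridge and tap names below are placeholders.

```shell
# brctl was the usual bridge tooling of that era (root required).
brctl addbr br0          # create the bridge
brctl addif br0 tap0     # attach the guest's tap device
brctl showmacs br0       # frames are flooded until the MAC appears here
```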
Herbert Xu wrote:
Yes, there's a known issue with UDP, where we don't report congestion
and the queues start dropping packets. There's a patch for tun queued
for the next merge window; you'll need a 2.6.31 host for that IIRC
(Herbert?)
It should be in 2.6.30 in fact. However, this i
On Mon, May 18, 2009 at 12:14:34AM +0300, Avi Kivity wrote:
>
>> No, I am measuring UDP throughput performance. I have now tried using a
>> different NIC model, and the e1000 model seems to achieve slightly
>> better performance (CPU goes up to 110% only though). I have also been
>> running virt
Fischer, Anna wrote:
Subject: Re: Network I/O performance
Fischer, Anna wrote:
I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
the tun/tap device model and the Linux bridge kernel module to connect
my VM to the network. I have 2 10G Intel 82598 network devices
> Subject: Re: Network I/O performance
>
> Fischer, Anna wrote:
> > I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use
> the tun/tap device model and the Linux bridge kernel module to connect
> my VM to the network. I have 2 10G Intel 82598 network devi
Fischer, Anna wrote:
I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the
tun/tap device model and the Linux bridge kernel module to connect my VM to the
network. I have 2 10G Intel 82598 network devices (with the ixgbe driver)
attached to my machine and I want to do packet
I am running KVM with Fedora Core 8 on a 2.6.23 32-bit kernel. I use the
tun/tap device model and the Linux bridge kernel module to connect my VM to the
network. I have 2 10G Intel 82598 network devices (with the ixgbe driver)
attached to my machine and I want to do packet routing in my VM (the
>>
> if you use a qcow2 image with kvm-79 you have to use the cache=writeback
> parameter to get some speed.
OK, not writeback but cache=writethrough ...
-Thomas
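Cache behaviour is chosen per -drive; a hypothetical invocation for illustration (binary name and image path are placeholders, not taken from this thread):

```shell
# cache=writethrough: reads use the host page cache, every write is flushed
# cache=writeback:    fastest, but unsafe on host power loss
# cache=none:         O_DIRECT, bypasses the host page cache entirely
kvm -m 1024 -drive file=/var/lib/images/guest.qcow2,if=virtio,cache=writethrough
```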
--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo inf
On Sat, 03 Jan 2009 22:03:20 -0800, Alexander Atticus wrote:
> Hello!
>
> I have been experimenting with KVM and have been experiencing poor write
> I/O performance. I'm not sure whether I'm doing something wrong or if
> this is just the current state of things.
>
On Sun, Jan 4, 2009 at 5:48 PM, Avi Kivity wrote:
> Rodrigo Campos wrote:
>>>
>>> qcow2 will surely lead to miserable performance. raw files are better.
>>> best is to use lvm.
>>>
>>>
>>
>> What do you mean by "best is to use lvm"?
>> Do you just mean using raw images on an LVM partition because
Rodrigo Campos wrote:
qcow2 will surely lead to miserable performance. raw files are better.
best is to use lvm.
What do you mean by "best is to use lvm"?
Do you just mean using raw images on an LVM partition because you can
easily resize it? Or do images somehow only use the used space of
Rodrigo,
KVM can use a logical volume as a flat array of bytes backing the
guest HDD. This avoids the image-format overhead on the host side,
so things should be faster.
- Florent
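A sketch of that setup, with an invented volume group (vg0) and volume name:

```shell
# Carve a logical volume out of an existing VG and hand it to the guest
# as a raw block device; no image-format overhead on the host side.
lvcreate -L 20G -n vm-disk0 vg0
kvm -m 1024 -drive file=/dev/vg0/vm-disk0,if=virtio,format=raw,cache=none
```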
On Sun, Jan 4, 2009 at 11:24 AM, Avi Kivity wrote:
> Alexander Atticus wrote:
>>
>> Hello!
>>
>> I have been experimenting with KVM and have been experiencing poor write
>> I/O
>> performance. I'm not sure whether I'm doing something wrong
Alexander Atticus wrote:
Hello!
I have been experimenting with KVM and have been experiencing poor write I/O
performance. I'm not sure whether I'm doing something wrong or if this is
just the current state of things.
While writing to the local array on the node running the guests I
Hello!
I have been experimenting with KVM and have been experiencing poor write I/O
performance. I'm not sure whether I'm doing something wrong or if this is
just the current state of things.
While writing to the local array on the node running the guests I get about
200MB/s from
No one has any idea?
On Sun, Aug 10, 2008 at 8:21 PM, xming <[EMAIL PROTECTED]> wrote:
> Hi all,
>
> After I migrated my home server from Xen to KVM, I noticed very
> bad I/O performance,
> tried many combinations of versions/scsi-ide-virtio; I got the best by using
> kernel
Hi all,
After I migrated my home server from Xen to KVM, I noticed very
bad I/O performance. I tried many combinations of
versions/scsi-ide-virtio and got the best results using the kernel
module from kvm-70 with kvm-69 userspace and virtio.
Now I have upgraded to 2.6.26.2 and kvm-72, and the performance has
dropped again. SO