Re: Poor Write I/O Performance on KVM-79

2009-01-05 Thread Thomas Mueller
On Sat, 03 Jan 2009 22:03:20 -0800, Alexander Atticus wrote:

 Hello!
 
 I have been experimenting with KVM and have been experiencing poor write
 I/O performance.  I'm not sure whether I'm doing something wrong or if
 this is just the current state of things.
 
 While writing to the local array on the node running the guests I get
 about 200MB/s from dd (bs=1M count=1000) or about 90MB/s write
 performance from iozone (sequential) when I write to a 2G file with a
 16M record length.  The node is an 8 disk system using 3ware in a RAID50
 configuration.  It has 8GB of RAM.
 
 The guests get much slower disk access. The guests are using file based
 backends (tried both qcow2 and raw) with virtio support.  With no other
 activity on the machine, I get about 6 to 7MB/s write performance from
 iozone with the same test. Guests are running Debian lenny/sid with
 2.6.26-1-686.
 
 ...
 
 KVM command to launch guest:
 
 # /usr/bin/kvm -S -M pc -m 1024 -smp 1 -name demo4 \
 -uuid 5b474147-f581-9a21-ac7d-cdd0ce881c5c -monitor pty -boot c \
 -drive file=/iso/debian-testing-i386-netinst.iso,if=ide,media=cdrom,index=2 \
 -drive file=/srv/demo/demo4.img,if=virtio,index=0,boot=on -net \
 nic,macaddr=00:16:16:64:e6:de,vlan=0 -net \
 tap,fd=15,script=,vlan=0,ifname=vnet1 -serial pty -parallel none \
 -usb -vnc 0.0.0.0:1 -k en-us


if you use a qcow2 image with kvm-79 you have to use the cache=writeback 
parameter to get decent speed.
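
for example, the virtio drive line from the command quoted above would then 
read (untested sketch):

-drive file=/srv/demo/demo4.img,if=virtio,index=0,boot=on,cache=writeback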

see also this post from Anthony Liguori:
http://thread.gmane.org/gmane.comp.emulators.kvm.devel/24765/focus=24811

- Thomas



Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Avi Kivity

Alexander Atticus wrote:

 Hello!

 I have been experimenting with KVM and have been experiencing poor write
 I/O performance.  I'm not sure whether I'm doing something wrong or if
 this is just the current state of things.

 While writing to the local array on the node running the guests I get
 about 200MB/s from dd (bs=1M count=1000) or about 90MB/s write performance
 from iozone (sequential) when I write to a 2G file with a 16M record
 length.  The node is an 8 disk system using 3ware in a RAID50
 configuration.  It has 8GB of RAM.

 The guests get much slower disk access. The guests are using file based
 backends (tried both qcow2 and raw) with virtio support.  With no other
 activity on the machine, I get about 6 to 7MB/s write performance from
 iozone with the same test. Guests are running Debian lenny/sid with
 2.6.26-1-686.


qcow2 will surely lead to miserable performance.  raw files are better.  
best is to use lvm.
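
for example (volume group name and size are illustrative):

# lvcreate -L 20G -n demo4 vg0
# kvm ... -drive file=/dev/vg0/demo4,if=virtio,index=0,boot=on ...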



 I don't know whether this is because of context switching or what.  Again,
 I'm wondering how I can improve this performance or if there is something
 I am doing wrong.  As a side note, I have also noticed some weirdness with
 qcow2 files: some Windows installations freeze, and I have seen disk
 corruption when running iozone on Linux guests.  All problems go away when
 I switch to raw image files, though.

 I realize I take a hit by running file-based backends, and that the tests
 aren't altogether accurate because with 8GB of RAM I'm not saturating the
 cache, but the numbers are still disparate enough to concern me.

 Finally, does anyone know if KVM now fully supports SCSI pass-through in
 KVM-79?  Does this mean that I would vastly reduce context switching by
 using an LVM backend device for guests, or am I misunderstanding the
 benefits of pass-through?


SCSI is much improved in kvm-82, though it still needs a lot more 
testing.  SCSI pass-through is almost completely untested.


What is the kernel version in the guest? IIRC there were some serious 
limitations on the request size that were recently removed.
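
You can check the current cap from inside the guest; assuming the virtio 
disk shows up as vda:

# cat /sys/block/vda/queue/max_sectors_kb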



--
error compiling committee.c: too many arguments to function



Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Rodrigo Campos
On Sun, Jan 4, 2009 at 11:24 AM, Avi Kivity a...@redhat.com wrote:
 Alexander Atticus wrote:

 ...


 qcow2 will surely lead to miserable performance.  raw files are better.
  best is to use lvm.


What do you mean by "best is to use lvm"?
Are you just saying to use raw images on an LVM volume because you can
easily resize it?  Or does a raw image somehow only occupy the space
actually used when it is on LVM?  Or is there some trick to make it work?





Thanks a lot,

Rodrigo


Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Florent

Rodrigo,

KVM can use a logical volume directly as a flat array of bytes backing the 
guest's hard disk.  Done this way, the overhead of the host-side filesystem 
is avoided, and things should be faster.


- Florent



Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Avi Kivity

Rodrigo Campos wrote:

  qcow2 will surely lead to miserable performance.  raw files are better.
  best is to use lvm.

 What do you mean by "best is to use lvm"?
 Are you just saying to use raw images on an LVM volume because you can
 easily resize it?  Or does a raw image somehow only occupy the space
 actually used when it is on LVM?  Or is there some trick to make it work?


Using lvm directly (-drive file=/dev/vg/volume) is both the most efficient 
and the most reliable option, as there are only a small number of layers 
involved.  However, you need to commit space in advance (you can grow your 
volume, but that takes guest involvement and cannot be done online at the 
moment).
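
For example, growing a volume offline (size illustrative): shut the guest 
down, run

# lvextend -L +10G /dev/vg/volume

then restart the guest and enlarge its partition and filesystem (e.g. with 
resize2fs).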


Using a raw file on top of a filesystem will be slower, since the host 
filesystem is exercised on every access and the file can fragment.  Sparse 
raw files only occupy storage as it is actually used, but they are more 
difficult to manage than qcow2 files.
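
The sparseness is easy to see with standard tools (path illustrative):

# ls -lh /srv/demo/demo4.img    (apparent size)
# du -h /srv/demo/demo4.img     (blocks actually allocated)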


Qcow2 files are the most flexible, but the slowest.
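
For example, qcow2 can use a backing file, so many guests can share a 
common base image (file names illustrative):

# qemu-img create -f qcow2 -b base.img overlay.img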

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: Poor Write I/O Performance on KVM-79

2009-01-04 Thread Rodrigo Campos
On Sun, Jan 4, 2009 at 5:48 PM, Avi Kivity a...@redhat.com wrote:
 ...

Ahhh. Thanks a lot for the explanation :)


Rodrigo


Poor Write I/O Performance on KVM-79

2009-01-03 Thread Alexander Atticus
Hello!

I have been experimenting with KVM and have been experiencing poor write I/O
performance.  I'm not sure whether I'm doing something wrong or if this is
just the current state of things.

While writing to the local array on the node running the guests I get about
200MB/s from dd (bs=1M count=1000) or about 90MB/s write performance from
iozone (sequential) when I write to a 2G file with a 16M record length.  The
node is an 8 disk system using 3ware in a RAID50 configuration.  It has 8GB
of RAM.

The guests get much slower disk access. The guests are using file based
backends (tried both qcow2 and raw) with virtio support.  With no other
activity on the machine, I get about 6 to 7MB/s write performance from
iozone with the same test. Guests are running Debian lenny/sid with
2.6.26-1-686.

I don't know whether this is because of context switching or what.  Again,
I'm wondering how I can improve this performance or if there is something
I am doing wrong.  As a side note, I have also noticed some weirdness with
qcow2 files: some Windows installations freeze, and I have seen disk
corruption when running iozone on Linux guests.  All problems go away when
I switch to raw image files, though.

I realize I take a hit by running file-based backends, and that the tests
aren't altogether accurate because with 8GB of RAM I'm not saturating the
cache, but the numbers are still disparate enough to concern me.

Finally, does anyone know if KVM now fully supports SCSI pass-through in
KVM-79?  Does this mean that I would vastly reduce context switching by
using an LVM backend device for guests, or am I misunderstanding the
benefits of pass-through?

This is the iozone command:

# iozone -a -r 16M -g 2g -n 16g -m 2000

KVM command to launch guest:

# /usr/bin/kvm -S -M pc -m 1024 -smp 1 -name demo4 \
-uuid 5b474147-f581-9a21-ac7d-cdd0ce881c5c -monitor pty -boot c \
-drive file=/iso/debian-testing-i386-netinst.iso,if=ide,media=cdrom,index=2 \
-drive file=/srv/demo/demo4.img,if=virtio,index=0,boot=on -net \
nic,macaddr=00:16:16:64:e6:de,vlan=0 -net \
tap,fd=15,script=,vlan=0,ifname=vnet1 -serial pty -parallel none \
-usb -vnc 0.0.0.0:1 -k en-us

KVM/QEMU versions:

ii  kvm  79+dfsg-3
ii  qemu 0.9.1-8

Node Kernel:

2.6.26-1-amd64

Thanks,

-Alexander