5x slower guest disk performance with virtio disk

2011-12-15 Thread Brian J. Murrell
I have a CentOS 6 host system running a CentOS 6 KVM guest and the
guest seems to get about 5x slower disk throughput than the host:

host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 150.36 s, 349 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=50000
50000+0 records in
50000+0 records out
52428800000 bytes (52 GB) copied, 731.007 s, 71.7 MB/s

This is actually a vast improvement over earlier tests, where the
difference was a factor of 40x.  Even so, I thought virtio disks were
supposed to get much closer to native host speed than 5x off.

/dev/datavol/disk1 is a striped LV in an LVM volume group consisting of 5 disks:

$ sudo pvs
  PV         VG      Fmt  Attr PSize   PFree
  /dev/sda3          lvm2 a-   420.16g 420.16g
  /dev/sdb   datavol lvm2 a-   465.76g 227.66g
  /dev/sdc   datavol lvm2 a-   465.76g 245.76g
  /dev/sdd   datavol lvm2 a-   465.76g 245.76g
  /dev/sde   datavol lvm2 a-   465.76g 245.76g
  /dev/sdf   datavol lvm2 a-   465.76g 245.76g
$ sudo lvdisplay --maps /dev/datavol/disk1
  --- Logical volume ---
  LV Name                /dev/datavol/disk1
  VG Name                datavol
  LV UUID                3gD1N3-ybAU-GzUO-8QBV-b6op-lsK9-GMNm3w
  LV Write Access        read/write
  LV Status              available
  # open                 1
  LV Size                500.00 GiB
  Current LE             128000
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Segments ---
  Logical extent 0 to 127999:
    Type                striped
    Stripes             5
    Stripe size         4.00 KiB
    Stripe 0:
      Physical volume   /dev/sdb
      Physical extents  5120 to 30719
    Stripe 1:
      Physical volume   /dev/sdc
      Physical extents  5120 to 30719
    Stripe 2:
      Physical volume   /dev/sdd
      Physical extents  5120 to 30719
    Stripe 3:
      Physical volume   /dev/sde
      Physical extents  5120 to 30719
    Stripe 4:
      Physical volume   /dev/sdf
      Physical extents  5120 to 30719
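
(The thread never shows how this LV was created; for reference, a
5-way striped volume with the geometry shown above would come from
something along these lines, with names and sizes taken from the
lvdisplay output:

$ sudo lvcreate --stripes 5 --stripesize 4k --size 500G --name disk1 datavol

With a 4 KiB stripe size, each 1 MiB dd write is spread across all
five spindles.)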

The KVM command line is as follows:

$ tr '\0' '\n' < /proc/9409/cmdline
/usr/libexec/qemu-kvm
-S
-M
rhel6.0.0
-enable-kvm
-m
1024
-smp
1,sockets=1,cores=1,threads=1
-name
guest
-uuid
e2bf97e0-cfb7-444c-abc3-9efe262d8efe
-nodefconfig
-nodefaults
-chardev
socket,id=monitor,path=/var/lib/libvirt/qemu/guest.monitor,server,nowait
-mon
chardev=monitor,mode=control
-rtc
base=utc
-boot
c
-drive
file=/var/lib/libvirt/images/cdrom.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
-device
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
-drive
file=/var/lib/libvirt/images/guest.qcow2,if=none,id=drive-virtio-disk0,boot=on,format=qcow2
-device
virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
-drive
file=/dev/datavol/disk1,if=none,id=drive-virtio-disk1,format=raw
-device
virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1
-netdev
tap,fd=23,id=hostnet0
-device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ec:8d:47,bus=pci.0,addr=0x3
-chardev
pty,id=serial0
-device
isa-serial,chardev=serial0
-usb
-vnc
127.0.0.1:2
-vga
cirrus
-device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5

That is started from virt-manager given the following configuration:

<domain type='kvm' id='18'>
  <name>guest</name>
  <uuid>e2bf97e0-cfb7-444c-abc3-9efe262d8efe</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='rhel6.0.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/cdrom.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' unit='0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/datavol/disk1'/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <controller type='ide' index='0'>
      <alias name='ide0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:ec:8d:47'/>
      <source network='default'/>

Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Stefan Pietsch
* Brian J. Murrell <br...@interlinx.bc.ca> [2011-12-15 15:28]:
> I have a CentOS 6 host system running a CentOS 6 KVM guest and the
> guest seems to get about 5x slower disk throughput than the host:
>
> host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=50000
> 50000+0 records in
> 50000+0 records out
> 52428800000 bytes (52 GB) copied, 150.36 s, 349 MB/s
> host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=50000
> 50000+0 records in
> 50000+0 records out
> 52428800000 bytes (52 GB) copied, 731.007 s, 71.7 MB/s
>
> This is actually a vast improvement over earlier tests, where the
> difference was a factor of 40x.  Even so, I thought virtio disks were
> supposed to get much closer to native host speed than 5x off.

--- snip ---


Did you try to set the cache of the virtio disk to none?


Regards,
Stefan
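
(For reference: in libvirt of this era the cache mode is an attribute
on the disk's <driver> element, and on the qemu command line it is a
cache= suboption of -drive.  A minimal sketch, using the device names
from this thread:

  <driver name='qemu' type='raw' cache='none'/>

or, directly in qemu:

  -drive file=/dev/datavol/disk1,if=none,id=drive-virtio-disk1,format=raw,cache=none

cache=none opens the backing device with O_DIRECT, bypassing the host
page cache.)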


Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Brian J. Murrell
On 11-12-15 10:47 AM, Stefan Pietsch wrote:
>
> Did you try to set the cache of the virtio disk to none?

I didn't.  It was set at default in virt-manager and I suppose I just
assumed that default would be reasonable.

Changing to none has had a good effect indeed:

host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 49.9241 s, 210 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 54.7737 s, 191 MB/s
host $ ssh guest dd of=/dev/vdb if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 50.9851 s, 206 MB/s
host $ sudo dd of=/dev/datavol/disk1 if=/dev/zero bs=1M oflag=direct count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 31.0545 s, 338 MB/s

So, about 2/3 of host speed now -- which is much better.  Is 2/3 about
normal or should I be looking for more?

Thanks much!

b.





Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Sasha Levin
On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
> So, about 2/3 of host speed now -- which is much better.  Is 2/3 about
> normal or should I be looking for more?

aio=native

That's the qemu setting; I'm not sure where libvirt hides that.

-- 

Sasha.
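
(For reference, aio=native is a -drive suboption, normally combined
with cache=none since the Linux-native AIO path wants O_DIRECT; a
minimal sketch against the device from this thread:

  -drive file=/dev/datavol/disk1,if=none,id=drive-virtio-disk1,format=raw,cache=none,aio=native)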



Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Daniel P. Berrange
On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:
> On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
>> So, about 2/3 of host speed now -- which is much better.  Is 2/3 about
>> normal or should I be looking for more?
>
> aio=native
>
> That's the qemu setting; I'm not sure where libvirt hides that.

  <disk ...>
    <driver io='threads|native'/>
    ...
  </disk>

Regards,
Daniel
-- 
|: http://berrange.com  -o-  http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|


Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Brian J. Murrell
On 11-12-15 12:27 PM, Daniel P. Berrange wrote:
> On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:
>> On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:
>>> So, about 2/3 of host speed now -- which is much better.  Is 2/3 about
>>> normal or should I be looking for more?
>>
>> aio=native
>>
>> That's the qemu setting; I'm not sure where libvirt hides that.
>
>   <disk ...>
>     <driver io='threads|native'/>
>     ...
>   </disk>

When I try to "virsh edit" and add that <driver io='...'/> it seems to
get stripped out of the config (as observed with virsh dumpxml).
Doing some googling I discovered an alternate syntax (in message
https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html):

<driver name='qemu' type='raw' cache='none' io='native'/>

But the io='native' seems to get stripped out of that too.
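
(One quick way to check whether the running libvirt accepted the
attribute is to round-trip the XML and see what survived, e.g.:

  $ virsh dumpxml guest | grep -i driver)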

FWIW, I have:

qemu-kvm-0.12.1.2-2.113.el6.x86_64
libvirt-client-0.8.1-27.el6.x86_64

installed here on CentOS 6.0.  Maybe this aio= is not supported in the
above package(s)?

Cheers,
b.





Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Simon Wilson

- Message from Brian J. Murrell <br...@interlinx.bc.ca> -
     Date: Thu, 15 Dec 2011 14:43:23 -0500
     From: Brian J. Murrell <br...@interlinx.bc.ca>
  Subject: Re: 5x slower guest disk performance with virtio disk
       To: kvm@vger.kernel.org
       Cc: Sasha Levin <levinsasha...@gmail.com>



On 11-12-15 12:27 PM, Daniel P. Berrange wrote:

--- snip ---

  <disk ...>
    <driver io='threads|native'/>
    ...
  </disk>

When I try to "virsh edit" and add that <driver io='...'/> it seems to
get stripped out of the config (as observed with virsh dumpxml).
Doing some googling I discovered an alternate syntax (in message
https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html):

<driver name='qemu' type='raw' cache='none' io='native'/>

But the io='native' seems to get stripped out of that too.

FWIW, I have:

qemu-kvm-0.12.1.2-2.113.el6.x86_64
libvirt-client-0.8.1-27.el6.x86_64

installed here on CentOS 6.0.  Maybe this aio= is not supported in the
above package(s)?



You need to update CentOS; I don't believe your version of libvirt
supports io=native.  You are running into issues similar to those I
have posted about (mostly unsuccessfully in terms of getting much
support, LOL) here:
https://www.centos.org/modules/newbb/viewtopic.php?topic_id=33708&forum=55


From that thread:

With libvirt 0.8.7-6, qemu-kvm 0.12.1.2-2.144, and kernel 2.6.32-115,
you can use the io=native parameter in the KVM xml files.  Bugzilla
591703 has the details, but basically my img file reference in the VM
xml now reads like this:


<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/vm/pool/server06.img'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>

You may also want to search this list for my thread from November with
the title "Improving RAID5 write performance in a KVM VM".
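
(For reference, updating on CentOS amounts to pulling in the newer
packages and re-checking the installed versions, along these lines:

  $ sudo yum update libvirt qemu-kvm
  $ rpm -q libvirt qemu-kvm kernel)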


Cheers
Simon.

