Re: 5x slower guest disk performance with virtio disk

2011-12-15 Thread Simon Wilson

- Message from "Brian J. Murrell"  -
   Date: Thu, 15 Dec 2011 14:43:23 -0500
   From: "Brian J. Murrell" 
Subject: Re: 5x slower guest disk performance with virtio disk
 To: kvm@vger.kernel.org
 Cc: Sasha Levin 



On 11-12-15 12:27 PM, Daniel P. Berrange wrote:

On Thu, Dec 15, 2011 at 07:16:22PM +0200, Sasha Levin wrote:

On Thu, 2011-12-15 at 11:55 -0500, Brian J. Murrell wrote:

So, about 2/3 of host speed now -- which is much better.  Is 2/3 about
normal or should I be looking for more?


aio=native

That's the qemu setting; I'm not sure where libvirt hides that.
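For reference, on the raw qemu-kvm command line this corresponds to something like the following (a sketch only; the image path and the other -drive options are placeholders):

```shell
# Sketch of the qemu-side setting (not a full command line; path is a
# placeholder). aio=native needs O_DIRECT, i.e. it pairs with cache=none.
qemu-kvm ... -drive file=/path/to/guest.img,if=virtio,format=raw,cache=none,aio=native
```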


  <disk type='file' device='disk'>
    <driver name='qemu' type='raw' io='native'/>
    ...
  </disk>


When I try to "virsh edit" and add that "<driver ... io='native'/>" element,
it seems to get stripped out of the config (as observed with "virsh dumpxml").
Doing some googling I discovered an alternate syntax (in message
https://www.redhat.com/archives/libvirt-users/2011-June/msg4.html):

  <driver name='qemu' type='raw' io='native'/>

But the "io='native'" seems to get stripped out of that too.

FWIW, I have:

qemu-kvm-0.12.1.2-2.113.el6.x86_64
libvirt-client-0.8.1-27.el6.x86_64

installed here on CentOS 6.0.  Maybe this aio= is not supported in the
above package(s)?



You need to update CentOS; I don't believe your version of libvirt
supports io=native. You are running into issues similar to those I
have posted about (mostly unsuccessfully in terms of getting much
support LOL) here -
https://www.centos.org/modules/newbb/viewtopic.php?topic_id=33708&forum=55


From that thread:

With libvirt 0.8.7-6, qemu-kvm 0.12.1.2-2.144, and kernel 2.6.32-115,
you can use the "io=native" parameter in the KVM xml files. Bugzilla
591703 has the details, but basically my img file reference in the VM
xml now reads like this:



<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='...'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='...' function='0x0'/>
</disk>



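One way to confirm the attribute actually survived a "virsh edit" is to grep the dumped XML for it. A minimal, self-contained sketch (it greps a saved dump in a hypothetical file "guest.xml"; against a live guest you would pipe "virsh dumpxml <guest>" into grep instead):

```shell
# Create a stand-in for the dumped domain XML (hypothetical content),
# then check that the io='native' attribute is present.
cat > guest.xml <<'EOF'
<driver name='qemu' type='raw' cache='none' io='native'/>
EOF
grep -c "io='native'" guest.xml   # prints 1 when the attribute survived
```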
You may also want to search this list for my thread from November with  
the title "Improving RAID5 write performance in a KVM VM"


Cheers
Simon.


--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: Improving RAID5 write performance in a KVM VM

2011-11-28 Thread Simon Wilson

- Message from Simon Wilson  -
   Date: Fri, 25 Nov 2011 20:34:47 +1000
   From: Simon Wilson 
Subject: Improving RAID5 write performance in a KVM VM
 To: kvm@vger.kernel.org



Good afternoon list.

I have a CentOS 6 server running six KVM VMs, and am struggling with  
poor disk write performance in the VMs.


Setup summary:

Physical server has 2 x 500GB SATA drives in a RAID1 mirror, and 3 x  
2TB SATA drives in a RAID5 stripeset. The 2 x 500GB drives host /,  
/boot, and the VM .img files.


The .img files are mounted as per the following example in virsh xml:


<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='...'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='...' function='0x0'/>
</disk>



Each virtual server also has a slice of the RAID5 array for data,  
including a CentOS 5.7 Samba server that has a 3TB piece.


The Samba drive is mounted in virsh xml as follows:


<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='...'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='...' function='0x0'/>
</disk>



Mount parameters in the VM are:

  /dev/vdb1  /mnt/data  ext3  suid,dev,exec,noatime  0  0

When I first built the server, performance in the VMs was terrible,
with constant CPU IOWAIT; after a lot of searching, the addition of
io=native improved it significantly.


It is copying file data to the Samba share that is causing me  
issues. Some files I use are many GBs in size. Copies start at  
110MBps across the network, but once the Samba server's network copy  
cache fills up (the server has 4GB RAM allocated, and the cache  
seems to fill after about 2.5 to 3GB copied from the network) they  
hit a wall and crawl to 35MBps.


Allocating the Samba server more RAM tricks Windows into thinking  
the copy is finished sooner (although it still hits a brick wall  
once the larger cache is full), but of course the underlying data  
write continues for quite some time after Windows is done.


I have done a fair bit of analysis with dstat and using dd copies,  
and file writes onto the VM's RAID5 slice sit between 35 and 40MBps  
consistently.
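The dd measurement above can be reproduced with something like the following (scaled down to 8 MiB here so it finishes quickly; on real hardware you would use a much larger count, and oflag=direct to take the page cache out of the picture):

```shell
# Small-scale version of the dd write test described above.
# conv=fsync forces the data to disk before dd exits; for a real benchmark
# add oflag=direct and a count large enough to defeat caching, and read
# the throughput figure dd prints on stderr.
dd if=/dev/zero of=ddtest.bin bs=1M count=8 conv=fsync 2>/dev/null
stat -c%s ddtest.bin   # prints 8388608 (8 MiB were written)
```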


Using Samba on the physical server (not my normal practice, but for  
troubleshooting) using a different partition on the same RAID5  
array, I get a consistent 75MBps overall write speed.


Reads from the partitions are comparable speed from both physical  
and virtual servers (~100MBps).


Are there any hints from anyone please about how I can improve the  
write performance in the VM? I realise it will likely never achieve  
the bare metal speed, but closer than 50% would be good...


Thanks
Simon.
--
Simon Wilson
M: 0400 12 11 16



Hi all - there's been a lot of activity on the list over the last few  
days since I posted this, but no responses to my query.


The Mailing List web page indicates that user support is welcome on
the list, but if this is not an appropriate channel for getting
assistance, could someone please point me in the right direction?


Thanks
Simon.



Improving RAID5 write performance in a KVM VM

2011-11-25 Thread Simon Wilson

Good afternoon list.

I have a CentOS 6 server running six KVM VMs, and am struggling with  
poor disk write performance in the VMs.


Setup summary:

Physical server has 2 x 500GB SATA drives in a RAID1 mirror, and 3 x  
2TB SATA drives in a RAID5 stripeset. The 2 x 500GB drives host /,  
/boot, and the VM .img files.


The .img files are mounted as per the following example in virsh xml:


<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='...'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='...' function='0x0'/>
</disk>



Each virtual server also has a slice of the RAID5 array for data,  
including a CentOS 5.7 Samba server that has a 3TB piece.


The Samba drive is mounted in virsh xml as follows:


<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='...'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='...' function='0x0'/>
</disk>



Mount parameters in the VM are:

  /dev/vdb1  /mnt/data  ext3  suid,dev,exec,noatime  0  0

When I first built the server, performance in the VMs was terrible,
with constant CPU IOWAIT; after a lot of searching, the addition of
io=native improved it significantly.


It is copying file data to the Samba share that is causing me issues.  
Some files I use are many GBs in size. Copies start at 110MBps across  
the network, but once the Samba server's network copy cache fills up  
(the server has 4GB RAM allocated, and the cache seems to fill after  
about 2.5 to 3GB copied from the network) they hit a wall and crawl to  
35MBps.


Allocating the Samba server more RAM tricks Windows into thinking the  
copy is finished sooner (although it still hits a brick wall once the  
larger cache is full), but of course the underlying data write  
continues for quite some time after Windows is done.


I have done a fair bit of analysis with dstat and using dd copies, and  
file writes onto the VM's RAID5 slice sit between 35 and 40MBps  
consistently.


Using Samba on the physical server (not my normal practice, but for  
troubleshooting) using a different partition on the same RAID5 array,  
I get a consistent 75MBps overall write speed.


Reads from the partitions are comparable speed from both physical and  
virtual servers (~100MBps).


Are there any hints from anyone please about how I can improve the  
write performance in the VM? I realise it will likely never achieve  
the bare metal speed, but closer than 50% would be good...


Thanks
Simon.
--
Simon Wilson
M: 0400 12 11 16



Re: Convert one .img disk created with sparse=true to no sparse

2011-10-20 Thread Simon Wilson

- Message from Stefan Hajnoczi  -
   Date: Thu, 20 Oct 2011 07:43:45 -0700
   From: Stefan Hajnoczi 
Subject: Re: Convert one .img disk created with sparse=true to no sparse
 To: Gonzalo Marcote Peña 
 Cc: kvm@vger.kernel.org



On Thu, Oct 20, 2011 at 4:21 AM, Gonzalo Marcote Peña wrote:

Hi.
I have one guest that I created with the virt-install command option
'sparse=true'.

As I now want to use this guest for I/O tasks (databases) and want to
improve performance, I want to convert the disk.img from sparse to
non-sparse (obviously without formatting it).
How can I do this task without losing the disk image data?


If you are using a raw image file:
$ cp --sparse=never old-sparse.img new-allocated.img

This copies the data into the new file and does not try to make zero
regions sparse.  You can then replace the old file with the new file.

Stefan



- End message from Stefan Hajnoczi  -

If working with a NON-sparse VM img (i.e. originally created with  
virt-install --nonsparse), does a cp without --sparse=never retain the  
non-sparse nature of the file, or does it have to be specified?


Simon
