I just upgraded my two-node virtual host cluster to RHEL 6.3. I will put my 
question first and then explain what I'm doing and why. 

Does anyone know which file I need to edit to keep virt-manager from inspecting 
the cache type on a VM disk file when performing a live migration? There is a 
command line option for virsh to disable this check (--unsafe), but there is no 
way to enable that in virt-manager directly. RHEL 6.3 enables the cache check 
by default, so now if a VM disk has cache=writeback the migration errors out. 
Here is the error:

Unable to migrate guest: Unsafe migration: Migration may lead to data 
corruption if disks use cache != none

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 44, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/migrate.py", line 556, in 
_async_migrate
    vm.migrate(dstconn, migrate_uri, rate, live, secure, meter=meter)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1141, in migrate
    self._backend.migrate(destconn.vmm, flags, newname, interface, rate)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 892, in migrate
    if ret is None:raise libvirtError('virDomainMigrate() failed', dom=self)
libvirtError: Unsafe migration: Migration may lead to data corruption if disks 
use cache != none

Since virt-manager just calls the Python scripts, it must be fairly easy to 
stick in the --unsafe option. However, I'm not that great with Python, and I'm 
not sure whether editing the scripts directly is the best way to do it.
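
From what I can tell, virsh's --unsafe option just sets the VIR_MIGRATE_UNSAFE 
flag on the migrate call, so in theory the same flag could be forced in 
virt-manager's own scripts. Something like the lines below, dropped into the 
migrate() method that shows up in my traceback, might do it. This is only a 
rough, untested sketch; the getattr() fallback is there in case the installed 
bindings don't define the constant.

  # Sketch only: /usr/share/virt-manager/virtManager/domain.py, inside
  # migrate(), just before the self._backend.migrate(...) call from the
  # traceback above. Assumes libvirt is already imported in that module.
  flags |= getattr(libvirt, "VIR_MIGRATE_UNSAFE", 0)  # 0 if constant missing
  self._backend.migrate(destconn.vmm, flags, newname, interface, rate)

Of course, editing the shipped scripts would get undone by the next 
virt-manager update, which is another reason a proper option in the UI would 
be nicer.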

More info about my setup.

I'm running Gluster 3.3-1 on both of my virtual host machines. Both machines 
have a 1 TB logical volume formatted with ext4, and that volume on each system 
is used as a brick in a replicated Gluster volume. I then mount the Gluster 
volume from localhost on each VM host, which effectively gives me a network 
RAID1 to store VMs on. I'm using the native Gluster FUSE driver to mount this 
volume on both systems. Before RHEL 6.3, FUSE did not support direct I/O, so I 
had to use cache=writeback on my VM disk files. This gave me the best 
performance, with most reads/writes seeing 90+ MB/s over 1 Gb Ethernet. Now 
with RHEL 6.3, direct I/O is supported in FUSE, but the performance is 
horrible (Red Hat even says so in the release notes). I cannot switch my VM 
disk files to cache=none, at least not until the performance issues are fixed. 
So now I have this issue with virt-manager not allowing me to do live 
migrations because of the new default cache check implemented in libvirt. Here 
is the dev thread with more info about the change:

https://www.redhat.com/archives/libvir-list/2012-February/msg00882.html
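
For anyone who wants to double-check which disks libvirt will refuse to 
migrate, the cache mode is right there in the domain XML. Here's a quick, 
untested sketch that lists it per disk; the connection URI and guest name are 
just placeholders for my setup.

  import libvirt
  import xml.etree.ElementTree as ET

  conn = libvirt.open("qemu:///system")      # local hypervisor
  dom = conn.lookupByName("rhel6-guest1")    # placeholder guest name

  # Walk the <disk>/<driver cache=...> elements in the domain XML.
  root = ET.fromstring(dom.XMLDesc(0))
  for disk in root.findall("devices/disk"):
      driver = disk.find("driver")
      source = disk.find("source")
      cache = driver.get("cache", "default") if driver is not None else "default"
      path = source.get("file", "?") if source is not None else "?"
      print("%s  cache=%s" % (path, cache))  # anything other than none trips the check

  conn.close()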

I can use virsh to migrate my VMs with the --unsafe option just fine, but I 
would still like to be able to use virt-manager to migrate them. Maybe I 
should file a bug or open a ticket to try to get a tick box in virt-manager 
for this check.
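
In case it's useful to anyone hitting the same thing, this is roughly the 
virsh invocation that works for me (the guest name and destination URI are 
just examples):

  virsh migrate --live --unsafe rhel6-guest1 qemu+ssh://otherhost.example.com/system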

I know this setup may sound crazy, but we all know Red Hat is going to be 
pushing a Gluster-based solution for this in the near future. I have not seen 
any VM disk corruption or anything else bad while using this setup. I have 
four RHEL 6 VMs (20 GB disks) and one OpenBSD VM (10 GB disk), and they have 
been running quite nicely like this for a couple of months now. For backups, I 
use virsh to pause the VMs and create an LVM snapshot, then resume the VMs and 
use rsync to copy the VM images from the snapshot to another location. 
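
In case anyone is curious, the backup routine boils down to roughly the 
following. Treat it as a sketch; the guest names, volume group, snapshot size, 
mount point and rsync target are all placeholders for my boxes.

  import subprocess
  import libvirt

  GUESTS = ["rhel6-guest1", "rhel6-guest2"]  # placeholder guest names
  SNAP_CREATE = ["lvcreate", "-s", "-L", "20G", "-n", "vmsnap",
                 "/dev/vg_vmhost/vmstore"]
  SNAP_MOUNT = ["mount", "/dev/vg_vmhost/vmsnap", "/mnt/vmsnap"]
  RSYNC = ["rsync", "-a", "/mnt/vmsnap/", "backuphost:/srv/vm-backups/"]
  CLEANUP = [["umount", "/mnt/vmsnap"],
             ["lvremove", "-f", "/dev/vg_vmhost/vmsnap"]]

  conn = libvirt.open("qemu:///system")
  doms = [conn.lookupByName(name) for name in GUESTS]

  # Pause the guests so the snapshot is consistent, snapshot, then resume.
  for dom in doms:
      dom.suspend()
  try:
      subprocess.check_call(SNAP_CREATE)
  finally:
      for dom in doms:
          dom.resume()

  # Copy the images off the snapshot, then clean up.
  subprocess.check_call(SNAP_MOUNT)
  try:
      subprocess.check_call(RSYNC)
  finally:
      for cmd in CLEANUP:
          subprocess.call(cmd)
  conn.close()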

Thanks.

David.


