The user running the compute server is a member of the libvirt group. I tried the migration via virsh as well, but it fails with the same result, and I see similar logs in the qemu log file.
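For reference, the checks were along these lines (a minimal sketch; the instance name and destination host are illustrative, and on Ubuntu the group is "libvirtd", matching the unix_sock_group setting further down):

    $ id nova        # the service user; 'libvirtd' should appear in its group list
    $ virsh migrate --live --copy-storage-all --verbose \
          instance-00000003 qemu+tcp://ubuntu-dev-002/system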
Thanks & Regards,
Unmesh Gurjar | Lead Engineer | Vertex Software Private Ltd. | w. +91.20.6604.1500 x 379 | m. +91.982.324.7631 | unmesh.gur...@nttdata.com | Follow us on Twitter @NTTDATAAmericas

From: Razique Mahroua <razique.mahr...@gmail.com>
Sent: Friday, April 13, 2012 2:22 PM
To: Gurjar, Unmesh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Issue in KVM block migration

Hi, sorry for the late reply. Does the user nova belong to the libvirt group? Can you try the migration manually via virsh?

    $ virsh migrate --live --copy-storage-all $domain qemu+tcp://user@server/system

thanks
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com

On Apr 11, 2012, at 16:44, Gurjar, Unmesh wrote:

Thanks Razique for taking up this one.

Libvirt version on both Compute hosts:

    $ libvirtd --version
    libvirtd (libvirt) 0.9.2
    $ virsh --version
    0.9.2

Here are my libvirtd.conf details:

    listen_tls = 0
    listen_tcp = 1
    unix_sock_group = "libvirtd"
    unix_sock_rw_perms = "0770"
    auth_unix_ro = "none"
    auth_unix_rw = "none"
    auth_tcp = "none"

Thanks & Regards,
Unmesh Gurjar

From: Razique Mahroua <razique.mahr...@gmail.com>
Sent: Wednesday, April 11, 2012 7:33 PM
To: Gurjar, Unmesh
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] Issue in KVM block migration

Hi, it looks like the user under which libvirt is running doesn't have the rights to manage the server's network interfaces. What version of libvirt are you using? Can I see the file /etc/libvirt/libvirtd.conf?

    $ cat /etc/libvirt/libvirtd.conf | grep -v -e "#" -e "^$"

Raz
Nuage & Co - Razique Mahroua
razique.mahr...@gmail.com

On Apr 11, 2012, at 14:48, Gurjar, Unmesh wrote:

Hi,

I have set up two Compute nodes (using the OpenStack master branch) and configured libvirt for block migration (by following steps #1 and #4 mentioned here: http://docs.openstack.org/diablo/openstack-compute/admin/content/configuring-live-migrations.html). In addition, I have disabled the AppArmor profile for libvirtd and added an entry for each host in /etc/hosts on both Compute hosts.

From both Compute hosts, I am able to connect and fetch the list of running instances on the other host (using the hostname), as follows:

    virsh # connect qemu+tcp://ubuntu-dev-001/system
    virsh # list

The issue is that block-migrating an instance between these hosts fails with the following error on the source Compute host console:

    libvir: QEMU error : operation failed: migration job: unexpectedly failed
    2012-04-11 03:15:54 DEBUG nova.rpc.amqp [-] Making asynchronous call on network ... from (pid=18487) multicall /opt/stack/nova/nova/rpc/amqp.py:318
    2012-04-11 03:15:54 DEBUG nova.rpc.amqp [-] MSG_ID is 8d91158236ae4de0bd8b89533f060892 from (pid=18487) multicall /opt/stack/nova/nova/rpc/amqp.py:321
    2012-04-11 03:15:54 DEBUG nova.rpc.amqp [-] Making asynchronous cast on compute.ubuntu-dev-001... from (pid=18487) cast /opt/stack/nova/nova/rpc/amqp.py:343
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/eventlet/hubs/poll.py", line 97, in wait
        readers.get(fileno, noop).cb(fileno)
      File "/usr/lib/python2.7/dist-packages/eventlet/greenthread.py", line 192, in main
        result = function(*args, **kwargs)
      File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 2179, in _live_migration
        recover_method(ctxt, instance_ref, dest, block_migration)
      File "/usr/lib/python2.7/contextlib.py", line 24, in __exit__
        self.gen.next()
      File "/opt/stack/nova/nova/virt/libvirt/connection.py", line 2175, in _live_migration
        FLAGS.live_migration_bandwidth)
      File "/usr/lib/python2.7/dist-packages/libvirt.py", line 689, in migrateToURI
        if ret == -1: raise libvirtError ('virDomainMigrateToURI() failed', dom=self)
    libvirtError: operation failed: migration job: unexpectedly failed
    Removing descriptor: 12
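Since "migration job: unexpectedly failed" carries no detail by itself, one way to narrow it down is to retry the migration with libvirtd debug logging enabled on both hosts. A sketch using the standard libvirtd.conf logging settings (by default libvirtd logs to syslog unless log_outputs is set):

    # /etc/libvirt/libvirtd.conf on both hosts, then restart libvirtd
    log_level = 1
    log_outputs = "1:file:/var/log/libvirt/libvirtd.log"

    $ sudo tail -f /var/log/libvirt/libvirtd.log    # watch on source and destination during the retry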
I find the following in the qemu log file (/var/log/libvirt/qemu/instance-00000003.log) on the destination Compute host:

    2012-04-11 04:14:25.971: starting up
    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.14 -enable-kvm -m 512 -smp 1,sockets=1,cores=1,threads=1 -name instance-00000003 -uuid f58d5f32-6d55-43fb-89ed-c33ebf72d1ed -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/instance-00000003.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc -boot c -kernel /opt/stack/nova/instances/instance-00000003/kernel -initrd /opt/stack/nova/instances/instance-00000003/ramdisk -append root=/dev/vda console=ttyS0 -drive file=/opt/stack/nova/instances/instance-00000003/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 -netdev tap,ifname=tapfcfa2a6c-35,script=,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=fa:16:3e:7d:32:27,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/opt/stack/nova/instances/instance-00000003/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -usb -device usb-tablet,id=input0 -vnc 127.0.0.1:0 -k en-us -vga cirrus -incoming tcp:0.0.0.0:49166 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
    Domain id=15 is tainted: high-privileges
    Domain id=15 is tainted: shell-scripts
    char device redirected to /dev/pts/17
    2012-04-11 04:14:26.406: shutting down
    can't delete tapfcfa2a6c-35 from eth1: Operation not supported
    SIOCSIFADDR: Permission denied
    SIOCSIFFLAGS: Permission denied
    SIOCSIFFLAGS: Permission denied
    /etc/qemu-ifdown: could not launch network script

It would be great if someone could point out anything I am missing here, or any configuration changes required to resolve this issue.

Thanks & Regards,
Unmesh Gurjar | Lead Engineer | Vertex Software Private Ltd.
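The destination-side log shows qemu being denied permission to reconfigure the tap device and then failing to launch /etc/qemu-ifdown, so it may be worth confirming that per-guest confinement is really off: libvirt generates a separate libvirt-<uuid> AppArmor profile for each guest, distinct from the libvirtd profile that was disabled. A diagnostic sketch:

    $ sudo aa-status                          # any libvirt-<uuid> profiles still in enforce mode?
    $ ls -l /etc/qemu-ifup /etc/qemu-ifdown   # do the scripts exist and are they executable?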