Re: [ovirt-users] Details about why a live migration failed

2017-03-30 Thread Davide Ferrari
It fails even when trying to migrate to a CentOS 7.2 host :/
I guess I'll have to shut it down then :(

2017-03-30 9:55 GMT+02:00 Davide Ferrari :

> Hello again
>
> Yep, it fails to migrate to any 7.3 host.
> I still have to try to migrate to one of the old 7.2 hosts, I'll let you
> know
>
> 2017-03-27 13:51 GMT+02:00 Francesco Romani :
>
>> On 03/24/2017 04:01 PM, Davide Ferrari wrote:
>> > Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
>> > Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1
>> >
>> > To be fair I'm trying to migrate away that VM so I can install updates
>> > on the source host.
>>
>> Another, hopefully less likely, case is a QEMU bug. If QEMU cannot
>> correctly restore the VM state from the migration
>> data stream, you'll get the same error.
>> Anyway, this definitely looks like a libvirt/qemu issue; better be
>> prepared to file a libvirt bug.
>> Does it happen when migrating to *any* CentOS 7.3 box?
>>
>> Bests,
>>
>> --
>> Francesco Romani
>> Senior SW Eng., Virtualization R&D
>> Red Hat
>> IRC: fromani github: @fromanirh
>>
>>
>
>
> --
> Davide Ferrari
> Senior Systems Engineer
>



-- 
Davide Ferrari
Senior Systems Engineer


Re: [ovirt-users] Details about why a live migration failed

2017-03-30 Thread Davide Ferrari
Hello again

Yep, it fails to migrate to any 7.3 host.
I still have to try to migrate to one of the old 7.2 hosts, I'll let you
know

2017-03-27 13:51 GMT+02:00 Francesco Romani :

> On 03/24/2017 04:01 PM, Davide Ferrari wrote:
> > Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
> > Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1
> >
> > To be fair I'm trying to migrate away that VM so I can install updates
> > on the source host.
>
> Another, hopefully less likely, case is a QEMU bug. If QEMU cannot
> correctly restore the VM state from the migration
> data stream, you'll get the same error.
> Anyway, this definitely looks like a libvirt/qemu issue; better be
> prepared to file a libvirt bug.
> Does it happen when migrating to *any* CentOS 7.3 box?
>
> Bests,
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>
>


-- 
Davide Ferrari
Senior Systems Engineer


Re: [ovirt-users] Details about why a live migration failed

2017-03-27 Thread Francesco Romani
On 03/24/2017 04:01 PM, Davide Ferrari wrote:
> Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
> Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1
>
> To be fair I'm trying to migrate away that VM so I can install updates
> on the source host.

Another, hopefully less likely, case is a QEMU bug. If QEMU cannot
correctly restore the VM state from the migration
data stream, you'll get the same error.
Anyway, this definitely looks like a libvirt/qemu issue; better be
prepared to file a libvirt bug.
Does it happen when migrating to *any* CentOS 7.3 box?

Bests,

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
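
For reference, the version details such a libvirt bug report would normally need can be collected with something like the following (a minimal sketch, assuming the stock CentOS/EL7 package names already quoted in this thread):

  # run on both the source and the destination host
  rpm -q qemu-kvm-ev libvirt-daemon-kvm vdsm
  cat /etc/redhat-release
  uname -r
  /usr/libexec/qemu-kvm --version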



Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Source: CentOS 7.2 - qemu-kvm-ev-2.3.0-31.el7.16.1
Dest: CentOS 7.3 - qemu-kvm-ev-2.6.0-28.el7_3.3.1

To be fair I'm trying to migrate away that VM so I can install updates on
the source host.


2017-03-24 15:18 GMT+01:00 Michal Skrivanek :

>
> On 24 Mar 2017, at 15:15, Davide Ferrari  wrote:
>
> Mmmh, this is all I got from the libvirt log on the receiver host:
>
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name 
> guest=druid-co01,debug-threads=on
> -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/
> qemu/domain-5-druid-co01/master-key.aes -machine
> pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
> size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
> 4,maxcpus=64,sockets=16,cores=4,threads=1 -numa
> node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8
> -smbios 'type=1,manufacturer=oVirt,product=oVirt
> Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-
> 0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
> -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/
> var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2017-03-24T10:38:03,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard
> -no-hpet -no-shutdown -boot strict=on -device 
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
> -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4
> -drive if=none,id=drive-ide0-1-0,readonly=on -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-
> 424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-
> 2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=
> qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-
> 4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
> -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-
> virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/rhev/data-center/
> 0001-0001-0001-0001-03e3/ba2bd397-9222-
> 424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-
> 3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=
> qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-
> 45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
> -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
> -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
> -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/
> 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=
> charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/
> 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
> -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=
> charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev
> spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-
> serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
> -spice tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/
> libvirt-spice,tls-channel=default,tls-channel=main,tls-
> channel=display,tls-channel=inputs,tls-channel=cursor,tls-
> channel=playback,tls-channel=record,tls-channel=smartcard,
> tls-channel=usbredir,seamless-migration=on -device
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,
> vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2 -incoming defer -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg timestamp=on
> Domain id=5 is tainted: hook-script
> 2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any
> NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
> 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
> 52 53 54 55 56 57 58 59 60 61 62 63
> 2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus
> should be described in NUMA config
> 2017-03-24 10:41:20.982+: shutting down
> 2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed:
> Input/output error
>
> The donating host doesn't say a thing about this VM.
> There's an "input/output error" but I can't see what it is related to…
>
>
> Most likely to the migration stream: either the TCP connection was cut
> short, or it's an internal bug.
> What are the versions of qemu on both ends? And the host OS?
>
> Thanks,
> michal
>
>
>
>
>
>
> 2017-03-24 13:38 GMT+01:00 Francesco Romani :
>
>>
>> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
>> > And this is the vdsm log from vmhost04:
>> >
>> > Thread-6320717::INFO::2017-03-24
>> > 11:41:13,019::migration::712::virt.vm::(mon

Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Michal Skrivanek

> On 24 Mar 2017, at 15:15, Davide Ferrari  wrote:
> 
> Mmmh, this is all I got from the libvirt log on the receiver host:
> 
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
> QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name 
> guest=druid-co01,debug-threads=on -S -object 
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-druid-co01/master-key.aes
>  -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m 
> size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 
> 4,maxcpus=64,sockets=16,cores=4,threads=1 -numa 
> node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8 
> -smbios 'type=1,manufacturer=oVirt,product=oVirt 
> Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
>  -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
>  -mon chardev=charmonitor,id=monitor,mode=control -rtc 
> base=2017-03-24T10:38:03,driftfix=slew -global 
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on 
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device 
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive 
> if=none,id=drive-ide0-1-0,readonly=on -device 
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
>  -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>  -drive 
> file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
>  -device 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
>  -netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
>  -chardev 
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>  -chardev 
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
>  -device 
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>  -chardev spicevmc,id=charchannel2,name=vdagent -device 
> virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>  -spice 
> tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>  -device 
> qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
>  -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 
> -msg timestamp=on
> Domain id=5 is tainted: hook-script
> 2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any NUMA 
> nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 
> 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 
> 55 56 57 58 59 60 61 62 63
> 2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus 
> should be described in NUMA config
> 2017-03-24 10:41:20.982+: shutting down
> 2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed: Input/output 
> error
> 
> The donating host doesn't say a thing about this VM.
> There's an "input/output error" but I can't see what it is related to…

Most likely to the migration stream: either the TCP connection was cut short,
or it's an internal bug.
What are the versions of qemu on both ends? And the host OS?

Thanks,
michal
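
(For what it's worth, one quick way to rule out the network path between the two hosts would be a non-fragmenting ping from the source towards the destination address seen in the log above; this sketch assumes a standard 1500-byte MTU on the migration network:)

  # 1472 = 1500 MTU minus 28 bytes of IP/ICMP headers; raise accordingly for jumbo frames
  ping -c 5 -M do -s 1472 192.168.10.107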

> 
> 
> 
> 
> 
> 2017-03-24 13:38 GMT+01:00 Francesco Romani  >:
> 
> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> > And this is the vdsm log from vmhost04:
> >
> > Thread-6320717::INFO::2017-03-24
> > 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> > seconds elapsed, 98% of data processed, total data: 16456MB, processed
> > data: 9842MB, remaining data: 386MB, transfer speed 52MBps, z

Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Mmmh, this is all I got from the libvirt log on the receiver host:

LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name
guest=druid-co01,debug-threads=on -S -object
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-5-druid-co01/master-key.aes
-machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off -cpu Haswell-noTSX -m
size=16777216k,slots=16,maxmem=4294967296k -realtime mlock=off -smp
4,maxcpus=64,sockets=16,cores=4,threads=1 -numa
node,nodeid=0,cpus=0-3,mem=16384 -uuid 4f627cc1-9b52-4eef-bf3a-c02e8a6303b8
-smbios 'type=1,manufacturer=oVirt,product=oVirt
Node,version=7-2.1511.el7.centos.2.10,serial=4C4C4544-0037-4C10-8031-B7C04F564232,uuid=4f627cc1-9b52-4eef-bf3a-c02e8a6303b8'
-no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-5-druid-co01.billydoma/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=2017-03-24T10:38:03,driftfix=slew -global
kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x7 -device
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive
if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/08b19faa-4b1f-4da8-87a2-2af0700a7906/bdb18a7d-1558-41f9-aa3a-e63407c7881e,format=qcow2,if=none,id=drive-virtio-disk0,serial=08b19faa-4b1f-4da8-87a2-2af0700a7906,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/rhev/data-center/0001-0001-0001-0001-03e3/ba2bd397-9222-424d-aecc-eb652c0169d9/images/987d84da-188b-45c0-99d0-3dde29ddcb6e/51a1b9ee-b0ae-4208-9806-a319d34db06e,format=qcow2,if=none,id=drive-virtio-disk1,serial=987d84da-188b-45c0-99d0-3dde29ddcb6e,cache=none,werror=stop,rerror=stop,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x8,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=34,id=hostnet0,vhost=on,vhostfd=35 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:93,bus=pci.0,addr=0x3
-chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.com.redhat.rhevm.vdsm,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
-chardev
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4f627cc1-9b52-4eef-bf3a-c02e8a6303b8.org.qemu.guest_agent.0,server,nowait
-device
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
-chardev spicevmc,id=charchannel2,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
-spice
tls-port=5904,addr=192.168.10.107,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
-device
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,bus=pci.0,addr=0x2
-incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
-msg timestamp=on
Domain id=5 is tainted: hook-script
2017-03-24T10:38:03.414396Z qemu-kvm: warning: CPU(s) not present in any
NUMA nodes: 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51
52 53 54 55 56 57 58 59 60 61 62 63
2017-03-24T10:38:03.414497Z qemu-kvm: warning: All CPU(s) up to maxcpus
should be described in NUMA config
2017-03-24 10:41:20.982+: shutting down
2017-03-24T10:41:20.986633Z qemu-kvm: load of migration failed:
Input/output error

The donating host doesn't say a thing about this VM.
There's an "input/output error" but I can't see what it is related to...
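
(Assuming the default EL7 libvirt layout, and that the per-domain log file is named after the guest from the command line above, the places worth checking on the donating host for the other end of this error would be something like:)

  # source host (vmhost04): outgoing side of the same migration
  less /var/log/libvirt/qemu/druid-co01.log
  # libvirtd journal around the failure window
  journalctl -u libvirtd --since "2017-03-24 10:38" --until "2017-03-24 10:42"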





2017-03-24 13:38 GMT+01:00 Francesco Romani :

>
> On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> > And this is the vdsm log from vmhost04:
> >
> > Thread-6320717::INFO::2017-03-24
> > 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> > seconds elapsed, 98% of data processed, total data: 16456MB, processed
> > data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero
> > pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
> > libvirtEventLoop::DEBUG::2017-03-24
> > 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0
> > opaque None
> > libvirtEventLoop::INFO::2017-03-24
> > 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus)
> > vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU stop

Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Francesco Romani

On 03/24/2017 11:58 AM, Davide Ferrari wrote:
> And this is the vdsm log from vmhost04:
>
> Thread-6320717::INFO::2017-03-24
> 11:41:13,019::migration::712::virt.vm::(monitor_migration)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 190
> seconds elapsed, 98% of data processed, total data: 16456MB, processed
> data: 9842MB, remaining data: 386MB, transfer speed 52MBps, zero
> pages: 1718676MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
> libvirtEventLoop::DEBUG::2017-03-24
> 11:41:21,007::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Suspended detail 0
> opaque None
> libvirtEventLoop::INFO::2017-03-24
> 11:41:21,025::vm::4815::virt.vm::(_logGuestCpuStatus)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU stopped: onSuspend
> libvirtEventLoop::DEBUG::2017-03-24
> 11:41:21,069::vm::4291::virt.vm::(onLibvirtLifecycleEvent)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::event Resumed detail 0
> opaque None
> libvirtEventLoop::INFO::2017-03-24
> 11:41:21,069::vm::4815::virt.vm::(_logGuestCpuStatus)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::CPU running: onResume
> Thread-6320715::DEBUG::2017-03-24
> 11:41:21,224::migration::715::virt.vm::(stop)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::stopping migration
> monitor thread
> Thread-6320715::ERROR::2017-03-24
> 11:41:21,225::migration::252::virt.vm::(_recover)
> vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::operation failed:
> migration job: unexpectedly failed

This is surprising (no pun intended)
With a pretty high chance this comes from libvirt; I'm afraid you need
to dig into the libvirt logs/journal entries to learn more.
Vdsm unfortunately can't do much better than what it is already doing :\

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
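
A minimal sketch of how more verbose libvirt logging could be enabled on both hosts before retrying the migration (one commonly used set of filters; note that on oVirt hosts vdsm manages parts of /etc/libvirt/libvirtd.conf, and that restarting libvirtd does not stop running VMs):

  # /etc/libvirt/libvirtd.conf
  log_filters="1:qemu 1:libvirt 3:object 3:json 3:event 3:util"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"

  # then restart the daemon and retry the migration
  systemctl restart libvirtd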



Re: [ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
And this is the vdsm log from vmhost04:

Thread-6320715::DEBUG::2017-03-24
11:38:02,224::migration::188::virt.vm::(_setupVdsConnection)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Initiating connection with
destination
Thread-6320715::DEBUG::2017-03-24
11:38:02,239::migration::200::virt.vm::(_setupVdsConnection)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Destination server is:
vm08.ovirt.internal:54321
Thread-6320715::DEBUG::2017-03-24
11:38:02,241::migration::246::virt.vm::(_prepareGuest)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration started
Thread-6320715::DEBUG::2017-03-24
11:38:02,253::guestagent::502::virt.vm::(send_lifecycle_event)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::send_lifecycle_event
before_migration called
Thread-6320715::DEBUG::2017-03-24
11:38:02,319::migration::353::virt.vm::(run)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::migration semaphore acquired
after 2 seconds
Thread-6320715::INFO::2017-03-24
11:38:02,950::migration::407::virt.vm::(_startUnderlyingMigration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Creation of destination VM
took: 0 seconds
Thread-6320715::INFO::2017-03-24
11:38:02,951::migration::429::virt.vm::(_startUnderlyingMigration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::starting migration to
qemu+tls://vm08.ovirt.internal/system with miguri tcp://192.168.10.107
Thread-6320715::DEBUG::2017-03-24
11:38:02,951::migration::494::virt.vm::(_perform_with_conv_schedule)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::performing migration with conv
schedule
Thread-6320717::DEBUG::2017-03-24
11:38:02,952::migration::620::virt.vm::(run)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::starting migration monitor
thread
Thread-6320717::DEBUG::2017-03-24
11:38:02,952::migration::739::virt.vm::(_execute_action_with_params)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Setting downtime to 100
Thread-6320717::INFO::2017-03-24
11:38:12,959::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 10 seconds
elapsed, 8% of data processed, total data: 16456MB, processed data: 483MB,
remaining data: 15157MB, transfer speed 52MBps, zero pages: 209524MB,
compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:22,963::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 20 seconds
elapsed, 13% of data processed, total data: 16456MB, processed data:
1003MB, remaining data: 14469MB, transfer speed 51MBps, zero pages:
252844MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:32,966::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 30 seconds
elapsed, 16% of data processed, total data: 16456MB, processed data:
1524MB, remaining data: 13839MB, transfer speed 52MBps, zero pages:
281344MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:42,969::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 40 seconds
elapsed, 42% of data processed, total data: 16456MB, processed data:
2040MB, remaining data: 9685MB, transfer speed 52MBps, zero pages:
1214867MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:38:52,973::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 50 seconds
elapsed, 46% of data processed, total data: 16456MB, processed data:
2560MB, remaining data: 8944MB, transfer speed 52MBps, zero pages:
1271798MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:02,976::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 60 seconds
elapsed, 51% of data processed, total data: 16456MB, processed data:
3080MB, remaining data: 8148MB, transfer speed 52MBps, zero pages:
1342711MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:12,979::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 70 seconds
elapsed, 56% of data processed, total data: 16456MB, processed data:
3600MB, remaining data: 7353MB, transfer speed 52MBps, zero pages:
1413447MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:22,984::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 80 seconds
elapsed, 59% of data processed, total data: 16456MB, processed data:
4120MB, remaining data: 6755MB, transfer speed 52MBps, zero pages:
1433707MB, compressed: 0MB, dirty rate: -1, memory iteration: -1
Thread-6320717::INFO::2017-03-24
11:39:32,987::migration::712::virt.vm::(monitor_migration)
vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`::Migration Progress: 90 seconds
elapsed, 64% of data
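
(For completeness, the full migration trace for this VM can usually be pulled out of the vdsm log with a single grep, assuming the default vdsm log location:)

  grep 'vmId=`4f627cc1-9b52-4eef-bf3a-c02e8a6303b8`' /var/log/vdsm/vdsm.log | grep -i migration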

[ovirt-users] Details about why a live migration failed

2017-03-24 Thread Davide Ferrari
Hello

I have a VM on a host that apparently cannot be live-migrated away. I'm
trying to migrate it to another (superior) cluster, but please consider:
- the VM isn't receiving much traffic and is not doing much at all
- I've already successfully live-migrated other VMs from this host to the
same other cluster

Looking through engine.log I cannot see anything interesting apart from
the generic messages (I grepped for the job ID):

2017-03-24 10:30:52,186 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [20518310] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01., Source:
vmhost04, Destination: vmhost06, User: admin@internal-authz).
2017-03-24 10:34:28,072 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost06 . Trying
to migrate to another Host.
2017-03-24 10:34:28,676 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-2) [6d6a3a53] Correlation ID: 20518310, Job
ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom Event
ID: -1, Message: Migration started (VM: druid-co01., Source: vmhost04,
Destination: vmhost11, User: admin@internal-authz).
2017-03-24 10:37:59,097 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01. to Host vmhost11 .
Trying to migrate to another Host.
2017-03-24 10:38:00,626 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-40) [779e9773] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration started (VM: druid-co01, Source: vmhost04,
Destination: vmhost08, User: admin@internal-authz).
2017-03-24 10:41:29,441 WARN
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Failed to migrate VM druid-co01 to Host vmhost08 .
Trying to migrate to another Host.
2017-03-24 10:41:29,488 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-42) [6f032492] Correlation ID: 20518310,
Job ID: 039f0694-3e05-4f93-993d-9e7383047873, Call Stack: null, Custom
Event ID: -1, Message: Migration failed  (VM: druid-co01, Source: vmhost04).
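
(For reference, these lines can be isolated by grepping the engine log for the correlation ID and the job ID, assuming the default engine log location:)

  grep -E '20518310|039f0694-3e05-4f93-993d-9e7383047873' \
      /var/log/ovirt-engine/engine.log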

-- 
Davide Ferrari
Senior Systems Engineer