Hello,

Sorry for the late reply... I've been off for a long weekend. :-)

Source host:
OS-Version: RHEL - 7 - 4.1708.el7.centos
Kernelversion: 3.10.0 - 693.21.1.el7.x86_64
KVM-Version: 2.9.0 - 16.el7_4.14.1
LIBVIRT-Version: libvirt-3.2.0-14.el7_4.9
VDSM-Version: vdsm-4.19.45-1.el7.centos
SPICE-Version: 0.12.8 - 2.el7.1
CEPH-Version: librbd1-0.94.5-2.el7
Kernel Features: PTI: 1, IBPB: 0, IBRS: 0
Destination host:
OS-Version: RHEL - 7 - 4.1708.el7.centos
Kernelversion: 3.10.0 - 693.21.1.el7.x86_64
KVM-Version: 2.9.0 - 16.el7_4.14.1
LIBVIRT-Version: libvirt-3.2.0-14.el7_4.9
VDSM-Version: vdsm-4.19.45-1.el7.centos
SPICE-Version: 0.12.8 - 2.el7.1
CEPH-Version: librbd1-0.94.5-2.el7
Kernel Features: PTI: 1, IBPB: 0, IBRS: 0
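
To rule out a transcription mistake on my side, here is a small sketch (my own helper, not part of oVirt) that diffs the two version blocks line by line; both strings are copied from the host info above, and since the destination block is character-for-character identical I reuse the same string:

```python
# Diff the source and destination host version blocks line by line.
# The block below is copied from the host info above.
src = """OS-Version: RHEL - 7 - 4.1708.el7.centos
Kernelversion: 3.10.0 - 693.21.1.el7.x86_64
KVM-Version: 2.9.0 - 16.el7_4.14.1
LIBVIRT-Version: libvirt-3.2.0-14.el7_4.9
VDSM-Version: vdsm-4.19.45-1.el7.centos
SPICE-Version: 0.12.8 - 2.el7.1
CEPH-Version: librbd1-0.94.5-2.el7
Kernel Features: PTI: 1, IBPB: 0, IBRS: 0"""
dst = src  # the destination block above is identical, so reuse it

# Collect every line that differs between the two hosts.
diff = [(s, d) for s, d in zip(src.splitlines(), dst.splitlines()) if s != d]
print(diff)  # -> [] : no version mismatch between the hosts
```

So as far as I can tell, the two hosts really are at identical package and kernel levels.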

qemu.log on destination:
2018-06-11T09:13:20.613605Z qemu-kvm: terminating on signal 15 from pid 3008 
(/usr/sbin/libvirtd)
2018-06-11 09:21:56.071+0000: starting up libvirt version: 3.2.0, package: 
14.el7_4.9 (CentOS BuildSystem <http://bugs.centos.org>, 2018-03-07-13:51:24, 
x86-01.bsys.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.14.1), 
hostname: abc.yyyy.xxxxx.com
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=bbgas102,debug-threads=on 
-S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1932-abcas102/master-key.aes
 -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
Westmere,vme=on,pclmuldq=on,x2apic=on,hypervisor=on,arat=on -m 
size=2097152k,slots=16,maxmem=4294967296k -realtime mlock=off -smp 
4,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-3,mem=2048 
-uuid 4aff4193-ba75-481d-92b3-59b62cd8b111 -smbios 
'type=1,manufacturer=oVirt,product=oVirt 
Node,version=7-4.1708.el7.centos,serial=32393735-3933-5A43-4A32-34333046564B,uuid=4aff4193-ba75-481d-92b3-59b62cd8b111'
 -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1932-bbgas102/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2018-06-11T09:21:55,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
-no-hpet -no-shutdown -boot menu=on,splash-time=10000,strict=on 
-device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device 
virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 -device 
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive 
if=none,id=drive-ide0-1-0,readonly=on -device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive 
file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/33235bbf-0156-421e-9391-0749247b6ba6/0d2949b6-af0f-4de2-b29a-10dcb39ad857,format=raw,if=none,id=drive-virtio-disk0,serial=33235bbf-0156-421e-9391-0749247b6ba6,cache=none,werror=stop,rerror=stop,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive 
file=/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/5149067a-b18c-41cf-a355-033317291148/f0afa250-6704-410c-b9be-60b99cb28ce9,format=raw,if=none,id=drive-virtio-disk1,serial=5149067a-b18c-41cf-a355-033317291148,cache=none,werror=stop,rerror=stop,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1
 -netdev tap,fd=37,id=hostnet0,vhost=on,vhostfd=40 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:bd:ed:0a,bus=pci.0,addr=0x7 
-netdev tap,fd=42,id=hostnet1,vhost=on,vhostfd=43 -device 
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:bd:ed:26,bus=pci.0,addr=0x8 
-chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/4aff4193-ba75-481d-92b3-59b62cd8b111.com.redhat.rhevm.vdsm,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/4aff4193-ba75-481d-92b3-59b62cd8b111.org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 10.157.8.40:3,password -k de -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6 -msg 
timestamp=on
2018-06-11T09:21:56.171158Z qemu-kvm: warning: CPU(s) not present in any NUMA 
nodes: 4 5 6 7 8 9 10 11 12 13 14 15
2018-06-11T09:21:56.171319Z qemu-kvm: warning: All CPU(s) up to maxcpus should 
be described in NUMA config
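
As an aside, I think the two NUMA warnings are expected with this command line: -smp 4,maxcpus=16 defines 16 possible vCPUs, but the single NUMA node only covers cpus=0-3. They should be harmless for the migration itself; silencing them would require the (vdsm-generated) domain XML to map all 16 possible vCPUs into the cell, roughly like this sketch (sizes taken from the command line above):

```xml
<cpu>
  <numa>
    <!-- cover all 16 possible vCPUs, not just the 4 currently plugged -->
    <cell id='0' cpus='0-15' memory='2097152' unit='KiB'/>
  </numa>
</cpu>
```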

qemu.log on source:
2018-06-11 09:03:32.701+0000: initiating migration
2018-06-11 09:10:39.758+0000: initiating migration
2018-06-11 09:13:04.323+0000: initiating migration
2018-06-11 09:18:32.877+0000: initiating migration
2018-06-11 09:21:56.308+0000: initiating migration

vdsm.log on source:
2018-06-11 11:21:54,331+0200 ERROR (migsrc/9d52cd9b) [virt.vm] 
(vmId='9d52cd9b-919d-40ff-8036-5f94f6b02019') Operation abgebrochen: 
Migrations-Job: abgebrochen durch Client (migration:287)
(the German message translates to: "operation aborted: migration job: aborted by client")
2018-06-11 11:21:54,517+0200 ERROR (migsrc/9d52cd9b) [virt.vm] 
(vmId='9d52cd9b-919d-40ff-8036-5f94f6b02019') Failed to migrate (migration:429)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 411, in 
run
    self._startUnderlyingMigration(time.time())
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 489, in 
_startUnderlyingMigration
    self._perform_with_downtime_thread(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 556, in 
_perform_with_downtime_thread
    self._perform_migration(duri, muri)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/migration.py", line 529, in 
_perform_migration
    self._vm._dom.migrateToURI3(duri, params, flags)
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 123, 
in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1006, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 1679, in 
migrateToURI3
    if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', 
dom=self)
libvirtError: Operation abgebrochen: Migrations-Job: abgebrochen durch Client
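
The source qemu.log shows five "initiating migration" attempts, each ending in "abgebrochen durch Client" (aborted by client). Nobody cancelled these by hand, so my guess is that vdsm's own convergence watchdog aborted migrations that made no progress. If that is the case, the relevant knobs should live in /etc/vdsm/vdsm.conf; the option names below are from vdsm 4.19's vdsm.conf.sample (please double-check there), and the values are only illustrative, not defaults:

```ini
# /etc/vdsm/vdsm.conf -- migration tuning (illustrative values, not defaults)
[vars]
# Abort the migration if it makes no progress for this many seconds.
migration_progress_timeout = 300
# Cap on total migration time, scaled per GiB of guest RAM.
migration_max_time_per_gib_mem = 128
```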

vdsm.log on destination:
2018-06-11 11:21:57,317+0200 INFO  (vm/a54af7cd) [vdsm.api] FINISH prepareImage 
return={'info': {'path': 
u'/rhev/data-center/mnt/blockSD/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/1fe4d656-8f59-4221-a859-
2018-06-11 11:21:57,318+0200 INFO  (vm/a54af7cd) [vds] prepared volume path: 
/rhev/data-center/5849b030-626e-47cb-ad90-3ce782d831b3/68b8aaff-770d-4a08-800b-0c15e94efaa8/images/1fe4d656-8f59-4221-a859-2f7bc
2018-06-11 11:21:57,361+0200 INFO  (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call 
VM.migrationCreate succeeded in 0.34 seconds (__init__:539)
2018-06-11 11:21:58,847+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-06-11 11:21:58,856+0200 ERROR (jsonrpc/4) [jsonrpc.JsonRpcServer] Internal 
server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in 
_handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 202, in 
_dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in 
__getattr__
    % self.vmid)
NotConnectedError: VM u'4aff4193-ba75-481d-92b3-59b62cd8b111' was not started 
yet or was shut down
2018-06-11 11:21:58,857+0200 INFO  (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmIoTunePolicies failed (error -32603) in 0.00 seconds (__init__:539)
2018-06-11 11:22:03,161+0200 INFO  (jsonrpc/2) [vdsm.api] START 
repoStats(options=None) from=::ffff:10.157.8.36,57852, 
task_id=24fa3f9e-9110-4da9-a41f-d385036d6fef (api:46)
2018-06-11 11:22:03,162+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats 
return={u'48055e27-f1ca-466a-8a2c-e191c34f0226': {'code': 0, 'actual': True, 
'version': 0, 'acquired': True, 'delay': '0.000301869
2018-06-11 11:22:03,180+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call 
Host.getStats succeeded in 0.02 seconds (__init__:539)
2018-06-11 11:22:06,251+0200 INFO  (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call 
Host.getAllVmStats succeeded in 0.01 seconds (__init__:539)
2018-06-11 11:22:09,028+0200 INFO  (periodic/3177) [vdsm.api] START 
repoStats(options=None) from=internal, 
task_id=5bae7d6e-588d-4098-b4b9-48ead80060eb (api:46)

Normally, there shouldn't have been any modifications on either host.

Thank you for your help
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YNA6ZCP7DF7GTDIPYMJY5EISO5TUSZJY/
