A quick update on this.

I put one of my hosts into maintenance and it migrated the two VMs off
it; I then upgraded the host to 4.3.

I have 12 VMs running on the remaining host. If I put it into
maintenance, will it try to migrate all 12 VMs at once, or will it
stagger them until they are all migrated?
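
(From what I've read, oVirt staggers them: VDSM caps the number of
concurrent outgoing migrations per host and the engine queues the rest,
so they should not all move at once. If I have the names right - this is
a sketch from memory, not gospel - the relevant knobs are:

  # on the host: per-host cap on concurrent outgoing migrations (VDSM)
  grep -r max_outgoing_migrations /etc/vdsm/

  # on the engine: per-migration bandwidth cap in Mbps
  engine-config -g MigrationMaxBandwidth

I'd appreciate confirmation before relying on that, though.)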

Thank you.

Regards.

Neil Wilson.

On Wed, Jul 10, 2019 at 9:44 AM Neil <nwilson...@gmail.com> wrote:

> Hi Michal,
>
> Thanks for assisting.
>
> I've just done as requested, but nothing is logged in the engine.log at
> the time I click Migrate. I hit the Migrate button about 4 times between
> 09:35 and 09:36 and nothing was logged about it. Below is the log...
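>
> (For reference I was watching the default log locations, as far as I
> know these are still correct, along the lines of:
>
>   tail -f /var/log/ovirt-engine/engine.log   # on the engine VM
>   tail -f /var/log/vdsm/vdsm.log             # on the source host
>
> so anything new should have appeared immediately.)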
>
> 2019-07-10 09:35:57,967+02 INFO
>  [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-14) []
> User trouble@internal successfully logged in with scopes: ovirt-app-admin
> ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~
> ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search
> ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate
> ovirt-ext=token:password-access
> 2019-07-10 09:35:58,012+02 INFO
>  [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-14)
> [2997034] Running command: CreateUserSessionCommand internal: false.
> 2019-07-10 09:35:58,021+02 INFO
>  [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (default task-14) [2997034] EVENT_ID: USER_VDC_LOGIN(30), User
> trouble@internal-authz connecting from '160.128.20.85' using session
> 'bv55G0wZznETUiQwjgjfUNje7wOsG4UDCuFunSslVeAFQkhdY2zzTY7du36ynTF5nW5U7JiPyr7gl9QDHfWuig=='
> logged in.
> 2019-07-10 09:36:58,304+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'default' is using 0 threads out of 1, 5 threads waiting for tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engine' is using 0 threads out of 500, 16 threads waiting for tasks and 0
> tasks in queue.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineScheduled' is using 0 threads out of 100, 100 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'engineThreadMonitoring' is using 1 threads out of 1, 0 threads waiting for
> tasks.
> 2019-07-10 09:36:58,305+02 INFO
>  [org.ovirt.engine.core.bll.utils.ThreadPoolMonitoringService]
> (EE-ManagedThreadFactory-engineThreadMonitoring-Thread-1) [] Thread pool
> 'hostUpdatesChecker' is using 0 threads out of 5, 2 threads waiting for
> tasks.
>
> The same is observed in the vdsm.log; below is the log during the
> attempted migration...
>
> 2019-07-10 09:39:57,034+0200 INFO  (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC
> call Host.getStats succeeded in 0.01 seconds (__init__:573)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [api.host] START getStats()
> from=::ffff:10.0.1.1,57934 (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] START
> repoStats(domains=()) from=::ffff:10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:46)
> 2019-07-10 09:39:57,994+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH repoStats
> return={u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code': 0, 'actual':
> True, 'version': 4, 'acquired': True, 'delay': '0.000194846', 'lastCheck':
> '2.4', 'valid': True}, u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0,
> 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000277154',
> 'lastCheck': '6.0', 'valid': True},
> u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0, 'actual': True,
> 'version': 4, 'acquired': True, 'delay': '0.000421988', 'lastCheck': '2.4',
> 'valid': True}, u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000228443',
> 'lastCheck': '2.4', 'valid': True}} from=::ffff:10.0.1.1,57934,
> task_id=e2529cfc-4293-42b4-91fa-7f5558e279dd (api:52)
> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] START
> multipath_health() from=::ffff:10.0.1.1,57934,
> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:46)
> 2019-07-10 09:39:57,995+0200 INFO  (jsonrpc/2) [vdsm.api] FINISH
> multipath_health return={} from=::ffff:10.0.1.1,57934,
> task_id=fd7ad703-5096-4f09-99fa-54672cb4aad9 (api:52)
> 2019-07-10 09:39:58,002+0200 INFO  (jsonrpc/2) [api.host] FINISH getStats
> return={'status': {'message': 'Done', 'code': 0}, 'info': {'cpuStatistics':
> {'42': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle':
> '99.87'}, '43': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '24': {'cpuUser': '0.73', 'nodeIndex': 0, 'cpuSys':
> '0.07', 'cpuIdle': '99.20'}, '25': {'cpuUser': '0.07', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '99.93'}, '26': {'cpuUser': '5.59',
> 'nodeIndex': 0, 'cpuSys': '1.20', 'cpuIdle': '93.21'}, '27': {'cpuUser':
> '0.87', 'nodeIndex': 1, 'cpuSys': '0.60', 'cpuIdle': '98.53'}, '20':
> {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys': '0.13', 'cpuIdle': '99.34'},
> '21': {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle':
> '99.93'}, '22': {'cpuUser': '0.40', 'nodeIndex': 0, 'cpuSys': '0.20',
> 'cpuIdle': '99.40'}, '23': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys':
> '0.07', 'cpuIdle': '99.86'}, '46': {'cpuUser': '0.13', 'nodeIndex': 0,
> 'cpuSys': '0.00', 'cpuIdle': '99.87'}, '47': {'cpuUser': '0.00',
> 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '44': {'cpuUser':
> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '45':
> {'cpuUser': '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '28': {'cpuUser': '0.60', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle':
> '99.33'}, '29': {'cpuUser': '1.07', 'nodeIndex': 1, 'cpuSys': '0.20',
> 'cpuIdle': '98.73'}, '40': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys':
> '0.00', 'cpuIdle': '100.00'}, '41': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '1': {'cpuUser': '1.07',
> 'nodeIndex': 1, 'cpuSys': '1.13', 'cpuIdle': '97.80'}, '0': {'cpuUser':
> '0.60', 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.20'}, '3':
> {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.73'},
> '2': {'cpuUser': '3.00', 'nodeIndex': 0, 'cpuSys': '0.53', 'cpuIdle':
> '96.47'}, '5': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys': '0.13',
> 'cpuIdle': '99.67'}, '4': {'cpuUser': '0.47', 'nodeIndex': 0, 'cpuSys':
> '0.20', 'cpuIdle': '99.33'}, '7': {'cpuUser': '0.40', 'nodeIndex': 1,
> 'cpuSys': '0.20', 'cpuIdle': '99.40'}, '6': {'cpuUser': '0.67',
> 'nodeIndex': 0, 'cpuSys': '0.20', 'cpuIdle': '99.13'}, '9': {'cpuUser':
> '0.47', 'nodeIndex': 1, 'cpuSys': '0.40', 'cpuIdle': '99.13'}, '8':
> {'cpuUser': '0.13', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle': '99.80'},
> '39': {'cpuUser': '0.33', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle':
> '99.54'}, '38': {'cpuUser': '0.07', 'nodeIndex': 0, 'cpuSys': '0.00',
> 'cpuIdle': '99.93'}, '11': {'cpuUser': '0.67', 'nodeIndex': 1, 'cpuSys':
> '0.27', 'cpuIdle': '99.06'}, '10': {'cpuUser': '0.13', 'nodeIndex': 0,
> 'cpuSys': '0.13', 'cpuIdle': '99.74'}, '13': {'cpuUser': '0.07',
> 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle': '99.86'}, '12': {'cpuUser':
> '0.07', 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '99.66'}, '15':
> {'cpuUser': '0.27', 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.60'},
> '14': {'cpuUser': '0.27', 'nodeIndex': 0, 'cpuSys': '0.07', 'cpuIdle':
> '99.66'}, '17': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.27',
> 'cpuIdle': '99.66'}, '16': {'cpuUser': '0.53', 'nodeIndex': 0, 'cpuSys':
> '0.07', 'cpuIdle': '99.40'}, '19': {'cpuUser': '0.00', 'nodeIndex': 1,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '18': {'cpuUser': '1.00',
> 'nodeIndex': 0, 'cpuSys': '0.27', 'cpuIdle': '98.73'}, '31': {'cpuUser':
> '0.00', 'nodeIndex': 1, 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '30':
> {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'},
> '37': {'cpuUser': '0.07', 'nodeIndex': 1, 'cpuSys': '0.07', 'cpuIdle':
> '99.86'}, '36': {'cpuUser': '0.00', 'nodeIndex': 0, 'cpuSys': '0.00',
> 'cpuIdle': '100.00'}, '35': {'cpuUser': '0.20', 'nodeIndex': 1, 'cpuSys':
> '0.33', 'cpuIdle': '99.47'}, '34': {'cpuUser': '0.00', 'nodeIndex': 0,
> 'cpuSys': '0.00', 'cpuIdle': '100.00'}, '33': {'cpuUser': '0.07',
> 'nodeIndex': 1, 'cpuSys': '0.13', 'cpuIdle': '99.80'}, '32': {'cpuUser':
> '0.00', 'nodeIndex': 0, 'cpuSys': '0.00', 'cpuIdle': '100.00'}},
> 'numaNodeMemFree': {'1': {'memPercent': 5, 'memFree': '94165'}, '0':
> {'memPercent': 22, 'memFree': '77122'}}, 'memShared': 0, 'haScore': 3400,
> 'thpState': 'always', 'ksmMergeAcrossNodes': True, 'vmCount': 2, 'memUsed':
> '11', 'storageDomains': {u'8a607f8a-542a-473c-bb18-25c05fe2a3d4': {'code':
> 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000194846',
> 'lastCheck': '2.4', 'valid': True},
> u'37b1a5d7-4e29-4763-9337-63c51dbc5fc8': {'code': 0, 'actual': True,
> 'version': 0, 'acquired': True, 'delay': '0.000277154', 'lastCheck': '6.0',
> 'valid': True}, u'2558679a-2214-466b-8f05-06fdda9146e5': {'code': 0,
> 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000421988',
> 'lastCheck': '2.4', 'valid': True},
> u'640a5875-3d82-43c0-860f-7bb3e4a7e6f0': {'code': 0, 'actual': True,
> 'version': 4, 'acquired': True, 'delay': '0.000228443', 'lastCheck': '2.4',
> 'valid': True}}, 'incomingVmMigrations': 0, 'network': {'em4': {'txErrors':
> '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'em4', 'tx':
> '2160', 'txDropped': '0', 'rx': '261751836', 'rxErrors': '0', 'speed':
> '1000', 'rxDropped': '1'}, 'ovirtmgmt': {'txErrors': '0', 'state': 'up',
> 'sampleTime': 1562744396.40508, 'name': 'ovirtmgmt', 'tx': '193005142',
> 'txDropped': '0', 'rx': '4300879104', 'rxErrors': '0', 'speed': '1000',
> 'rxDropped': '478'}, 'restores': {'txErrors': '0', 'state': 'up',
> 'sampleTime': 1562744396.40508, 'name': 'restores', 'tx': '1362',
> 'txDropped': '0', 'rx': '226442665', 'rxErrors': '0', 'speed': '1000',
> 'rxDropped': '478'}, 'em2': {'txErrors': '0', 'state': 'down',
> 'sampleTime': 1562744396.40508, 'name': 'em2', 'tx': '0', 'txDropped': '0',
> 'rx': '0', 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'vnet0':
> {'txErrors': '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name':
> 'vnet0', 'tx': '2032610435', 'txDropped': '686', 'rx': '4287479548',
> 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, ';vdsmdummy;':
> {'txErrors': '0', 'state': 'down', 'sampleTime': 1562744396.40508, 'name':
> ';vdsmdummy;', 'tx': '0', 'txDropped': '0', 'rx': '0', 'rxErrors': '0',
> 'speed': '1000', 'rxDropped': '0'}, 'em1': {'txErrors': '0', 'state': 'up',
> 'sampleTime': 1562744396.40508, 'name': 'em1', 'tx': '4548433238',
> 'txDropped': '0', 'rx': '6476729588', 'rxErrors': '0', 'speed': '1000',
> 'rxDropped': '1'}, 'em3': {'txErrors': '0', 'state': 'down', 'sampleTime':
> 1562744396.40508, 'name': 'em3', 'tx': '0', 'txDropped': '0', 'rx': '0',
> 'rxErrors': '0', 'speed': '1000', 'rxDropped': '0'}, 'lo': {'txErrors':
> '0', 'state': 'up', 'sampleTime': 1562744396.40508, 'name': 'lo', 'tx':
> '397962377', 'txDropped': '0', 'rx': '397962377', 'rxErrors': '0', 'speed':
> '1000', 'rxDropped': '0'}, 'vnet1': {'txErrors': '0', 'state': 'up',
> 'sampleTime': 1562744396.40508, 'name': 'vnet1', 'tx': '526185708',
> 'txDropped': '0', 'rx': '118512222', 'rxErrors': '0', 'speed': '1000',
> 'rxDropped': '0'}}, 'txDropped': '686', 'anonHugePages': '18532',
> 'ksmPages': 100, 'elapsedTime': '85176.64', 'cpuLoad': '0.06', 'cpuSys':
> '0.17', 'diskStats': {'/var/log': {'free': '6850'}, '/var/run/vdsm/':
> {'free': '96410'}, '/tmp': {'free': '1825'}}, 'cpuUserVdsmd': '1.07',
> 'netConfigDirty': 'False', 'memCommitted': 24706, 'ksmState': False,
> 'vmMigrating': 0, 'ksmCpu': 0, 'memAvailable': 166010, 'bootTime':
> '1562659184', 'haStats': {'active': True, 'configured': True, 'score':
> 3400, 'localMaintenance': False, 'globalMaintenance': False}, 'momStatus':
> 'active', 'multipathHealth': {}, 'rxDropped': '958',
> 'outgoingVmMigrations': 0, 'swapTotal': 4095, 'swapFree': 4095,
> 'hugepages': defaultdict(<type 'dict'>, {1048576: {'resv_hugepages': 0,
> 'free_hugepages': 0, 'nr_overcommit_hugepages': 0, 'surplus_hugepages': 0,
> 'vm.free_hugepages': 0, 'nr_hugepages': 0, 'nr_hugepages_mempolicy': 0},
> 2048: {'resv_hugepages': 0, 'free_hugepages': 0, 'nr_overcommit_hugepages':
> 0, 'surplus_hugepages': 0, 'vm.free_hugepages': 0, 'nr_hugepages': 0,
> 'nr_hugepages_mempolicy': 0}}), 'dateTime': '2019-07-10T07:39:57 GMT',
> 'cpuUser': '0.44', 'memFree': 172451, 'cpuIdle': '99.39', 'vmActive': 2,
> 'v2vJobs': {}, 'cpuSysVdsmd': '0.60'}} from=::ffff:10.0.1.1,57934 (api:52)
> 2019-07-10 09:39:58,004+0200 INFO  (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC
> call Host.getStats succeeded in 0.01 seconds (__init__:573)
>
> Please let me know if you need further info.
>
> Thank you.
>
> Regards.
>
> Neil Wilson
>
>
> On Tue, Jul 9, 2019 at 5:52 PM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>> Can you share the engine.log please? And highlight the exact time when
>> you attempt that migrate action.
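>>
>> Something like this should trim it to the relevant window (assuming the
>> default path on the engine; the timestamp is only an example):
>>
>>   grep '^2019-07-09 16:4' /var/log/ovirt-engine/engine.log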
>>
>> Thanks,
>> michal
>>
>> > On 9 Jul 2019, at 16:42, Neil <nwilson...@gmail.com> wrote:
>> >
>> >
>> > I remember seeing the bug earlier, but because it was closed I thought
>> > it was unrelated. This appears to be it...
>> >
>> > https://bugzilla.redhat.com/show_bug.cgi?id=1670701
>> >
>> > Perhaps I'm not understanding your question about the VM guest agent,
>> > but I don't have any guest agent currently installed on the VM. Maybe
>> > the output of my qemu-kvm process answers this question?...
>> >
>> > /usr/libexec/qemu-kvm -name guest=Headoffice.cbl-ho.local,debug-threads=on
>> > -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-Headoffice.cbl-ho.lo/master-key.aes
>> > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off
>> > -cpu Broadwell,vme=on,f16c=on,rdrand=on,hypervisor=on,arat=on,xsaveopt=on,abm=on,rtm=on,hle=on
>> > -m 8192 -realtime mlock=off -smp 8,maxcpus=64,sockets=16,cores=4,threads=1
>> > -numa node,nodeid=0,cpus=0-7,mem=8192
>> > -uuid 9a6561b8-5702-43dc-9e92-1dc5dfed4eef
>> > -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-3.1611.el7.centos,serial=4C4C4544-0034-5810-8033-C2C04F4E4B32,uuid=9a6561b8-5702-43dc-9e92-1dc5dfed4eef
>> > -no-user-config -nodefaults
>> > -chardev socket,id=charmonitor,fd=31,server,nowait
>> > -mon chardev=charmonitor,id=monitor,mode=control
>> > -rtc base=2019-07-09T10:26:53,driftfix=slew
>> > -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on
>> > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
>> > -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4
>> > -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5
>> > -drive if=none,id=drive-ide0-1-0,readonly=on
>> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>> > -drive file=/rhev/data-center/59831b91-00a5-01e4-0294-000000000018/8a607f8a-542a-473c-bb18-25c05fe2a3d4/images/56e8240c-a172-4f52-b0c1-2bddc4f34f93/9f245467-d31d-4f5a-8037-7c5012a4aa84,format=qcow2,if=none,id=drive-virtio-disk0,serial=56e8240c-a172-4f52-b0c1-2bddc4f34f93,werror=stop,rerror=stop,cache=none,aio=native
>> > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on
>> > -netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34
>> > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:5b,bus=pci.0,addr=0x3
>> > -chardev socket,id=charchannel0,fd=35,server,nowait
>> > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
>> > -chardev socket,id=charchannel1,fd=36,server,nowait
>> > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
>> > -chardev spicevmc,id=charchannel2,name=vdagent
>> > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
>> > -spice tls-port=5900,addr=10.0.1.11,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
>> > -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
>> > -incoming defer
>> > -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
>> > -object rng-random,id=objrng0,filename=/dev/urandom
>> > -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x8
>> > -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny
>> > -msg timestamp=on
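>> >
>> > One note on that command line: the virtserialport with
>> > name=org.qemu.guest_agent.0 only means the agent channel is exposed to
>> > the guest; it doesn't mean an agent is running inside. If I have the
>> > virsh syntax right (a sketch, untested here; virsh on an oVirt host
>> > will ask for the vdsm SASL credentials), you can test from the host:
>> >
>> >   # returns {"return":{}} only if a qemu-guest-agent answers inside the guest
>> >   virsh qemu-agent-command Headoffice.cbl-ho.local '{"execute":"guest-ping"}'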
>> >
>> > Please shout if you need further info.
>> >
>> > Thanks.
>> >
>> > On Tue, Jul 9, 2019 at 4:17 PM Strahil Nikolov <hunter86...@yahoo.com>
>> > wrote:
>> >
>> >> Shouldn't cause that problem.
>> >>
>> >> You have to find the bug in Bugzilla and report a regression (if it's
>> >> not closed), or open a new one and report the regression.
>> >> As far as I remember, only the dashboard was affected, due to new
>> >> features about VDO disk savings.
>> >>
>> >> About the VM - this should be another issue. What agent are you using
>> >> in the VMs (ovirt or qemu)?
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >> On Tuesday, 9 July 2019 at 10:09:05 GMT-4, Neil <nwilson...@gmail.com>
>> >> wrote:
>> >>
>> >>
>> >> Hi Strahil,
>> >>
>> >> Thanks for the quick reply.
>> >> I put the cluster into global maintenance, installed the 4.3 repo, then
>> >> ran "yum update ovirt\*setup\*", "engine-upgrade-check", "engine-setup",
>> >> and finally "yum update". Once that completed, I rebooted the
>> >> hosted-engine VM and took the cluster out of global maintenance.
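>> >>
>> >> In command form that was roughly the following (the repo RPM URL is
>> >> from memory, so double-check it):
>> >>
>> >>   hosted-engine --set-maintenance --mode=global
>> >>   yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
>> >>   yum update ovirt\*setup\*
>> >>   engine-upgrade-check
>> >>   engine-setup
>> >>   yum update
>> >>   # rebooted the hosted-engine VM, then:
>> >>   hosted-engine --set-maintenance --mode=none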
>> >>
>> >> Thinking back to the upgrade from 4.1 to 4.2, I don't recall doing a
>> >> "yum update" after engine-setup; could that perhaps be the cause?
>> >>
>> >> Thank you.
>> >> Regards.
>> >> Neil Wilson.
>> >>
>> >> On Tue, Jul 9, 2019 at 3:47 PM Strahil Nikolov <hunter86...@yahoo.com>
>> >> wrote:
>> >>
>> >> Hi Neil,
>> >>
>> >> for "Could not fetch data needed for VM migrate operation" - there was
>> a
>> >> bug and it was fixed.
>> >> Are you sure you have fully updated?
>> >> What procedure did you use ?
>> >>
>> >> Best Regards,
>> >> Strahil Nikolov
>> >>
>> >> On Tuesday, 9 July 2019 at 7:26:21 GMT-4, Neil <nwilson...@gmail.com>
>> >> wrote:
>> >>
>> >>
>> >> Hi guys.
>> >>
>> >> I have two problems since upgrading from 4.2.x to 4.3.4
>> >>
>> >> The first issue is that I can no longer manually migrate VMs between
>> >> hosts. I get an error in the oVirt GUI that says "Could not fetch data
>> >> needed for VM migrate operation", and nothing gets logged in either my
>> >> engine.log or my vdsm.log.
>> >>
>> >> The other issue is that my Dashboard says: "Error! Could not fetch
>> >> dashboard data. Please ensure that data warehouse is properly installed
>> >> and configured."
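>> >>
>> >> For reference, I'm restarting and checking the service like this
>> >> (standard unit and log names, as far as I know):
>> >>
>> >>   systemctl restart ovirt-engine-dwhd
>> >>   systemctl status ovirt-engine-dwhd
>> >>   less /var/log/ovirt-engine-dwh/ovirt-engine-dwhd.log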
>> >>
>> >> If I look at my ovirt-engine-dwhd.log, I see the following when I try
>> >> to restart the dwh service...
>> >>
>> >> 2019-07-09 11:48:04|ETL Service Started
>> >> ovirtEngineDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> hoursToKeepDaily|0
>> >> hoursToKeepHourly|720
>> >> ovirtEngineDbPassword|**********************
>> >> runDeleteTime|3
>> >> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> runInterleave|60
>> >> limitRows|limit 1000
>> >> ovirtEngineHistoryDbUser|ovirt_engine_history
>> >> ovirtEngineDbUser|engine
>> >> deleteIncrement|10
>> >> timeBetweenErrorEvents|300000
>> >> hoursToKeepSamples|24
>> >> deleteMultiplier|1000
>> >> lastErrorSent|2011-07-03 12:46:47.000000
>> >> etlVersion|4.3.0
>> >> dwhAggregationDebug|false
>> >> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> >> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbPassword|**********************
>> >> 2019-07-09 11:48:10|ETL Service Stopped
>> >> 2019-07-09 11:49:59|ETL Service Started
>> >> ovirtEngineDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> hoursToKeepDaily|0
>> >> hoursToKeepHourly|720
>> >> ovirtEngineDbPassword|**********************
>> >> runDeleteTime|3
>> >> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> runInterleave|60
>> >> limitRows|limit 1000
>> >> ovirtEngineHistoryDbUser|ovirt_engine_history
>> >> ovirtEngineDbUser|engine
>> >> deleteIncrement|10
>> >> timeBetweenErrorEvents|300000
>> >> hoursToKeepSamples|24
>> >> deleteMultiplier|1000
>> >> lastErrorSent|2011-07-03 12:46:47.000000
>> >> etlVersion|4.3.0
>> >> dwhAggregationDebug|false
>> >> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> >> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbPassword|**********************
>> >> 2019-07-09 11:52:56|ETL Service Stopped
>> >> 2019-07-09 11:52:57|ETL Service Started
>> >> ovirtEngineDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> hoursToKeepDaily|0
>> >> hoursToKeepHourly|720
>> >> ovirtEngineDbPassword|**********************
>> >> runDeleteTime|3
>> >> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> runInterleave|60
>> >> limitRows|limit 1000
>> >> ovirtEngineHistoryDbUser|ovirt_engine_history
>> >> ovirtEngineDbUser|engine
>> >> deleteIncrement|10
>> >> timeBetweenErrorEvents|300000
>> >> hoursToKeepSamples|24
>> >> deleteMultiplier|1000
>> >> lastErrorSent|2011-07-03 12:46:47.000000
>> >> etlVersion|4.3.0
>> >> dwhAggregationDebug|false
>> >> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> >> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbPassword|**********************
>> >> 2019-07-09 12:16:01|ETL Service Stopped
>> >> 2019-07-09 12:16:45|ETL Service Started
>> >> ovirtEngineDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbJdbcConnection|jdbc:postgresql://localhost:5432/ovirt_engine_history?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> hoursToKeepDaily|0
>> >> hoursToKeepHourly|720
>> >> ovirtEngineDbPassword|**********************
>> >> runDeleteTime|3
>> >> ovirtEngineDbJdbcConnection|jdbc:postgresql://localhost:5432/engine?sslfactory=org.postgresql.ssl.NonValidatingFactory
>> >> runInterleave|60
>> >> limitRows|limit 1000
>> >> ovirtEngineHistoryDbUser|ovirt_engine_history
>> >> ovirtEngineDbUser|engine
>> >> deleteIncrement|10
>> >> timeBetweenErrorEvents|300000
>> >> hoursToKeepSamples|24
>> >> deleteMultiplier|1000
>> >> lastErrorSent|2011-07-03 12:46:47.000000
>> >> etlVersion|4.3.0
>> >> dwhAggregationDebug|false
>> >> dwhUuid|dca0ebd3-c58f-4389-a1f8-6aecc20b1316
>> >> ovirtEngineHistoryDbDriverClass|org.postgresql.Driver
>> >> ovirtEngineHistoryDbPassword|**********************
>> >>
>> >> I have a hosted engine, two hosts, and my storage is FC based. The
>> >> hosts are still running on 4.2 because I'm unable to migrate VMs off