Hi,
let's retry with today's snapshots before investigating this (I will of course investigate and fix), because I had one green OST run before merging the patches:
http://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/1533/parameters/

The caveat is that OST run 1533 includes both patches 81307 and 83641; with both, it should be fine, but with just 83641, I reckon it can break.
I'll run plenty of OST tests today, so we will soon see if it is indeed broken or if it was just a glitch.

Bests,

On 11/07/2017 05:32 PM, Dafna Ron wrote:
>
> Hi,
>
> We have a failure in test 004_basic_sanity.disk_operations.
>
> From the log I can see that vdsm has a KeyError: 'diskReplicate' and
> then reports the drive not found.
>
> Can you please check the issue?
>
> Link to suspected patches: https://gerrit.ovirt.org/#/c/83641/
>
> Link to Job:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3653
>
> Link to all logs:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3653/artifact/
>
> engine log:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3653/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>
> vdsm log:
> http://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/3653/artifact/exported-artifacts/basic-suit-master-el7/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-0/_var_log/vdsm/vdsm.log
>
> (Relevant) error snippet from the log:
>
> <error>
>
> From engine log:
>
> 2017-11-07 09:20:42,396-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Failed in 'VmReplicateDiskFinishVDS' method
> 2017-11-07 09:20:42,399-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), VDSM lago-basic-suite-master-host-0 command VmReplicateDiskFinishVDS failed: Drive image file could not be found
> 2017-11-07 09:20:42,399-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Command 'org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand' return value 'StatusOnlyReturn [status=Status [code=13, message=Drive image file could not be found]]'
> 2017-11-07 09:20:42,399-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] HostName = lago-basic-suite-master-host-0
> 2017-11-07 09:20:42,399-05 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Command 'VmReplicateDiskFinishVDSCommand(HostName = lago-basic-suite-master-host-0, VmReplicateDiskParameters:{hostId='038b0c9a-f64c-4aa7-8e36-72e435771dab', vmId='61d328bc-c920-4691-997f-09bbecb5de10', storagePoolId='b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', srcStorageDomainId='4a088fb2-5497-4b4d-adec-4c91d998c56e', targetStorageDomainId='4a088fb2-5497-4b4d-adec-4c91d998c56e', imageGroupId='36837a9b-6396-4d65-a354-df97ce808f01', imageId='8a1660d7-f153-4aca-9f25-c1184c10841a'})' execution failed: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskFinishVDS, error = Drive image file could not be found, code = 13
> 2017-11-07 09:20:42,399-05 DEBUG [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Exception: org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException: VDSGenericException: VDSErrorException: Failed to VmReplicateDiskFinishVDS, error = Drive image file could not be found, code = 13
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createDefaultConcreteException(VdsBrokerCommand.java:81) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.createException(BrokerCommandBase.java:223) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:193) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand.executeVdsBrokerCommand(VmReplicateDiskFinishVDSCommand.java:15) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:112) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:73) [vdsbroker.jar:]
>         at org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) [dal.jar:]
>         at org.ovirt.engine.core.vdsbroker.vdsbroker.DefaultVdsCommandExecutor.execute(DefaultVdsCommandExecutor.java:14) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:387) [vdsbroker.jar:]
>         at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand$$super(Unknown Source) [vdsbroker.jar:]
>         at sun.reflect.GeneratedMethodAccessor244.invoke(Unknown Source) [:1.8.0_151]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>         at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>         at org.jboss.weld.interceptor.proxy.TerminalAroundInvokeInvocationContext.proceedInternal(TerminalAroundInvokeInvocationContext.java:49) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.interceptor.proxy.AroundInvokeInvocationContext.proceed(AroundInvokeInvocationContext.java:77) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.ovirt.engine.core.common.di.interceptor.LoggingInterceptor.apply(LoggingInterceptor.java:12) [common.jar:]
>         at sun.reflect.GeneratedMethodAccessor69.invoke(Unknown Source) [:1.8.0_151]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_151]
>         at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_151]
>         at org.jboss.weld.interceptor.reader.SimpleInterceptorInvocation$SimpleMethodInvocation.invoke(SimpleInterceptorInvocation.java:73) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeAroundInvoke(InterceptorMethodHandler.java:84) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.executeInterception(InterceptorMethodHandler.java:72) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.interceptor.proxy.InterceptorMethodHandler.invoke(InterceptorMethodHandler.java:56) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:79) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.jboss.weld.bean.proxy.CombinedInterceptorAndDecoratorStackMethodHandler.invoke(CombinedInterceptorAndDecoratorStackMethodHandler.java:68) [weld-core-impl-2.4.3.Final.jar:2.4.3.Final]
>         at org.ovirt.engine.core.vdsbroker.ResourceManager$Proxy$_$$_WeldSubclass.runVdsCommand(Unknown Source) [vdsbroker.jar:]
>         at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.replicateDiskFinish(LiveMigrateDiskCommand.java:374) [bll.jar:]
>         at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.handleDestDisk(LiveMigrateDiskCommand.java:681) [bll.jar:]
>         at org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand.endWithFailure(LiveMigrateDiskCommand.java:337) [bll.jar:]
>         at org.ovirt.engine.core.bll.CommandBase.internalEndWithFailure(CommandBase.java:783) [bll.jar:]
>         at org.ovirt.engine.core.bll.CommandBase.endActionInTransactionScope(CommandBase.java:712) [bll.jar:]
>         at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:1989) [bll.jar:]
>         at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:]
>         at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInRequired(TransactionSupport.java:137) [utils.jar:]
>         at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:105) [utils.jar:]
>         at org.ovirt.engine.core.bll.CommandBase.endAction(CommandBase.java:574) [bll.jar:]
>         at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.endAction(ChildCommandsCallbackBase.java:125) [bll.jar:]
>         at org.ovirt.engine.core.bll.ChildCommandsCallbackBase.onFailed(ChildCommandsCallbackBase.java:156) [bll.jar:]
>         at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.endCallback(CommandCallbacksPoller.java:65) [bll.jar:]
>         at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethodsImpl(CommandCallbacksPoller.java:141) [bll.jar:]
>         at org.ovirt.engine.core.bll.tasks.CommandCallbacksPoller.invokeCallbackMethods(CommandCallbacksPoller.java:106) [bll.jar:]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_151]
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_151]
>         at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:]
>         at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_151]
> :
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_151]
>         at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151]
>         at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:]
>         at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78)
>
> 2017-11-07 09:20:42,400-05 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmReplicateDiskFinishVDSCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] FINISH, VmReplicateDiskFinishVDSCommand, log id: 32c5a56e
> 2017-11-07 09:20:42,400-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] method: runVdsCommand, params: [VmReplicateDiskFinish, VmReplicateDiskParameters:{hostId='038b0c9a-f64c-4aa7-8e36-72e435771dab', vmId='61d328bc-c920-4691-997f-09bbecb5de10', storagePoolId='b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', srcStorageDomainId='4a088fb2-5497-4b4d-adec-4c91d998c56e', targetStorageDomainId='4a088fb2-5497-4b4d-adec-4c91d998c56e', imageGroupId='36837a9b-6396-4d65-a354-df97ce808f01', imageId='8a1660d7-f153-4aca-9f25-c1184c10841a'}], timeElapsed: 12ms
> 2017-11-07 09:20:42,400-05 ERROR [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Replication end of disk '36837a9b-6396-4d65-a354-df97ce808f01' in vm '61d328bc-c920-4691-997f-09bbecb5de10' back to the source failed, skipping deletion of the target disk
> 2017-11-07 09:20:42,408-05 WARN [org.ovirt.engine.core.bll.lock.InMemoryLockManager] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Trying to release exclusive lock which does not exist, lock key: '61d328bc-c920-4691-997f-09bbecb5de10LIVE_STORAGE_MIGRATION'
> 2017-11-07 09:20:42,408-05 INFO [org.ovirt.engine.core.bll.storage.lsm.LiveMigrateDiskCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] Lock freed to object 'EngineLock:{exclusiveLocks='[61d328bc-c920-4691-997f-09bbecb5de10=LIVE_STORAGE_MIGRATION]', sharedLocks=''}'
> 2017-11-07 09:20:42,428-05 DEBUG [org.ovirt.engine.core.common.di.interceptor.DebugLoggingInterceptor] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] method: get, params: [038b0c9a-f64c-4aa7-8e36-72e435771dab], timeElapsed: 2ms
> 2017-11-07 09:20:42,430-05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engineScheduled-Thread-72) [4bcdd94e] EVENT_ID: USER_MOVED_DISK_FINISHED_FAILURE(2,011), User admin@internal-authz have failed to move disk vm0_disk0 to domain iscsi.
>
> vdsm log:
>
> 2017-11-07 09:20:41,178-0500 INFO (jsonrpc/6) [vdsm.api] FINISH getVolumeSize return={'truesize': '2415919104', 'apparentsize': '2415919104'} from=::ffff:192.168.204.4,49358, flow_id=4bcdd94e, task_id=042481eb-74ac-42a8-9d1e-8fb1c84db234 (api:52)
> 2017-11-07 09:20:41,193-0500 ERROR (jsonrpc/6) [api] FINISH diskReplicateFinish error='diskReplicate' (api:127)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 117, in method
>     ret = func(*args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 558, in diskReplicateFinish
>     return self.vm.diskReplicateFinish(srcDisk, dstDisk)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4477, in diskReplicateFinish
>     self._delDiskReplica(drive)
>   File "/usr/lib/python2.7/site-packages/vdsm/virt/vm.py", line 4528, in _delDiskReplica
>     del dev['diskReplicate']
> KeyError: 'diskReplicate'
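Side note on the traceback above: this is just Python's normal behaviour for `del` on a dict key that is missing, so any path where the drive's replication metadata was already removed (or never attached) will crash exactly like this. Below is a minimal standalone sketch of the failure mode and of a tolerant cleanup; the helper names are made up for illustration and this is not vdsm's actual code or the proposed fix:

    # Hypothetical sketch, not vdsm code: reproduce the KeyError from the
    # traceback and show a cleanup that tolerates a missing 'diskReplicate'.

    def del_disk_replica_strict(dev):
        # Same pattern as in the traceback: raises KeyError when the
        # 'diskReplicate' entry was already removed (or never set).
        del dev['diskReplicate']

    def del_disk_replica_tolerant(dev):
        # dict.pop() with a default removes the entry if present and is a
        # no-op otherwise, so a repeated or late diskReplicateFinish call
        # cannot crash at this step.
        dev.pop('diskReplicate', None)

    drive = {'name': 'vda'}           # replication metadata already gone
    del_disk_replica_tolerant(drive)  # no error
    try:
        del_disk_replica_strict(drive)
    except KeyError as exc:
        print('KeyError:', exc)       # -> KeyError: 'diskReplicate'

Whether a tolerant cleanup is the right fix, or whether the second diskReplicateFinish call should not be happening at all, is exactly what needs investigating.
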
> 2017-11-07 09:20:41,195-0500 INFO (jsonrpc/6) [api.virt] FINISH diskReplicateFinish return={'status': {'message': 'General Exception: ("\'diskReplicate\'",)', 'code': 100}} from=::ffff:192.168.204.4,49358, flow_id=4bcdd94e (api:52)
> 2017-11-07 09:20:41,195-0500 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call VM.diskReplicateFinish failed (error 100) in 0.79 seconds (__init__:630)
> 2017-11-07 09:20:41,253-0500 INFO (jsonrpc/2) [vdsm.api] START finalizeMerge(spUUID=u'b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', subchainInfo={u'img_id': u'8401cd0f-419c-417c-8da3-52da285fa47c', u'sd_id': u'4a088fb2-5497-4b4d-adec-4c91d998c56e', u'top_id': u'7a295270-fdf5-4561-b688-033c5d312d1a', u'base_id': u'e77f8714-9e0f-4f39-8437-e75a3fa44c74'}) from=::ffff:192.168.204.4,49380, flow_id=56c9f051-c091-49cd-9fd5-cc98e92ef05f, task_id=9ce348bc-e3fb-4f4e-aea0-f8c79e7ae204 (api:46)
> 2017-11-07 09:20:41,267-0500 INFO (jsonrpc/2) [vdsm.api] FINISH finalizeMerge return=None from=::ffff:192.168.204.4,49380, flow_id=56c9f051-c091-49cd-9fd5-cc98e92ef05f, task_id=9ce348bc-e3fb-4f4e-aea0-f8c79e7ae204 (api:52)
> 2017-11-07 09:20:41,285-0500 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StoragePool.finalizeMerge succeeded in 0.03 seconds (__init__:630)
> 2017-11-07 09:20:41,285-0500 INFO (tasks/8) [storage.ThreadPool.WorkerThread] START task 9ce348bc-e3fb-4f4e-aea0-f8c79e7ae204 (cmd=<bound method Task.commit of <vdsm.storage.task.Task instance at 0x2f0a560>>, args=None) (threadPool:208)
> 2017-11-07 09:20:41,295-0500 INFO (tasks/8) [storage.merge] Finalizing subchain after merge: <SubchainInfo sd_id=4a088fb2-5497-4b4d-adec-4c91d998c56e, img_id=8401cd0f-419c-417c-8da3-52da285fa47c, top_id=7a295270-fdf5-4561-b688-033c5d312d1a, base_id=e77f8714-9e0f-4f39-8437-e75a3fa44c74 base_generation=None at 0x30a7990> (merge:254)
> 2017-11-07 09:20:41,332-0500 INFO (tasks/8) [storage.SANLock] Acquiring Lease(name='e77f8714-9e0f-4f39-8437-e75a3fa44c74', path=u'/rhev/data-center/mnt/192.168.204.4:_exports_nfs_share1/4a088fb2-5497-4b4d-adec-4c91d998c56e/images/8401cd0f-419c-417c-8da3-52da285fa47c/e77f8714-9e0f-4f39-8437-e75a3fa44c74.lease', offset=0) for host id 1 (clusterlock:377)
> 2017-11-07 09:20:41,345-0500 INFO (tasks/8) [storage.SANLock] Successfully acquired Lease(name='e77f8714-9e0f-4f39-8437-e75a3fa44c74', path=u'/rhev/data-center/mnt/192.168.204.4:_exports_nfs_share1/4a088fb2-5497-4b4d-adec-4c91d998c56e/images/8401cd0f-419c-417c-8da3-52da285fa47c/e77f8714-9e0f-4f39-8437-e75a3fa44c74.lease', offset=0) for host id 1 (clusterlock:415)
> 2017-11-07 09:20:41,372-0500 INFO (tasks/8) [storage.VolumeManifest] Volume: preparing volume 4a088fb2-5497-4b4d-adec-4c91d998c56e/e77f8714-9e0f-4f39-8437-e75a3fa44c74 (volume:558)
> 2017-11-07 09:20:41,377-0500 INFO (tasks/8) [storage.VolumeManifest] Volume: preparing volume 4a088fb2-5497-4b4d-adec-4c91d998c56e/7a295270-fdf5-4561-b688-033c5d312d1a (volume:558)
> 2017-11-07 09:20:41,395-0500 INFO (tasks/8) [storage.VolumeManifest] Volume: preparing volume 4a088fb2-5497-4b4d-adec-4c91d998c56e/a6cb7a60-e500-4a75-9da8-9e7cce3e21ac (volume:558)
> 2017-11-07 09:20:41,399-0500 INFO (tasks/8) [storage.VolumeManifest] sdUUID=4a088fb2-5497-4b4d-adec-4c91d998c56e imgUUID=8401cd0f-419c-417c-8da3-52da285fa47c volUUID = 7a295270-fdf5-4561-b688-033c5d312d1a legality = ILLEGAL (volume:399)
> 2017-11-07 09:20:41,667-0500 INFO (tasks/8) [storage.merge] Updating Vdsm metadata, syncing new chain: ['e77f8714-9e0f-4f39-8437-e75a3fa44c74', u'a6cb7a60-e500-4a75-9da8-9e7cce3e21ac'] (merge:311)
> 2017-11-07 09:20:41,690-0500 INFO (tasks/8) [storage.Image] Current chain=e77f8714-9e0f-4f39-8437-e75a3fa44c74 < 7a295270-fdf5-4561-b688-033c5d312d1a < a6cb7a60-e500-4a75-9da8-9e7cce3e21ac (top) (image:1162)
> 2017-11-07 09:20:41,690-0500 INFO (tasks/8) [storage.Image] Unlinking subchain: ['7a295270-fdf5-4561-b688-033c5d312d1a'] (image:1172)
> 2017-11-07 09:20:41,709-0500 INFO (tasks/8) [storage.Image] Setting parent of volume a6cb7a60-e500-4a75-9da8-9e7cce3e21ac to e77f8714-9e0f-4f39-8437-e75a3fa44c74 (image:1185)
> 2017-11-07 09:20:41,824-0500 INFO (tasks/8) [storage.SANLock] Releasing Lease(name='e77f8714-9e0f-4f39-8437-e75a3fa44c74', path=u'/rhev/data-center/mnt/192.168.204.4:_exports_nfs_share1/4a088fb2-5497-4b4d-adec-4c91d998c56e/images/8401cd0f-419c-417c-8da3-52da285fa47c/e77f8714-9e0f-4f39-8437-e75a3fa44c74.lease', offset=0) (clusterlock:435)
> 2017-11-07 09:20:41,828-0500 INFO (tasks/8) [storage.SANLock] Successfully released Lease(name='e77f8714-9e0f-4f39-8437-e75a3fa44c74', path=u'/rhev/data-center/mnt/192.168.204.4:_exports_nfs_share1/4a088fb2-5497-4b4d-adec-4c91d998c56e/images/8401cd0f-419c-417c-8da3-52da285fa47c/e77f8714-9e0f-4f39-8437-e75a3fa44c74.lease', offset=0) (clusterlock:444)
> 2017-11-07 09:20:41,842-0500 INFO (tasks/8) [storage.ThreadPool.WorkerThread] FINISH task 9ce348bc-e3fb-4f4e-aea0-f8c79e7ae204 (threadPool:210)
> 2017-11-07 09:20:42,388-0500 INFO (jsonrpc/4) [api.virt] START diskReplicateFinish(srcDisk={u'device': u'disk', u'poolID': u'b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', u'volumeID': u'8a1660d7-f153-4aca-9f25-c1184c10841a', u'domainID': u'4a088fb2-5497-4b4d-adec-4c91d998c56e', u'imageID': u'36837a9b-6396-4d65-a354-df97ce808f01'}, dstDisk={u'device': u'disk', u'poolID': u'b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', u'volumeID': u'8a1660d7-f153-4aca-9f25-c1184c10841a', u'domainID': u'4a088fb2-5497-4b4d-adec-4c91d998c56e', u'imageID': u'36837a9b-6396-4d65-a354-df97ce808f01'}) from=::ffff:192.168.204.4,49358, flow_id=4bcdd94e (api:46)
> 2017-11-07 09:20:42,388-0500 ERROR (jsonrpc/4) [virt.vm] (vmId='61d328bc-c920-4691-997f-09bbecb5de10') Drive not found (srcDisk: {u'device': u'disk', u'poolID': u'b20f0eb7-6bea-4994-b87b-e1080c4fd9e5', u'volumeID': u'8a1660d7-f153-4aca-9f25-c1184c10841a', u'domainID': u'4a088fb2-5497-4b4d-adec-4c91d998c56e', u'imageID': u'36837a9b-6396-4d65-a354-df97ce808f01'}) (vm:4386)
> 2017-11-07 09:20:42,389-0500 INFO (jsonrpc/4) [api.virt] FINISH diskReplicateFinish return={'status': {'message': 'Drive image file could not be found', 'code': 13}} from=::ffff:192.168.204.4,49358, flow_id=4bcdd94e (api:52)
> 2017-11-07 09:20:42,389-0500 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call VM.diskReplicateFinish failed (error 13) in 0.01 seconds (__init__:630)
> :
>
> </error>

-- 
Francesco Romani
Senior SW Eng., Virtualization R&D
Red Hat
IRC: fromani github: @fromanirh
_______________________________________________
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel