Re: [ovirt-users] Don't start vm
Hi,

> > 1. Create vm with x disks on the DS 3524 through FC (multipathd on vdsm)
> > 2. Create template
> > 3. Create vm (independent) from template
> > 4. Start vm and do a job in the vm
> > 5. Remove template
> > 6. Stop vm
> > 7. The vm does not start - error
>
> Do you mean - start vm fails with an error about a missing lv?

Yes.

> > 8. seek the disk - #lsblk
>
> Can you share the output of lsblk both before and after you stop the vm?

No, since the vm does not start!

> > 9. many commands with block 253:20
>
> Not sure what you mean by that. You did not explain what you mean.

Hm, ?? vgchange

> Note: do *not* activate all lvs using "vgchange -a y"
> Only vdsm should activate its volumes.

OK! If the vm does not start, how can I get the data from it?

> Of course if you need to troubleshoot the system, and the vm is not
> running, there is no problem to access the lv directly.
>
> Even then, you should *not* activate all lvs in a vg using
>
>     vgchange -a y
>
> but activate only the lv you want to access using
>
>     lvchange -a y

Thank you, I did not know that.

> > 10.
> > mount the found lvm in the lvm volume and save the data
>
> Mount? How is mount related to lvm?

The VM disk is an lv on vdsm. I mounted an lv that is inside lvm on the vdsm lv!

> You mean you activated the lv on the host?

Yes, inside the vdsm LVM.

> > 12. reboot all vdsm hosts
> > 13. can't find the ID of that disk! The ID of the disk has changed!
>
> Please share output of lvs both before and after the vm is stopped.

Before:
/dev/9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83

After:
/dev/9d53ecef-8bfc-470b-8867-836bfa7df137/33b905e2-23df-49a9-b772-4ebda3b0cd22

> This is not the output of lvs; these are the symlinks to the active lvs.
>
> Can you share the lvs output before and after the vm is stopped?

No, since the vm does not start!

> > Now the problem disk has received its old ID again (lvdisplay), BUT I
> > have already removed it!
>
> I'm not sure what you mean. Can you share the output of lvdisplay before
> and after the operation?

It is no longer there - I have removed this problem disk.

> > Mysticism!
> >
> > I have 3 disks of 9 GB each; 5 days ago I removed them. I still see them
> > now (lvdisplay on the vdsm host). Why?
>
> Did you update the lvm cache using "pvscan --cache"?

No. Should I run "pvscan --cache" in the console after each operation with disks?

> > In general, my problem began when the Windows 2008 VM ran out of free
> > space.
>
> What do you mean by that?

That is the first problem! The second problem - the vm does not start.

> > I expanded the volume in the web gui. When I began to expand the disk
> > inside the VM - error. lvdisplay on the vdsm host showed the old size of
> > the disk!
>
> Is the volume preallocated or thin provisioned?

All my disks are preallocated.

> Preallocated volumes are extended when you modify the volume size in
> engine ui (as you described).
> Thin provisioned volumes are extended only when the available space is
> below a threshold, so the lv size will not change after you modify the
> volume size.

> > Sometimes everything works normally!
>
> Do you mean that now everything works?

That VM has been removed, since for a week I could not start it! Now everything works on the other VMs. But there were 2 problems! My problem is to understand whether I did something wrong or whether it is a problem in oVirt. I want to use oVirt, but I am not confident in it because of these problems!

> > As I understand it, probably "all is ok" when you work with VMs that are
> > on the SPM host??!
>
> Lost you here.

Agreed, my head is spinning ))

Thanks,
Roma

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
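[Editorial note] Nir's advice in the message above - activate only the one lv with lvchange, never the whole VG - can be sketched as a short recovery session. This is an illustration only: it assumes root on the vdsm host, the VG/LV UUIDs are the ones quoted in this thread, and the backup path is a made-up example.

```shell
# Sketch: copy data out of a single stopped-VM disk lv, then hand
# activation back to vdsm. Guarded so it is a no-op on a machine that
# does not have this VG.
VG=9d53ecef-8bfc-470b-8867-836bfa7df137
LV=fb8466c9-0867-4e73-8362-2c95eea89a83
DEV=/dev/$VG/$LV

if command -v lvchange >/dev/null 2>&1 && [ -e "$DEV" ]; then
    # Activate ONLY this lv -- never "vgchange -a y" on a vdsm VG
    lvchange -a y "$VG/$LV"
    # Copy the raw disk image somewhere safe (path is an example)
    dd if="$DEV" of=/root/vm-disk-backup.img bs=1M
    # Deactivate again so vdsm stays in control of activation
    lvchange -a n "$VG/$LV"
fi
echo "would read from: $DEV"
```

The raw image can then be loop-mounted or attached to another VM to recover the files.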
Re: [ovirt-users] Don't start vm
----- Original Message -----
> From: "Roman Nikolayevich Drovalev"
> To: "Nir Soffer" , "Users@ovirt.org List"
> Sent: Thursday, December 11, 2014 8:38:30 AM
> Subject: Re: [ovirt-users] Don't start vm
>
> Nir Soffer wrote on 11.12.2014 10:02:02:
>
> > Can you describe the steps to reproduce this issue?
> >
> > Guessing from your description:
> > 1. Create vm with x disks
> > 2. Create template
> > 3. Create vm from template
> > 4. Remove template
> > ?
>
> Yes.
> 1. Create vm with x disks on the DS 3524 through FC (multipathd on vdsm)
> 2. Create template
> 3. Create vm (independent) from template
> 4. Start vm and do a job in the vm
> 5. Remove template
> 6. Stop vm
> 7. The vm does not start - error

Do you mean - start vm fails with an error about a missing lv?

> 8. seek the disk - #lsblk

Can you share the output of lsblk both before and after you stop the vm?

> 9. many commands with block 253:20

Not sure what you mean by that.

> (kpartx -l /dev/ .., kpartx -a /dev

You should not use kpartx on vdsm volumes; the host does not care about
partitions in the volumes. Only the guest should access them.

> .., lvm pvscan, lvm vgchange -a y, ...)

On CentOS 7 (and Fedora), lvm now uses the lvmetad daemon to cache lvm
metadata. Unless you refresh the cache or disable usage of the daemon,
lvm commands may return stale data.

To refresh the cache, you can do:

    pvscan --cache

before running commands such as lvs, vgs or pvs.

To disable usage of lvmetad (this is how vdsm operates):

    lvs --config "global {use_lvmetad=0}"

Note: do *not* activate all lvs using "vgchange -a y"
Only vdsm should activate its volumes.

> 10.
> mount the found lvm in the lvm volume and save the data

Mount? How is mount related to lvm?

> 12. reboot all vdsm hosts
> 13. can't find the ID of that disk! The ID of the disk has changed!

Please share output of lvs both before and after the vm is stopped.

Nir
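[Editorial note] The lvmetad advice in Nir's reply above can be collected into one small sketch. The lvm commands need root and a real lvm stack, so the block guards them and is otherwise a no-op; the `cfg` variable is just a shell convenience, not part of any vdsm API.

```shell
# Sketch: getting consistent lvm output on a CentOS 7 vdsm host.
cfg='global { use_lvmetad = 0 }'

if [ "$(id -u)" -eq 0 ] && command -v pvscan >/dev/null 2>&1; then
    pvscan --cache || true       # refresh the lvmetad cache first...
    lvs || true                  # ...then query lvm state
    lvs --config "$cfg" || true  # or bypass lvmetad (vdsm's own mode)
fi
echo "config override used to bypass lvmetad: $cfg"
```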
Re: [ovirt-users] Don't start vm
Nir Soffer wrote on 11.12.2014 10:02:02:

> > Hi,
> >
> > I attach the file. Below is the log from vdsm.log.62.xz
> >
> > The given nonexistent disk has probably appeared after removal of the
> > template from which it was created.
> > BUT it was independent, and before the template removal there were no
> > problems!
> > The disk exists, but its ID has changed!
>
> I don't understand this description.
>
> Can you describe the steps to reproduce this issue?
>
> Guessing from your description:
> 1. Create vm with x disks
> 2. Create template
> 3. Create vm from template
> 4. Remove template
> ?

Yes.
1. Create vm with x disks on the DS 3524 through FC (multipathd on vdsm)
2. Create template
3. Create vm (independent) from template
4. Start vm and do a job in the vm
5. Remove template
6. Stop vm
7. The vm does not start - error
8. seek the disk - #lsblk
9. many commands with block 253:20 (kpartx -l /dev/ .., kpartx -a /dev .., lvm pvscan, lvm vgchange -a y, ...)
10. mount the found lvm in the lvm volume and save the data
12. reboot all vdsm hosts
13. can't find the ID of that disk! The ID of the disk has changed!

> > Nir Soffer wrote on 09.12.2014 15:07:51:
> >
> > > > Hi,
> > > > My config: vdsm host - CentOS 7, oVirt 3.5
> > > >
> > > > > Could you please share from hypervisor the /var/log/vdsm/vdsm.log too?
> > > >
> > > > my /var/log/vdsm/vdsm.log
> > >
> > > We need the full log - please attach here or open a bug and
> > > attach the full log.
> > >
> > > > Thread-283375::DEBUG::2014-12-06
> > > > 21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> > > > response
> > >
> > > You are using jsonrpc - please check if switching to xmlrpc solves
> > > your issue.
> > >
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: = '
> > > > WARNING: lvmetad is running but disabled.
> > > > Restart lvmetad before enabling
> > > > it!\n'; = 0
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm
> > > > reload operation' released the operation mutex
> > > > Thread-283376::WARNING::2014-12-06
> > > > 21:20:40,254::lvm::600::Storage.LVM::(getLv) lv:
> > > > fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg:
> > > > 9d53ecef-8bfc-470b-8867-836bfa7df137 response
> > > > Thread-283376::ERROR::2014-12-06
> > > > 21:20:40,254::task::866::Storage.TaskManager.Task::(_setError)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
> > > > Traceback (most recent call last):
> > > >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > > >     return fn(*args, **kargs)
> > > >   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> > > >     res = f(*args, **kwargs)
> > > >   File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
> > > >     apparentsize = str(dom.getVSize(imgUUID, volUUID))
> > > >   File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
> > > >     size = lvm.getLV(self.sdUUID, volUUID).size
> > > >   File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
> > > >     raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
> > > > LogicalVolumeDoesNotExistError: Logical volume does not exist:
> > > > (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,255::task::885::Storage.TaskManager.Task::(_run)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run:
> > > > cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd
> > > > (u'9d53ecef-8bfc-470b-8867-836bfa7df137',
> > > > u'0002-0002-0002-0002-010b',
> > > > u'7deace0a-0c83-41c8-9122-84079ad949c2',
> > > > u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,255::task::1217::Storage.TaskManager.Task::(stop)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing
> > > > (force False)
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,255::task::993::Storage.TaskManager.Task::(_decref)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
> > > > Thread-283376::INFO::2014-12-06
> > > > 21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted:
> > > > 'Logical volume does not exist' - code 610
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare: aborted: Logical
> > > > volume does not exist
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,256::task::993::Storage.TaskManager.Task::(_decref)
> > > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 0 aborting True
> > > > Thread-283376::DEBUG::2014-12-06
> > > > 21:20:40,256::
Re: [ovirt-users] Don't start vm
----- Original Message -----
> From: "Roman Nikolayevich Drovalev"
> To: "Nir Soffer"
> Sent: Thursday, December 11, 2014 8:29:44 AM
> Subject: Re: [ovirt-users] Don't start vm
>
> Hi,
>
> I attach the file. Below is the log from vdsm.log.62.xz
>
> The given nonexistent disk has probably appeared after removal of the
> template from which it was created.
> BUT it was independent, and before the template removal there were no
> problems!
> The disk exists, but its ID has changed!

I don't understand this description.

Can you describe the steps to reproduce this issue?

Guessing from your description:
1. Create vm with x disks
2. Create template
3. Create vm from template
4. Remove template
?

> Nir Soffer wrote on 09.12.2014 15:07:51:
>
> > > Hi,
> > > My config: vdsm host - CentOS 7, oVirt 3.5
> > >
> > > > Could you please share from hypervisor the /var/log/vdsm/vdsm.log too?
> > >
> > > my /var/log/vdsm/vdsm.log
> >
> > We need the full log - please attach here or open a bug and
> > attach the full log.
> >
> > > Thread-283375::DEBUG::2014-12-06
> > > 21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> > > response
> >
> > You are using jsonrpc - please check if switching to xmlrpc solves
> > your issue.
> >
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: = '
> > > WARNING: lvmetad is running but disabled.
> > > Restart lvmetad before enabling
> > > it!\n'; = 0
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm
> > > reload operation' released the operation mutex
> > > Thread-283376::WARNING::2014-12-06
> > > 21:20:40,254::lvm::600::Storage.LVM::(getLv) lv:
> > > fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg:
> > > 9d53ecef-8bfc-470b-8867-836bfa7df137 response
> > > Thread-283376::ERROR::2014-12-06
> > > 21:20:40,254::task::866::Storage.TaskManager.Task::(_setError)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
> > > Traceback (most recent call last):
> > >   File "/usr/share/vdsm/storage/task.py", line 873, in _run
> > >     return fn(*args, **kargs)
> > >   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
> > >     res = f(*args, **kwargs)
> > >   File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
> > >     apparentsize = str(dom.getVSize(imgUUID, volUUID))
> > >   File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
> > >     size = lvm.getLV(self.sdUUID, volUUID).size
> > >   File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
> > >     raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
> > > LogicalVolumeDoesNotExistError: Logical volume does not exist:
> > > (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,255::task::885::Storage.TaskManager.Task::(_run)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run:
> > > cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd
> > > (u'9d53ecef-8bfc-470b-8867-836bfa7df137',
> > > u'0002-0002-0002-0002-010b',
> > > u'7deace0a-0c83-41c8-9122-84079ad949c2',
> > > u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,255::task::1217::Storage.TaskManager.Task::(stop)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing
> > > (force False)
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,255::task::993::Storage.TaskManager.Task::(_decref)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
> > > Thread-283376::INFO::2014-12-06
> > > 21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted:
> > > 'Logical volume does not exist' - code 610
> > > Thread-283376::DEBUG::2014-12-06
> > > 21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare)
> > > Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare:
Re: [ovirt-users] Don't start vm
----- Original Message -----
> From: "Roman Nikolayevich Drovalev"
> To: Users@ovirt.org
> Sent: Saturday, December 6, 2014 8:28:08 PM
> Subject: [ovirt-users] Don't start vm
>
> Hi,
> My config: vdsm host - CentOS 7, oVirt 3.5
>
> > Could you please share from hypervisor the /var/log/vdsm/vdsm.log too?
>
> my /var/log/vdsm/vdsm.log

We need the full log - please attach here or open a bug and
attach the full log.

> Thread-283375::DEBUG::2014-12-06
> 21:20:40,219::stompReactor::163::yajsonrpc.StompServer::(send) Sending
> response

You are using jsonrpc - please check if switching to xmlrpc solves
your issue.

> Thread-283376::DEBUG::2014-12-06
> 21:20:40,252::lvm::288::Storage.Misc.excCmd::(cmd) SUCCESS: = '
> WARNING: lvmetad is running but disabled. Restart lvmetad before enabling
> it!\n'; = 0
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,253::lvm::454::Storage.LVM::(_reloadlvs) lvs reloaded
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,254::lvm::454::Storage.OperationMutex::(_reloadlvs) Operation 'lvm
> reload operation' released the operation mutex
> Thread-283376::WARNING::2014-12-06
> 21:20:40,254::lvm::600::Storage.LVM::(getLv) lv:
> fb8466c9-0867-4e73-8362-2c95eea89a83 not found in lvs vg:
> 9d53ecef-8bfc-470b-8867-836bfa7df137 response
> Thread-283376::ERROR::2014-12-06
> 21:20:40,254::task::866::Storage.TaskManager.Task::(_setError)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Unexpected error
> Traceback (most recent call last):
>   File "/usr/share/vdsm/storage/task.py", line 873, in _run
>     return fn(*args, **kargs)
>   File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
>     res = f(*args, **kwargs)
>   File "/usr/share/vdsm/storage/hsm.py", line 3099, in getVolumeSize
>     apparentsize = str(dom.getVSize(imgUUID, volUUID))
>   File "/usr/share/vdsm/storage/blockSD.py", line 622, in getVSize
>     size = lvm.getLV(self.sdUUID, volUUID).size
>   File "/usr/share/vdsm/storage/lvm.py", line 893, in getLV
>     raise se.LogicalVolumeDoesNotExistError("%s/%s" % (vgName, lvName))
> LogicalVolumeDoesNotExistError: Logical volume does not exist:
> (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,255::task::885::Storage.TaskManager.Task::(_run)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._run:
> cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd
> (u'9d53ecef-8bfc-470b-8867-836bfa7df137',
> u'0002-0002-0002-0002-010b',
> u'7deace0a-0c83-41c8-9122-84079ad949c2',
> u'fb8466c9-0867-4e73-8362-2c95eea89a83') {} failed - stopping task
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,255::task::1217::Storage.TaskManager.Task::(stop)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::stopping in state preparing
> (force False)
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,255::task::993::Storage.TaskManager.Task::(_decref)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 1 aborting True
> Thread-283376::INFO::2014-12-06
> 21:20:40,255::task::1171::Storage.TaskManager.Task::(prepare)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::aborting: Task is aborted:
> 'Logical volume does not exist' - code 610
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,255::task::1176::Storage.TaskManager.Task::(prepare)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Prepare: aborted: Logical
> volume does not exist
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::task::993::Storage.TaskManager.Task::(_decref)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::ref 0 aborting True
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::task::928::Storage.TaskManager.Task::(_doAbort)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::Task._doAbort: force False
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::resourceManager::977::Storage.ResourceManager.Owner::(cancelAll)
> Owner.cancelAll requests {}
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state preparing ->
> state aborting
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::task::550::Storage.TaskManager.Task::(__state_aborting)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::_aborting: recover policy none
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,256::task::595::Storage.TaskManager.Task::(_updateState)
> Task=`cb86d3c3-77f7-46c8-aec0-4c848f1eb2cd`::moving from state aborting ->
> state failed
> Thread-283376::DEBUG::2014-12-06
> 21:20:40,257::resourceManager::940::Storage.ResourceManager.Owner::(releaseAll)
> Owner.releaseAll requests {} resources {}
> Thread-283376
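[Editorial note] To go from the traceback above to something checkable on the host, the missing VG/LV pair can be cut out of the error line with standard tools. A small sketch - the log line is quoted from the traceback above; the final lvs invocation is shown only as a comment because it needs root on the vdsm host:

```shell
# Extract "VG/LV" from the LogicalVolumeDoesNotExistError line.
line="LogicalVolumeDoesNotExistError: Logical volume does not exist: (u'9d53ecef-8bfc-470b-8867-836bfa7df137/fb8466c9-0867-4e73-8362-2c95eea89a83',)"

# The error embeds the pair as (u'VG/LV',); capture what is between the quotes.
vg_lv=$(printf '%s\n' "$line" | sed -n "s/.*(u'\([^']*\)',).*/\1/p")
echo "missing lv: $vg_lv"

# Then, on the host (root required), check whether the lv really exists:
#   lvs --config 'global {use_lvmetad=0}' "$vg_lv"
```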
[ovirt-users] Don't start vm
Hi,

Please help.

I stopped my virtual machine normally, but it does not start!

In the logs:

2014-12-05 09:38:06,437 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-87) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM Cent is down with error. Exit message: ('Failed to get size for volume %s', u'fb8466c9-0867-4e73-8362-2c95eea89a83').
2014-12-05 09:38:06,439 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) Running on vds during rerun failed vm: null
2014-12-05 09:38:06,447 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) VM Cent (d1ccb04d-bda8-42a2-bab6-7def2f8b2a00) is running in db and not running in VDS x3550m2down
2014-12-05 09:38:06,475 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) Rerun vm d1ccb04d-bda8-42a2-bab6-7def2f8b2a00. Called from vds x3550m2down
2014-12-05 09:38:06,482 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID: 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM Cent on Host x3550m2down
2014-12-05 09:38:06,486 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) Lock Acquired to object EngineLock [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM , sharedLocks= ]
2014-12-05 09:38:06,504 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-16) START, IsVmDuringInitiatingVDSCommand( vmId = d1ccb04d-bda8-42a2-bab6-7def2f8b2a00), log id: 2e257f81
2014-12-05 09:38:06,505 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-16) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2e257f81
2014-12-05 09:38:06,509 WARN [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS
2014-12-05 09:38:06,510 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) Lock freed to object EngineLock [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM , sharedLocks= ]
2014-12-05 09:38:06,539 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID: 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM Cent (User: admin).
2014-12-05 09:38:06,548 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-8-thread-27) [58fe3e35] Running command: ProcessDownVmCommand internal: true.

What should I do?

Roman
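[Editorial note] The engine error above names the volume that cannot be sized; that UUID is the thing to look up on the vdsm host. A minimal sketch of extracting it from the quoted message (the message text is copied from the log above):

```shell
# Pull the volume UUID out of the "Failed to get size for volume" message.
msg="Exit message: ('Failed to get size for volume %s', u'fb8466c9-0867-4e73-8362-2c95eea89a83')."

# The UUID is the only u'...'-quoted hex token in the message.
vol=$(printf '%s\n' "$msg" | grep -o "u'[0-9a-f-]*'" | tr -d "u'")
echo "volume uuid: $vol"
```

On the host, that UUID is the lv name inside the storage-domain VG, which is where the later replies in this thread pick up.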
[ovirt-users] Don't start vm
ef2f8b2a00`::_ongoingCreations released
Thread-283376::ERROR::2014-12-06
21:20:40,257::vm::2326::vm.Vm::(_startUnderlyingVm)
vmId=`d1ccb04d-bda8-42a2-bab6-7def2f8b2a00`::The vm start process failed
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 2266, in _startUnderlyingVm
    self._run()
  File "/usr/share/vdsm/virt/vm.py", line 3301, in _run
    devices = self.buildConfDevices()
  File "/usr/share/vdsm/virt/vm.py", line 2063, in buildConfDevices
    self._normalizeVdsmImg(drv)
  File "/usr/share/vdsm/virt/vm.py", line 1986, in _normalizeVdsmImg
    drv['volumeID'])
StorageUnavailableError: ('Failed to get size for volume %s', u'fb8466c9-0867-4e73-8362-2c95eea89a83')
Thread-283376::DEBUG::2014-12-06 21:20:40,260::vm::2838::vm.Vm::(setDownStatus) vmId=`d1ccb04d-bda8-42a2-bab6-7def2f8b2a00`::Changed state to Down: ('Failed to get size for volume %s', u'fb8466c9-0867-4e73-8362-2c95eea89a83') (code=1)
JsonRpc (StompReactor)::DEBUG::2014-12-06 21:20:41,089::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message
JsonRpcServer::DEBUG::2014-12-06 21:20:41,091::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-283378::DEBUG::2014-12-06 21:20:41,097::stompReactor::163::yajsonrpc.StompServer::(send) Sending response
JsonRpc (StompReactor)::DEBUG::2014-12-06 21:20:41,101::stompReactor::98::Broker.StompAdapter::(handle_frame) Handling message
JsonRpcServer::DEBUG::2014-12-06 21:20:41,103::__init__::504::jsonrpc.JsonRpcServer::(serve_requests) Waiting for request
Thread-283379::DEBUG::2014-12-06 21:20:41,108::vm::486::vm.Vm::(_getUserCpuTuneInfo) vmId=`c66e3966-a190-4cb1-8677-3d49d29cedc9`::Domain Metadata is not set
Thread-283379::DEBUG::2014-12-06 21:20:41,110::stompReactor::163::yajsonrpc.StompServer::(send) Sending response

Douglas Schilling Landgraf wrote on 06.12.2014 03:02:33:

> From: Douglas Schilling Landgraf
> To: users@ovirt.org,
> Cc: drova...@kaluga-gov.ru, Dan Kenigsberg
> Date: 05.12.2014 23:58
> Subject: Re:
[ovirt-users] Don't start vm > > On 12/05/2014 02:55 PM, Roman Nikolayevich Drovalev wrote: > > Hi, > > Please Help > > > > I normal stop my virtual mashine. But not start ! > > > > in the logs > > > > 2014-12-05 09:38:06,437 ERROR > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > (DefaultQuartzScheduler_Worker-87) Correlation ID: null, Call Stack: > > null, Custom Event ID: -1, Message: VM Cent is down with error. Exit > > message: ('Failed to get size for volume %s', > > u'fb8466c9-0867-4e73-8362-2c95eea89a83'). > > 2014-12-05 09:38:06,439 INFO > > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > > (DefaultQuartzScheduler_Worker-87) Running on vds during rerun failed > > vm: null > > 2014-12-05 09:38:06,447 INFO > > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > > (DefaultQuartzScheduler_Worker-87) VM Cent > > (d1ccb04d-bda8-42a2-bab6-7def2f8b2a00) is running in db and not running > > in VDS x3550m2down > > 2014-12-05 09:38:06,475 ERROR > > [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] > > (DefaultQuartzScheduler_Worker-87) Rerun vm > > d1ccb04d-bda8-42a2-bab6-7def2f8b2a00. 
Called from vds x3550m2down > > 2014-12-05 09:38:06,482 WARN > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID: > > 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID: > > -1, Message: Failed to run VM Cent on Host x3550m2down > > 2014-12-05 09:38:06,486 INFO [org.ovirt.engine.core.bll.RunVmCommand] > > (org.ovirt.thread.pool-8-thread-16) Lock Acquired to object EngineLock > > [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM > > , sharedLocks= ] > > 2014-12-05 09:38:06,504 INFO > > [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] > > (org.ovirt.thread.pool-8-thread-16) START, > > IsVmDuringInitiatingVDSCommand( vmId = > > d1ccb04d-bda8-42a2-bab6-7def2f8b2a00), log id: 2e257f81 > > 2014-12-05 09:38:06,505 INFO > > [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] > > (org.ovirt.thread.pool-8-thread-16) FINISH, > > IsVmDuringInitiatingVDSCommand, return: false, log id: 2e257f81 > > 2014-12-05 09:38:06,509 WARN [org.ovirt.engine.core.bll.RunVmCommand] > > (org.ovirt.thread.pool-8-thread-16) CanDoAction of action RunVm failed. > > > Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__
Re: [ovirt-users] Don't start vm
On 12/05/2014 02:55 PM, Roman Nikolayevich Drovalev wrote: Hi, Please Help I normal stop my virtual mashine. But not start ! in the logs 2014-12-05 09:38:06,437 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-87) Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM Cent is down with error. Exit message: ('Failed to get size for volume %s', u'fb8466c9-0867-4e73-8362-2c95eea89a83'). 2014-12-05 09:38:06,439 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) Running on vds during rerun failed vm: null 2014-12-05 09:38:06,447 INFO [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) VM Cent (d1ccb04d-bda8-42a2-bab6-7def2f8b2a00) is running in db and not running in VDS x3550m2down 2014-12-05 09:38:06,475 ERROR [org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo] (DefaultQuartzScheduler_Worker-87) Rerun vm d1ccb04d-bda8-42a2-bab6-7def2f8b2a00. 
Called from vds x3550m2down 2014-12-05 09:38:06,482 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID: 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM Cent on Host x3550m2down 2014-12-05 09:38:06,486 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) Lock Acquired to object EngineLock [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM , sharedLocks= ] 2014-12-05 09:38:06,504 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-16) START, IsVmDuringInitiatingVDSCommand( vmId = d1ccb04d-bda8-42a2-bab6-7def2f8b2a00), log id: 2e257f81 2014-12-05 09:38:06,505 INFO [org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand] (org.ovirt.thread.pool-8-thread-16) FINISH, IsVmDuringInitiatingVDSCommand, return: false, log id: 2e257f81 2014-12-05 09:38:06,509 WARN [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) CanDoAction of action RunVm failed. Reasons:VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,VAR__ACTION__RUN,VAR__TYPE__VM,SCHEDULING_NO_HOSTS 2014-12-05 09:38:06,510 INFO [org.ovirt.engine.core.bll.RunVmCommand] (org.ovirt.thread.pool-8-thread-16) Lock freed to object EngineLock [exclusiveLocks= key: d1ccb04d-bda8-42a2-bab6-7def2f8b2a00 value: VM , sharedLocks= ] 2014-12-05 09:38:06,539 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-8-thread-16) Correlation ID: 2f3d1469, Job ID: 86d62fc3-f2d3-48f1-a5b3-d2abd0f84d6c, Call Stack: null, Custom Event ID: -1, Message: Failed to run VM Cent (User: admin). 2014-12-05 09:38:06,548 INFO [org.ovirt.engine.core.bll.ProcessDownVmCommand] (org.ovirt.thread.pool-8-thread-27) [58fe3e35] Running command: ProcessDownVmCommand internal: true. What me do? 
Hi Roman,

Could you please share the /var/log/vdsm/vdsm.log from the hypervisor too?

--
Cheers
Douglas
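[Editorial note] The full log Douglas asks for is usually compressed before attaching. A minimal sketch - the `.xz` filename later in the thread (vdsm.log.62.xz) suggests xz was in use; the path is the standard vdsm log location and the block is guarded so it is a no-op off-host:

```shell
# Compress the vdsm log for attaching to the list or a bug report.
log=/var/log/vdsm/vdsm.log

if [ -r "$log" ] && command -v xz >/dev/null 2>&1; then
    xz -k "$log"   # -k keeps the original; writes vdsm.log.xz alongside
fi
echo "would attach: $log.xz"
```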