[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-13 Thread Strahil Nikolov
It seems to be working properly, but the OVF got updated recently and 
powering up the hosted-engine is not working :)
[root@ovirt2 ~]# sudo -u vdsm tar -tvf  
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81
-rw-r--r-- 0/0 138 2019-03-12 08:06 info.json
-rw-r--r-- 0/0   21164 2019-03-12 08:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-12 08:06 metadata.json

[root@ovirt2 ~]# sudo -u vdsm tar -tvf 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
-rw-r--r-- 0/0 138 2019-03-13 11:06 info.json
-rw-r--r-- 0/0   21164 2019-03-13 11:06 8474ae07-f172-4a20-b516-375c73903df7.ovf
-rw-r--r-- 0/0  72 2019-03-13 11:06 metadata.json
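For reference, a check like the tar listings above can be scripted; a minimal Python sketch (the helper names are mine, and the path would be an OVF_STORE volume as shown above):

```python
import tarfile

def list_ovf_store(path):
    """List the members of an OVF_STORE tar volume.

    A healthy OVF_STORE (as in the listings above) contains info.json,
    metadata.json and one <vm-uuid>.ovf file per stored VM config.
    """
    with tarfile.open(path, mode="r") as tar:
        return [member.name for member in tar.getmembers()]

def has_vm_config(path):
    # The VM configuration itself is the *.ovf member; if it is missing
    # or unreadable, powering up the hosted-engine from it cannot work.
    return any(name.endswith(".ovf") for name in list_ovf_store(path))
```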

Best Regards,
Strahil Nikolov

On Wednesday, 13 March 2019, 11:08:57 GMT+2, Simone Tiraboschi wrote:

On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov wrote:

> [quoted message trimmed; the full text appears later in this thread]

[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-13 Thread Simone Tiraboschi
On Wed, Mar 13, 2019 at 9:57 AM Strahil Nikolov wrote:

> [quoted message trimmed; the full text appears in the next message in this thread]

[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-13 Thread Strahil Nikolov
Hi Simone, Nir,

>Adding also Nir on this, the whole sequence is tracked here:
>I'd suggest to check ovirt-imageio and vdsm logs on ovirt2.localdomain about 
>the same time.
I have tested again (after first wiping the current transfers) and the 
same thing happens (phase 10).
engine=# \x
Expanded display is on.
engine=# select * from image_transfers;
-[ RECORD 1 ]-------------+-------------------------------------
command_id                | 11b2c162-29e0-46ef-b0a4-f41ebe3c2910
command_type              | 1024
phase                     | 10
last_updated              | 2019-03-13 09:38:30.365+02
message                   |
vds_id                    |
disk_id                   | 94ade632-6ecc-4901-8cec-8e39f3d69cb0
imaged_ticket_id          |
proxy_uri                 |
signed_ticket             |
bytes_sent                | 0
bytes_total               | 134217728
type                      | 1
active                    | f
daemon_uri                |
client_inactivity_timeout | 60
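For context, the numeric phase comes from the engine's ImageTransferPhase enum. If I read the oVirt 4.2/4.3-era sources correctly, the mapping is roughly the following (a sketch, not an authoritative definition), which would make phase 10 the same "Finished Failure" state that also shows up in the engine log later in this thread:

```python
# Rough mapping of image_transfers.phase values to names, based on my
# reading of the oVirt engine's ImageTransferPhase enum; treat it as an
# assumption, not as the authoritative definition.
IMAGE_TRANSFER_PHASES = {
    1: "INITIALIZING",
    2: "TRANSFERRING",
    3: "RESUMING",
    4: "PAUSED_SYSTEM",
    5: "PAUSED_USER",
    6: "CANCELLED",
    7: "FINALIZING_SUCCESS",
    8: "FINALIZING_FAILURE",
    9: "FINISHED_SUCCESS",
    10: "FINISHED_FAILURE",
}

def phase_name(phase):
    return IMAGE_TRANSFER_PHASES.get(phase, "UNKNOWN(%s)" % phase)
```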

engine=# delete from image_transfers where 
disk_id='94ade632-6ecc-4901-8cec-8e39f3d69cb0';

This is the VDSM log from the last test:

2019-03-13 09:38:23,229+0200 INFO  (jsonrpc/4) [vdsm.api] START 
prepareImage(sdUUID=u'808423f9-8a5c-40cd-bc9f-2568c85b8c74', 
spUUID=u'b803f7e4-2543-11e9-ba9a-00163e6272c8', 
imgUUID=u'94ade632-6ecc-4901-8cec-8e39f3d69cb0', 
leafUUID=u'9460fc4b-54f3-48e3-b7b6-da962321ecf4', allowIllegal=True) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:48)
2019-03-13 09:38:23,253+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Fixing 
permissions on 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0/9460fc4b-54f3-48e3-b7b6-da962321ecf4
 (fileSD:623)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
domain run directory 
u'/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74' (fileSD:577)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
directory: /var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74 mode: 
None (fileUtils:199)
2019-03-13 09:38:23,254+0200 INFO  (jsonrpc/4) [storage.StorageDomain] Creating 
symlink from 
/rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 to 
/var/run/vdsm/storage/808423f9-8a5c-40cd-bc9f-2568c85b8c74/94ade632-6ecc-4901-8cec-8e39f3d69cb0
 (fileSD:580)
2019-03-13 09:38:23,260+0200 INFO  (jsonrpc/4) [vdsm.api] FINISH prepareImage 
error=Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) 
from=:::192.168.1.2,42644, flow_id=d48d9272-2e65-438d-a7b2-46979309833b, 
task_id=bb534320-451c-45c0-b7a6-0cce017ec7cb (api:52)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in 
_run
    return fn(*args, **kargs)
  File "<string>", line 2, in prepareImage
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 3212, in 
prepareImage
    leafInfo = dom.produceVolume(imgUUID, leafUUID).getVmVolumeInfo()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 822, in 
produceVolume
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/glusterVolume.py", line 
45, in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 801, in 
__init__
    self._manifest = self.manifestClass(repoPath, sdUUID, imgUUID, volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 71, 
in __init__
    volUUID)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 86, in 
__init__
    self.validate()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 112, in 
validate
    self.validateVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 136, 
in validateVolumePath
    self.validateMetaVolumePath()
  File "/usr/lib/python2.7/site-packages/vdsm/storage/fileVolume.py", line 118, 
in validateMetaVolumePath
    raise se.VolumeDoesNotExist(self.volUUID)
VolumeDoesNotExist: Volume does not exist: 
(u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)
2019-03-13 09:38:23,261+0200 INFO  (jsonrpc/4) [storage.TaskManager.Task] 
(Task='bb534320-451c-45c0-b7a6-0cce017ec7cb') aborting: Task is aborted: 
"Volume does not exist: (u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',)" - code 201 
(task:1181)
2019-03-13 09:38:23,261+0200 ERROR (jsonrpc/4) [storage.Dispatcher] FINISH 
prepareImage error=Volume does not exist: 
(u'9460fc4b-54f3-48e3-b7b6-da962321ecf4',) (dispatcher:83)

Yet, the volume is there and is accessible:
[root@ovirt1 94ade632-6
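That apparent contradiction fits the traceback above: validateMetaVolumePath checks for the volume's metadata sidecar, so vdsm can raise VolumeDoesNotExist even when the data file itself is present. A simplified sketch of that check, from my reading of vdsm's fileVolume.py (not its literal code):

```python
import os

def meta_volume_path(image_dir, vol_uuid):
    # File-based vdsm volumes keep their metadata in a "<volUUID>.meta"
    # sidecar next to the data file.
    return os.path.join(image_dir, vol_uuid + ".meta")

def volume_looks_complete(image_dir, vol_uuid):
    """The data file alone is not enough: if the .meta sidecar is gone,
    vdsm's validateMetaVolumePath() raises VolumeDoesNotExist even
    though `ls` shows the volume."""
    data_path = os.path.join(image_dir, vol_uuid)
    return os.path.exists(data_path) and os.path.exists(
        meta_volume_path(image_dir, vol_uuid))
```

So the thing to verify on the storage is whether the 9460fc4b-... volume still has its .meta (and .lease) companions next to it.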

[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-12 Thread Simone Tiraboschi
On Tue, Mar 12, 2019 at 3:53 PM Strahil Nikolov wrote:

> Dear Simone,
>
> first I want to thank you for your help.
>
> I'm attaching the engine log from that time ,as I the output provided will
> not be enough to detect any issues.
>

Adding also Nir on this, the whole sequence is tracked here:
2019-03-11 18:15:07,202+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Lock Acquired to
object 'EngineLock:{exclusiveLocks='',
sharedLocks='[441abdc8-6cb1-49a4-903f-a1ec0ed88429=DISK]'}'
2019-03-11 18:15:07,550+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Running command:
TransferDiskImageCommand internal: false. Entities affected :  ID:
441abdc8-6cb1-49a4-903f-a1ec0ed88429 Type: DiskAction group
EDIT_DISK_PROPERTIES with role type USER
2019-03-11 18:15:07,550+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Creating
ImageTransfer entity for command '485b99d4-75d9-45bb-acf5-2ac050fbf115'
2019-03-11 18:15:08,295+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Successfully added
Download disk 'OVF_STORE' (disk id: '441abdc8-6cb1-49a4-903f-a1ec0ed88429',
image id: 'c3309fc0-8707-4de1-903d-8d4bbb024f81') for image transfer
command '485b99d4-75d9-45bb-acf5-2ac050fbf115'
2019-03-11 18:15:08,826+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (default
task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] START,
PrepareImageVDSCommand(HostName = ovirt2.localdomain,
PrepareImageVDSCommandParameters:{hostId='a06f6597-323b-4f60-83a9-699cf5f768eb'}),
log id: 6e769a9c
2019-03-11 18:15:09,048+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.PrepareImageVDSCommand] (default
task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] FINISH,
PrepareImageVDSCommand, return: PrepareImageReturn:{status='Status [code=0,
message=Done]'}, log id: 6e769a9c
2019-03-11 18:15:09,049+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] START,
AddImageTicketVDSCommand(HostName = ovirt2.localdomain,
AddImageTicketVDSCommandParameters:{hostId='a06f6597-323b-4f60-83a9-699cf5f768eb',
ticketId='455b196c-1b59-44cd-9168-549e6c6732ab', timeout='300',
operations='[read]', size='134217728',
url='file:///rhev/data-center/mnt/glusterSD/ovirt1.localdomain:_engine/808423f9-8a5c-40cd-bc9f-2568c85b8c74/images/441abdc8-6cb1-49a4-903f-a1ec0ed88429/c3309fc0-8707-4de1-903d-8d4bbb024f81',
filename='OVF_STORE.raw', sparse='false',
transferId='485b99d4-75d9-45bb-acf5-2ac050fbf115'}), log id: 71e97bc0
2019-03-11 18:15:09,057+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.AddImageTicketVDSCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] FINISH,
AddImageTicketVDSCommand, return: StatusOnlyReturn [status=Status [code=0,
message=Done]], log id: 71e97bc0
2019-03-11 18:15:09,057+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Started transfer
session with ticket id 455b196c-1b59-44cd-9168-549e6c6732ab, timeout 300
seconds
2019-03-11 18:15:09,057+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Adding image ticket
to ovirt-imageio-proxy, id 455b196c-1b59-44cd-9168-549e6c6732ab
2019-03-11 18:15:09,367+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Updating image
transfer 485b99d4-75d9-45bb-acf5-2ac050fbf115 (image
441abdc8-6cb1-49a4-903f-a1ec0ed88429) phase to Transferring
2019-03-11 18:15:09,506+02 INFO
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] Returning from
proceedCommandExecution after starting transfer session for image transfer
command '485b99d4-75d9-45bb-acf5-2ac050fbf115'
2019-03-11 18:15:09,950+02 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(default task-9) [434241fa-037c-44d2-8f83-0a69baa027e4] EVENT_ID:
TRANSFER_IMAGE_INITIATED(1,031), Image Download with disk OVF_STORE was
initiated by admin@internal-authz.
2019-03-11 18:15:10,243+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand]
(EE-ManagedThreadFactory-engineScheduled-Thread-70)
[434241fa-037c-44d2-8f83-0a69baa027e4] START,
GetImageTicketVDSCommand(HostName = ovirt2.localdomain,
GetImageTicketVDSCommandParameters:{hostId='a06f6597-323b-4f60-83a9-699cf5f768eb',
ticketId='455b196c-1b59-44cd-9168-549e6c6732ab', timeout='null'}), log id:
13d2369a
2019-03-11 18:15:10,249+02 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetImageTicketVDSCommand]
(EE-ManagedThre

[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-12 Thread Strahil Nikolov

>Can you please check engine.log and vdsm.log to try understanding why the 
>upload of the OVF_STORE content is failing on your environment?
>I fear it could cause other issues in the future.

The failure for 94ade632-6ecc-4901-8cec-8e39f3d69cb0 is because I didn't 
click the save button (Google Chrome OS has issues with oVirt, and this 
will be avoided):
2019-03-10 05:11:33,066+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-77) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Transfer failed. Download disk 'OVF_STORE' (disk id: 
'94ade632-6ecc-4901-8cec-8e39f3d69cb0', image id: 
'9460fc4b-54f3-48e3-b7b6-da962321ecf4')
2019-03-10 05:11:34,215+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Updating image transfer 
2773daf9-5920-404d-8a5b-6f04e431a9aa (image 
94ade632-6ecc-4901-8cec-8e39f3d69cb0) phase to Finished Failure
2019-03-10 05:11:34,329+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Failed to transfer disk 
'----' (command id 
'2773daf9-5920-404d-8a5b-6f04e431a9aa')
2019-03-10 05:11:34,331+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Ending command 
'org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand' 
successfully.
2019-03-10 05:11:34,331+02 INFO  
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] Lock freed to object 
'EngineLock:{exclusiveLocks='', 
sharedLocks='[94ade632-6ecc-4901-8cec-8e39f3d69cb0=DISK]'}'
2019-03-10 05:11:34,705+02 INFO  
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] 
(EE-ManagedThreadFactory-engineScheduled-Thread-69) 
[b25001bb-dc7f-4ff0-9223-e63b6f38c5e2] EVENT_ID: 
TRANSFER_IMAGE_CANCELLED(1,033), Image Download with disk OVF_STORE was 
cancelled.



And the failure for 441abdc8-6cb1-49a4-903f-a1ec0ed88429 happened on Fedora28 
and the logs report:
2019-03-11 18:16:33,251+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-7) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Finalizing failed transfer. Download 
disk 'OVF_STORE' (disk id: '441abdc8-6cb1-49a4-903f-a1ec0ed88429', image id: 
'c3309fc0-8707-4de1-903d-8d4bbb024f81')
2019-03-11 18:16:43,704+02 ERROR 
[org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] 
(EE-ManagedThreadFactory-engineScheduled-Thread-74) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Transfer failed. Download disk 
'OVF_STORE' (disk id: '441abdc8-6cb1-49a4-903f-a1ec0ed88429', image id: 
'c3309fc0-8707-4de1-903d-8d4bbb024f81')
2019-03-11 18:16:44,929+02 WARN  
[org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
(EE-ManagedThreadFactory-engineScheduled-Thread-72) 
[434241fa-037c-44d2-8f83-0a69baa027e4] Trying to release a shared lock for key: 
'441abdc8-6cb1-49a4-903f-a1ec0ed88429DISK' , but lock does not exist


For the Cancelled event - I think it shouldn't go into this "Failed" state, 
as the user has cancelled the action. For the second one, I have no 
explanation.
Now comes the question: what should be done in order to fix that?

Best Regards,
Strahil Nikolov

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BVK6Z7VGB4T7MJM7WGJYT45ISHY3ZZRK/


[ovirt-users] Re: Ovirt 4.3.1 cannot set host to maintenance

2019-03-12 Thread Simone Tiraboschi
On Tue, Mar 12, 2019 at 10:09 AM Strahil Nikolov wrote:

> Hi Community,
>
> I have tried to download my OVF_STORE images that were damaged on the
> shared storage, but it failed.
> As a result, I cannot set any host into maintenance via the UI. I have
> found bug 1586126 ("After upgrade to RHV 4.2.3, hosts can no longer be
> set into maintenance mode"), but my case is different, as mine has
> failed and should not block that operation.
>
>

Can you please check engine.log and vdsm.log to try understanding why the
upload of the OVF_STORE content is failing on your environment?
I fear it could cause other issues in the future.
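One way to line up the two logs is to pull every line for a given flow id from both sides; a small sketch (default paths are the usual locations, adjust for your hosts):

```python
# Collect every line mentioning a given flow id from the engine and vdsm
# logs, so both sides of one transfer can be read together.
def grep_flow(flow_id, paths=("/var/log/ovirt-engine/engine.log",
                              "/var/log/vdsm/vdsm.log")):
    hits = []
    for path in paths:
        try:
            with open(path, errors="replace") as f:
                hits.extend((path, line.rstrip())
                            for line in f if flow_id in line)
        except OSError:
            pass  # that log is not present on this host
    return hits
```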


>
> Here are 2 screenshots (Imgur links):
>
> How can I recover from that situation?
>
> Best Regards,
> Strahil Nikolov
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/L2UZZQKNECS23Z64ZWQZOC3BKFV4A3QU/