[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
Hi Nir, Yedidyah,
for what it's worth I ran through the steps outlined here:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/troubleshooting-problems-related-to-selinux_using-selinux
and eventually got to running
`setenforce 0` and the issue still persists.
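
Since `setenforce 0` did not change anything and ausearch shows no denials,
the remaining suspects are plain POSIX/NFS permissions rather than SELinux.
A rough set of read-only checks, assuming the same mount path as in the logs
below (the wildcard just stands in for the current domain UUID):

# on the oVirt host: try to read the ids file the way sanlock (uid 179) would
sudo -u sanlock head -c 512 \
    /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/*/dom_md/ids

# double-check the SELinux state and any recent denials
getenforce
ausearch -m avc -ts recent

# on the NFS server (stumpy): check root_squash/all_squash and anon id mapping
exportfs -v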


On Tue, Jan 19, 2021 at 6:52 PM Matt Snow  wrote:

> [root@brick ~]# ps -efz | grep sanlock
>
> error: unsupported SysV option
>
>
> Usage:
>
>  ps [options]
>
>
>  Try 'ps --help '
>
>   or 'ps --help '
>
>  for additional help text.
>
>
> For more details see ps(1).
>
> [root@brick ~]# ps -ef | grep sanlock
>
> sanlock     1308     1  0 10:21 ?        00:00:01 /usr/sbin/sanlock daemon
>
> root        1309  1308  0 10:21 ?        00:00:00 /usr/sbin/sanlock daemon
>
> root       68086 67674  0 13:38 pts/4    00:00:00 tail -f sanlock.log
>
> root       73724 68214  0 18:49 pts/5    00:00:00 grep --color=auto sanlock
>
>
> [root@brick ~]# ausearch -m avc
>
> 
>
>
> [root@brick ~]# ls -lhZ
> /rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md
>
> total 278K
>
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 ids
>
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox
>
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 leases
>
> -rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38 metadata
>
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox
>
> -rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases
>
> [root@brick ~]#
>
> On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:
>
>> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>> >
>> >
>> > [root@brick log]# cat sanlock.log
>> >
>> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
>> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
>> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace
>> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
>> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission
>> to open
>> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>>
>> Smells like an SELinux issue.
>>
>> What do you see in "ausearch -m avc"?
>>
>>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZCEGXZL6OVYV5I5V7QQ4UOL5FMA5WPBJ/


[ovirt-users] Re: Problems Installing host on brand new 4.4 cluster

2021-01-19 Thread David Johnson
Problem solved: there was a misconfiguration in the DNS server.

Regards,
David Johnson
Director of Development, Maxis Technology
844.696.2947 ext 702 (o)  |  479.531.3590 (c)
djohn...@maxistechnology.com


[image: Maxis Technology]
www.maxistechnology.com


*stay connected *


On Tue, Jan 19, 2021 at 4:45 PM David Johnson 
wrote:

> Hi all,
>
> I am standing up a brand new cluster on new hardware.
>
> The ovirt controller is installed and appears to be running fine.
>
> When I attempt to add the new host, I get the flag that says "Non
> Operational", and the exclamation mark with the message "Host has no
> default route".
>
> I have confirmed that I have bidirectional ssh connections between the
> controller and the new host.
>
> Selinux is disabled.
>
> Firewall is disabled.
>
> I can get to the host console via the link in the GUI Compute|Hosts page.
>
> This looks like a common symptom of a host of problems, but there is
> nothing to readily indicate what the actual problem is.
>
> In the ovirt-engine log, I found this entry: "Host 'ovirt-host-03' is set
> to Non-Operational, it is missing the following networks: 'ovirtmgmt'"
>
> This seems self-explanatory, but I see no way to add the missing network
> to the host.
>
> Thank you in advance for your help
>
> Regards,
> David Johnson
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PUJNP5WFOZASXK6DCW3SGUSWHTUM2YQO/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
[root@brick ~]# ps -efz | grep sanlock

error: unsupported SysV option


Usage:

 ps [options]


 Try 'ps --help '

  or 'ps --help '

 for additional help text.


For more details see ps(1).

[root@brick ~]# ps -ef | grep sanlock

sanlock     1308     1  0 10:21 ?        00:00:01 /usr/sbin/sanlock daemon

root        1309  1308  0 10:21 ?        00:00:00 /usr/sbin/sanlock daemon

root       68086 67674  0 13:38 pts/4    00:00:00 tail -f sanlock.log

root       73724 68214  0 18:49 pts/5    00:00:00 grep --color=auto sanlock


[root@brick ~]# ausearch -m avc




[root@brick ~]# ls -lhZ
/rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/8fd5420f-61fd-41af-8575-f61853a18d91/dom_md

total 278K

-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 ids

-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 inbox

-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0    0 Jan 19 13:38 leases

-rw-rw-r--. 1 vdsm kvm system_u:object_r:nfs_t:s0  342 Jan 19 13:38 metadata

-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0  16M Jan 19 13:38 outbox

-rw-rw----. 1 vdsm kvm system_u:object_r:nfs_t:s0 1.3M Jan 19 13:38 xleases

[root@brick ~]#

On Tue, Jan 19, 2021 at 4:13 PM Nir Soffer  wrote:

> On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
> >
> >
> > [root@brick log]# cat sanlock.log
> >
> > 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
> > 2021-01-15 19:17:31 7497 [36293]: s1 lockspace
> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
> > 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission
> to open
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
>
> Smells like an SELinux issue.
>
> What do you see in "ausearch -m avc"?
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V6KNRLT5PO4D232P6FWGSCCIVD4G6ENZ/


[ovirt-users] Re: Problems Installing host on brand new 4.4 cluster

2021-01-19 Thread Edward Berger
It seems to be failing on adding the ovirtmgmt bridge to the interface
defined on the host as part of the host-addition installation process. I had
this issue during a hosted-engine install when ovirtmgmt was on a tagged port
which was already configured with a name not supported by the oVirt
installation scripts. After I reconfigured the network config with the
correct style of name, I was able to install the hosted engine on CentOS 8.
Check whether ovirtmgmt exists on the host you're trying to add to your oVirt
installation, and note which interface should have that network bridge added.

In my case, the network port needed to be configured in the form eno1.##,
where ## was the VLAN number (replace eno1 with the correct ethernet
interface). Similar issues could happen if you're using a bond named
something other than what the installation scripts expect (bond.#).
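
For illustration only, a minimal NetworkManager sketch of that naming scheme,
assuming the physical interface is eno1 and the VLAN id is 100 (both are
placeholders, substitute your own):

# create a VLAN device literally named eno1.100 on top of eno1
nmcli connection add type vlan con-name eno1.100 ifname eno1.100 dev eno1 id 100
nmcli connection up eno1.100

# confirm the device name the installer will look for
ip -br link show eno1.100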

It's a little confusing, but there are actually two different places to
'edit' a host's config from the engine. The obvious one is 'edit host', but
the feature to set networks per device or configure host devices for
passthrough is under another (almost hidden) area of the engine web UI.

In the engine UI, try to set the non-operational host into 'maintenance',
then click on the hostname in the Compute > Hosts page, then the networks
tab, then 'setup host networks', and try to add the network by dragging it
from the right-hand side of the page to the correct interface on the left
and saving. If you're defining any other logical networks, you'll need to
get to that page per host to set them up.



On Tue, Jan 19, 2021 at 5:46 PM David Johnson 
wrote:

> Hi all,
>
> I am standing up a brand new cluster on new hardware.
>
> The ovirt controller is installed and appears to be running fine.
>
> When I attempt to add the new host, I get the flag that says "Non
> Operational", and the exclamation mark with the message "Host has no
> default route".
>
> I have confirmed that I have bidirectional ssh connections between the
> controller and the new host.
>
> Selinux is disabled.
>
> Firewall is disabled.
>
> I can get to the host console via the link in the GUI Compute|Hosts page.
>
> This looks like a common symptom of a host of problems, but there is
> nothing to readily indicate what the actual problem is.
>
> In the ovirt-engine log, I found this entry: "Host 'ovirt-host-03' is set
> to Non-Operational, it is missing the following networks: 'ovirtmgmt'"
>
> This seems self-explanatory, but I see no way to add the missing network
> to the host.
>
> Thank you in advance for your help
>
> Regards,
> David Johnson
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUOZ4YSV5ISVEYHI22Y2SPA5PBOH6LXL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/W2PNZB5FOAYJGCRB3Q5J3Y6F4OKZEVJR/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 6:13 PM Matt Snow  wrote:
>
>
> [root@brick log]# cat sanlock.log
>
> 2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host 
> 3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
> 2021-01-15 19:17:31 7497 [36293]: s1 lockspace 
> 54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
> 2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to 
> open 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids

Smells like an SELinux issue.

What do you see in "ausearch -m avc"?
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KVYDPG2SIAU7AAHFHKC63MDN2NAINWTI/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Nir Soffer
On Tue, Jan 19, 2021 at 3:43 PM Yedidyah Bar David  wrote:
...
> > 2021-01-18 08:43:25,524-0700 INFO  (jsonrpc/0) [storage.SANLock] 
> > Initializing sanlock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > path=/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/4b3fb9a9-6975-4b80-a2c1-af4e30865088/dom_md/ids
> >  alignment=1048576 block_size=512 io_timeout=10 (clusterlock:286)
> > 2021-01-18 08:43:25,533-0700 ERROR (jsonrpc/0) [storage.SANLock] Cannot 
> > initialize lock for domain 4b3fb9a9-6975-4b80-a2c1-af4e30865088 
> > (clusterlock:305)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
> > 2021-01-18 08:43:25,534-0700 INFO  (jsonrpc/0) [vdsm.api] FINISH 
> > createStorageDomain error=Could not initialize cluster lock: () 
> > from=:::192.168.222.53,39612, flow_id=5618fb28, 
> > task_id=49a1bc04-91d0-4d8f-b847-b6461d980495 (api:52)
> > 2021-01-18 08:43:25,534-0700 ERROR (jsonrpc/0) [storage.TaskManager.Task] 
> > (Task='49a1bc04-91d0-4d8f-b847-b6461d980495') Unexpected error (task:880)
> > Traceback (most recent call last):
> >   File "/usr/lib/python3.6/site-packages/vdsm/storage/clusterlock.py", line 
> > 295, in initLock
> > sector=self._block_size)
> > sanlock.SanlockException: (19, 'Sanlock lockspace write failure', 'No such 
> > device')
>
> I think this ^^ is the issue. Can you please check /var/log/sanlock.log?

This is a symptom of inaccessible storage. Sanlock failed to write to
the "ids" file.

We need to understand why sanlock failed. If the partially created domain is
still available, this may explain the issue:

ls -lhZ /rhev/data-center/mnt/server:_path/storage-domain-uuid/dom_md

ps -efz | grep sanlock

ausearch -m avc
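
A few more read-only checks can show whether this is an ID-mapping problem on
the NFS mount rather than SELinux (paths assume the mount shown in the logs
above):

# how is the domain mounted, and with which NFS version/options?
mount | grep _tanker_ovirt_host__storage

# numeric uid/gid as seen through the mount, including the parent directories
ls -ldn /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage \
        /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/*/dom_md

# sanlock runs as uid/gid 179 and needs kvm (36) as a supplementary group
id sanlock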

Nir
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QT7I2TWFCTTQDI37W3RIW6AQQZAMIHWB/


[ovirt-users] Problems Installing host on brand new 4.4 cluster

2021-01-19 Thread David Johnson
Hi all,

I am standing up a brand new cluster on new hardware.

The ovirt controller is installed and appears to be running fine.

When I attempt to add the new host, I get the flag that says "Non
Operational", and the exclamation mark with the message "Host has no
default route".

I have confirmed that I have bidirectional ssh connections between the
controller and the new host.

Selinux is disabled.

Firewall is disabled.

I can get to the host console via the link in the GUI Compute|Hosts page.

This looks like a common symptom of a host of problems, but there is
nothing to readily indicate what the actual problem is.

In the ovirt-engine log, I found this entry: "Host 'ovirt-host-03' is set
to Non-Operational, it is missing the following networks: 'ovirtmgmt'"

This seems self-explanatory, but I see no way to add the missing network to
the host.

Thank you in advance for your help

Regards,
David Johnson
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CUOZ4YSV5ISVEYHI22Y2SPA5PBOH6LXL/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>The Ceph iSCSI gateway should be supported since 4.1, so I think I can use it 
>for configuring the master domain and still leverage the same overall storage 
>environment provided by Ceph, correct?

yes, it shouldn't be a problem
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/LN6JWSEXX7TTQMWWPUHPFRPTPQQMPUP3/


[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Matt Snow
[root@brick log]# cat sanlock.log

2021-01-15 18:18:48 3974 [36280]: sanlock daemon started 3.8.2 host
3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
2021-01-15 19:17:31 7497 [36293]: s1 lockspace
54532dd4-3e5b-4885-b88e-599c81efb146:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids:0
2021-01-15 19:17:31 7497 [50873]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
2021-01-15 19:17:31 7497 [50873]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-15 19:17:31 7497 [50873]: s1 open_disk
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/54532dd4-3e5b-4885-b88e-599c81efb146/dom_md/ids
error -13
2021-01-15 19:17:32 7498 [36293]: s1 add_lockspace fail result -19
2021-01-18 07:23:58 18 [1318]: sanlock daemon started 3.8.2 host
3b903780-4f79-1018-816e-aeb2724778a7 (brick.co.slakin.net)
2021-01-18 08:43:25 4786 [1359]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/4b3fb9a9-6975-4b80-a2c1-af4e30865088/dom_md/ids
2021-01-18 08:43:25 4786 [1359]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:13:17 6578 [1358]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/d3aec1fd-57cb-4d48-86b9-0a89ae3741a7/dom_md/ids
2021-01-18 09:13:17 6578 [1358]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:19:45 6966 [1359]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/fa5434cf-3e05-45d5-b32e-4948903ee2b4/dom_md/ids
2021-01-18 09:19:45 6966 [1359]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:21:16 7057 [1358]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
2021-01-18 09:21:16 7057 [1358]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:49:42 8763 [1359]: s1 lockspace
b0f7b773-7e37-4b6b-a467-64230d5f7391:250:/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids:0
2021-01-18 09:49:42 8763 [54250]: open error -13 EACCES: no permission to
open
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
2021-01-18 09:49:42 8763 [54250]: check that daemon user sanlock 179 group
sanlock 179 has access to disk or file.
2021-01-18 09:49:42 8763 [54250]: s1 open_disk
/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage/b0f7b773-7e37-4b6b-a467-64230d5f7391/dom_md/ids
error -13
2021-01-18 09:49:43 8764 [1359]: s1 add_lockspace fail result -19

[root@brick log]# su - sanlock -s /bin/bash

Last login: Tue Jan 19 09:05:25 MST 2021 on pts/2
nodectl must be run as root!
nodectl must be run as root!
[sanlock@brick ~]$ grep sanlock /etc/group
disk:x:6:sanlock
kvm:x:36:qemu,ovirtimg,sanlock
sanlock:x:179:vdsm
qemu:x:107:vdsm,ovirtimg,sanlock
[sanlock@brick ~]$ cd
/rhev/data-center/mnt/stumpy\:_tanker_ovirt_host__storage/ && touch
file.txt && ls -l file.txt
-rw-rw-rw-. 1 sanlock sanlock 0 Jan 19 09:07 file.txt
[sanlock@brick stumpy:_tanker_ovirt_host__storage]$ ls -ltra
total 2
drwxr-xr-x. 3 vdsm    kvm     48 Jan 18 09:48 ..
drwxrwxrwx. 2 vdsm    kvm      3 Jan 19 09:07 .
-rw-rw-rw-. 1 sanlock sanlock  0 Jan 19 09:07 file.txt
[sanlock@brick stumpy:_tanker_ovirt_host__storage]$

On Tue, Jan 19, 2021 at 6:44 AM Yedidyah Bar David  wrote:

> On Mon, Jan 18, 2021 at 6:01 PM Matt Snow  wrote:
> >
> > Hi Didi,
> > I did log clean up and am re-running ovirt-hosted-engine-cleanup &&
> ovirt-hosted-engine-setup to get you cleaner log files.
> >
> > searching for host_storage in vdsm.log...
> > **snip**
> > 2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH
> getStats return={'status': {'code': 0, 'message': 'Done'}, 'info':
> (suppressed)} from=:::192.168.222.53,39612 (api:54)
> > 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START
> getConnectedStoragePoolsList(options=None) from=internal,
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
> > 2021-01-18 08:43:19,963-0700 INFO  

[ovirt-users] Re: New setup - Failing to Activate storage domain on NFS shared storage

2021-01-19 Thread Yedidyah Bar David
On Mon, Jan 18, 2021 at 6:01 PM Matt Snow  wrote:
>
> Hi Didi,
> I did log clean up and am re-running ovirt-hosted-engine-cleanup && 
> ovirt-hosted-engine-setup to get you cleaner log files.
>
> searching for host_storage in vdsm.log...
> **snip**
> 2021-01-18 08:43:18,842-0700 INFO  (jsonrpc/3) [api.host] FINISH getStats 
> return={'status': {'code': 0, 'message': 'Done'}, 'info': (suppressed)} 
> from=:::192.168.222.53,39612 (api:54)
> 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] START 
> getConnectedStoragePoolsList(options=None) from=internal, 
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:48)
> 2021-01-18 08:43:19,963-0700 INFO  (vmrecovery) [vdsm.api] FINISH 
> getConnectedStoragePoolsList return={'poollist': []} from=internal, 
> task_id=fb80c883-2447-4ed2-b344-aa0c0fb65809 (api:54)
> 2021-01-18 08:43:19,964-0700 INFO  (vmrecovery) [vds] recovery: waiting for 
> storage pool to go up (clientIF:726)
> 2021-01-18 08:43:20,441-0700 INFO  (jsonrpc/4) [vdsm.api] START 
> connectStorageServer(domType=1, 
> spUUID='----', conList=[{'password': 
> '', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 
> 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': 
> '----', 'user': '', 'tpgt': '1'}], 
> options=None) from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=032afa50-381a-44af-a067-d25bcc224355 (api:48)
> 2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) 
> [storage.StorageServer.MountConnection] Creating directory 
> '/rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage' (storageServer:167)
> 2021-01-18 08:43:20,446-0700 INFO  (jsonrpc/4) [storage.fileUtils] Creating 
> directory: /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage mode: 
> None (fileUtils:201)
> 2021-01-18 08:43:20,447-0700 INFO  (jsonrpc/4) [storage.Mount] mounting 
> stumpy:/tanker/ovirt/host_storage at 
> /rhev/data-center/mnt/stumpy:_tanker_ovirt_host__storage (mount:207)
> 2021-01-18 08:43:21,271-0700 INFO  (jsonrpc/4) [IOProcessClient] (Global) 
> Starting client (__init__:340)
> 2021-01-18 08:43:21,313-0700 INFO  (ioprocess/51124) [IOProcess] (Global) 
> Starting ioprocess (__init__:465)
> 2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [storage.StorageDomainCache] 
> Invalidating storage domain cache (sdc:74)
> 2021-01-18 08:43:21,373-0700 INFO  (jsonrpc/4) [vdsm.api] FINISH 
> connectStorageServer return={'statuslist': [{'id': 
> '----', 'status': 0}]} 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=032afa50-381a-44af-a067-d25bcc224355 (api:54)
> 2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [vdsm.api] START 
> getStorageDomainsList(spUUID='----', 
> domainClass=1, storageType='', 
> remotePath='stumpy:/tanker/ovirt/host_storage', options=None) 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:48)
> 2021-01-18 08:43:21,497-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache (resize=True) (sdc:80)
> 2021-01-18 08:43:21,498-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices (iscsi:442)
> 2021-01-18 08:43:21,628-0700 INFO  (jsonrpc/5) [storage.ISCSI] Scanning iSCSI 
> devices: 0.13 seconds (utils:390)
> 2021-01-18 08:43:21,629-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices (hba:60)
> 2021-01-18 08:43:21,908-0700 INFO  (jsonrpc/5) [storage.HBA] Scanning FC 
> devices: 0.28 seconds (utils:390)
> 2021-01-18 08:43:21,969-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices (multipath:104)
> 2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.Multipath] Resizing 
> multipath devices: 0.01 seconds (utils:390)
> 2021-01-18 08:43:21,975-0700 INFO  (jsonrpc/5) [storage.StorageDomainCache] 
> Refreshing storage domain cache: 0.48 seconds (utils:390)
> 2021-01-18 08:43:22,167-0700 INFO  (tmap-0/0) [IOProcessClient] 
> (stumpy:_tanker_ovirt_host__storage) Starting client (__init__:340)
> 2021-01-18 08:43:22,204-0700 INFO  (ioprocess/51144) [IOProcess] 
> (stumpy:_tanker_ovirt_host__storage) Starting ioprocess (__init__:465)
> 2021-01-18 08:43:22,208-0700 INFO  (jsonrpc/5) [vdsm.api] FINISH 
> getStorageDomainsList return={'domlist': []} 
> from=:::192.168.222.53,39612, 
> flow_id=2227465c-5040-4199-b1f9-f5305b10b5e5, 
> task_id=e37eb000-13da-440f-9197-07495e53ce52 (api:54)
> 2021-01-18 08:43:22,999-0700 INFO  (jsonrpc/7) [vdsm.api] START 
> connectStorageServer(domType=1, 
> spUUID='----', conList=[{'password': 
> '', 'protocol_version': 'auto', 'port': '', 'iqn': '', 'connection': 
> 'stumpy:/tanker/ovirt/host_storage', 'ipv6_enabled': 'false', 'id': 
> 'bc87e1a4-004e-41b4-b569-9e9413e9c027', 'user': '', 'tpgt': '1'}], 
> options=None) 

[ovirt-users] Re: About Enroll Certificate

2021-01-19 Thread Dana Elfassy
Hi Tommy,
In order to execute 'Enroll Certificate', put the host into maintenance first.
Thanks,
Dana

On Tue, Jan 19, 2021 at 1:27 PM tommy  wrote:

> Hi,
>
>
>
> What is the function of the 'Enroll Certificate' option?
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NULKSZVRRX6I7EP2OBTBVTAAYCOVR27M/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FELUSYD7I2YE7RKBJ3YWUYKKUFD5QXMA/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 12:20 PM Benny Zlotnik  wrote:

> >Thanks for pointing out the requirement for Master domain. In theory,
> will I be able to satisfy the requirement with another iSCSI or >maybe Ceph
> iSCSI as master domain?
> It should work, as oVirt sees it as a regular domain; CephFS will
> probably work too.
>

The Ceph iSCSI gateway should be supported since 4.1, so I think I can use it
for configuring the master domain and still leverage the same overall
storage environment provided by Ceph, correct?

https://bugzilla.redhat.com/show_bug.cgi?id=1527061

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3ASTNEGXSV7I4NIOG5RVZKDWIPQCEPMU/


[ovirt-users] About Enroll Certificate

2021-01-19 Thread tommy
Hi,

 

What is the function of the 'Enroll Certificate' option?


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NULKSZVRRX6I7EP2OBTBVTAAYCOVR27M/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Thanks for pointing out the requirement for Master domain. In theory, will I 
>be able to satisfy the requirement with another iSCSI or >maybe Ceph iSCSI as 
>master domain?
It should work, as oVirt sees it as a regular domain; CephFS will
probably work too.

>So each node has

>- oVirt Node NG / Centos
>- Ceph cluster member
>- iSCSI or Ceph iSCSI master domain

>How practical is such a setup?
Not sure, it could work, but it hasn't been tested and it's likely you
are going to be the first to try it
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PH6K2B2QMTRZPCRNBHWIV4OZB7X3NLHE/


[ovirt-users] Re: Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread Yedidyah Bar David
On Tue, Jan 19, 2021 at 12:59 PM James Freeman  wrote:
>
> So grateful for your help here - I ran tcpdump on the host, and I saw
> the connection requests to the host from the hosted-engine on 54321/tcp,
> so I was kind of getting there on the whole vdsm thing.
>
> The install just fell over again (same issue - the 120 second timeout
> you described). Taking a step back here, I think something is wrong very
> early on in my upgrade process. My environment is:
>
> 2 x RHEL based hosts (previously RHEL 7 - to be re-installed with RHEL 8
> as per install documentation)
> NFS based storage
> Self-hosted engine
>
> I have been following the documentation here:
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/upgrade_guide/index#SHE_Upgrading_from_4-3
>
> And specifically here:
>
> https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/upgrade_guide/index#Upgrading_the_Manager_to_4-4_4-3_SHE
>
> All pre-requisite steps are done - the 4.3 engine was upgraded to the
> latest version before the backup was taken and it was shut down.
>
> Now, I note that on my RHEL 8 host (newly installed), vdsmd is not
> configured or running. The deploy script is not opening the firewall for
> the temporary manager to talk to the host on 54321, but it wouldn't
> matter if it did - even if I were to open up the firewall, there's no
> configured vdsmd running for it to talk to anyway.
>
> I suddenly have the feeling that I've missed an important step that
> would have configured the freshly installed RHEL 8 host for the
> hosted-engine to be installed on - but I can't see what this might be.
> I've been back and forth through the documentation but I can't see where
> vdsmd would have been configured on the host. In short (ignoring all the

This should happen automatically; it does not require a manual step on your side.
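
If you want to confirm that on the host before the next attempt, a quick check
could look like this (package names are the oVirt 4.4 defaults; vdsm-tool is
only needed if the service really was left unconfigured):

# is vdsm installed, and did host-deploy configure and start it?
rpm -q vdsm ovirt-host
systemctl status vdsmd supervdsmd

# if vdsmd refuses to start because it is not configured, this usually fixes it
vdsm-tool configure --force
systemctl restart vdsmd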

> failed attempts), my commands to install on a fresh RHEL 8 host have been:
>
> dnf module reset virt
> dnf module list virt
> dnf module enable virt:8.3
> dnf distro-sync --nobest
> dnf install rhvm-appliance
> reboot
> dnf install ovirt-hosted-engine-setup

Just to make sure, perhaps try also 'dnf install ovirt-host'.
If this does pull in additional requirements, perhaps that's a bug
somewhere. But I do not think this is what is failing you.

> dnf install firewalld
> systemctl status firewalld
> systemctl enable firewalld
> systemctl start firewalld

I do not think these are needed - the deploy process should do this.
Should be harmless, though.

> systemctl status firewalld
> hosted-engine --deploy --restore-from-file=backup.bck
>
> Am I missing something fundamental, or is there another step that's not
> working where vdsmd would have been configured?

Sorry, I ignored the fact that it's an upgrade/restore. In this case,
it's expected that the restored engine will not have access to all
other hosts during deploy, until it's started on the external network.
So I suggest to ignore most errors in engine.log and check only those
related to the host you deploy on. And check host-deploy/* logs.
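
Concretely, on the engine VM something like this narrows engine.log down to
the host being deployed (the hostname is taken from your earlier log excerpt):

grep 'rhvh1.example.org' /var/log/ovirt-engine/engine.log | grep -iE 'error|fail' | tail -n 50
ls -lrt /var/log/ovirt-engine/host-deploy/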

For a general overview of the hosted-engine deploy process, you might
want to check 'Simone Tiraboschi - Hosted Engine 4.3 Deep Dive' in:

https://www.ovirt.org/community/archived_conferences_presentations.html

I think it's still the best presentation slides we have on this.

Good luck,

>
> Many thanks
>
> James
>
> Yedidyah Bar David wrote on 19/01/2021 10:36:
> > On Tue, Jan 19, 2021 at 12:25 PM James Freeman  wrote:
> >> Thanks Didi
> >>
> >> Great pointer - I have just performed a fresh deploy - am in the
> >> hosted-engine VM, and in /var/log/ovirt-engine/engine-log, I can see the
> >> following 3 lines cycling over and over again:
> >>
> >> 2021-01-19 05:12:11,395-05 INFO
> >> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
> >> Reactor) [] Connecting to rhvh1.example.org/192.168.50.31
> >> 2021-01-19 05:12:11,399-05 ERROR
> >> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
> >> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
> >> [] Unable to RefreshCapabilities: ConnectException: Connection refused
> >> 2021-01-19 05:12:11,401-05 ERROR
> >> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> >> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
> >> [] Command 'GetCapabilitiesAsyncVDSCommand(HostName = rhvh1.example.org,
> >> VdsIdAndVdsVDSCommandParametersBase:{hostId='12057f7e-a4cf-46ec-b563-c1037ba5c62d',
> >> vds='Host[rhvh1.example.org,12057f7e-a4cf-46ec-b563-c1037ba5c62d]'})'
> >> execution failed: java.net.ConnectException: Connection refused
> >>
> >> I can ping 192.168.50.31 and resolve rhvh1.example.org - however I note
> >> that firewalld on the hypervisor host (192.168.50.31) hasn't had
> >> anything allowed through it yet apart from SSH and Cockpit. Is this a
> >> problem, or a red herring?
> > Generally speaking, the deploy process 

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Sandro Bonazzola
On Tue, Jan 19, 2021 at 09:07, Gianluca Cecchi <
gianluca.cec...@gmail.com> wrote:

> On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:
>
>> Ceph support is available via Managed Block Storage (tech preview), it
>> cannot be used instead of gluster for hyperconverged setups.
>>
>>
> Just for clarification: when you say Managed Block Storage you mean
> cinderlib integration, correct?
> Is the one below still the correct reference page for 4.4?
>
> https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
>
> So are the manual steps still needed (and also the repo config, which seems
> to point at Pike)?
> Or do you have an updated link for configuring cinderlib in 4.4?
>

The above-mentioned page was a feature development page and is not considered
end-user documentation.
Updated documentation is here:
https://ovirt.org/documentation/installing_ovirt_as_a_standalone_manager_with_local_databases/#Set_up_Cinderlib




>
> Moreover, it is not possible to use a pure Managed Block Storage setup
>> at all, there has to be at least one regular storage domain in a
>> datacenter
>>
>>
> Is this true only for Self Hosted Engine Environment or also if I have an
> external engine?
>
> Thanks,
> Gianluca
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHSQO6WLMTVDNTVFACLOEFOFOD3GRYLW/
>


-- 

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com


*Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.
*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MB4FAL34LAJJWVYR247R7T2T6IQE6VP3/


[ovirt-users] Re: Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread James Freeman
So grateful for your help here - I ran tcpdump on the host, and I saw 
the connection requests to the host from the hosted-engine on 54321/tcp, 
so I was kind of getting there on the whole vdsm thing.
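
For reference, a capture along these lines is what showed the attempts (54321
is the default vdsm port):

tcpdump -nn -i any 'tcp port 54321'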


The install just fell over again (same issue - the 120 second timeout 
you described). Taking a step back here, I think something is wrong very 
early on in my upgrade process. My environment is:


2 x RHEL based hosts (previously RHEL 7 - to be re-installed with RHEL 8 
as per install documentation)

NFS based storage
Self-hosted engine

I have been following the documentation here:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/upgrade_guide/index#SHE_Upgrading_from_4-3

And specifically here:

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.4/html-single/upgrade_guide/index#Upgrading_the_Manager_to_4-4_4-3_SHE

All pre-requisite steps are done - the 4.3 engine was upgraded to the 
latest version before the backup was taken and it was shut down.


Now, I note that on my RHEL 8 host (newly installed), vdsmd is not 
configured or running. The deploy script is not opening the firewall for 
the temporary manager to talk to the host on 54321, but it wouldn't 
matter if it did - even if I were to open up the firewall, there's no 
configured vdsmd running for it to talk to anyway.


I suddenly have the feeling that I've missed an important step that 
would have configured the freshly installed RHEL 8 host for the 
hosted-engine to be installed on - but I can't see what this might be. 
I've been back and forth through the documentation but I can't see where 
vdsmd would have been configured on the host. In short (ignoring all the 
failed attempts), my commands to install on a fresh RHEL 8 host have been:


dnf module reset virt
dnf module list virt
dnf module enable virt:8.3
dnf distro-sync --nobest
dnf install rhvm-appliance
reboot
dnf install ovirt-hosted-engine-setup
dnf install firewalld
systemctl status firewalld
systemctl enable firewalld
systemctl start firewalld
systemctl status firewalld
hosted-engine --deploy --restore-from-file=backup.bck

Am I missing something fundamental, or is there another step that's not 
working where vdsmd would have been configured?


Many thanks

James

Yedidyah Bar David wrote on 19/01/2021 10:36:

On Tue, Jan 19, 2021 at 12:25 PM James Freeman  wrote:

Thanks Didi

Great pointer - I have just performed a fresh deploy - am in the
hosted-engine VM, and in /var/log/ovirt-engine/engine-log, I can see the
following 3 lines cycling over and over again:

2021-01-19 05:12:11,395-05 INFO
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
Reactor) [] Connecting to rhvh1.example.org/192.168.50.31
2021-01-19 05:12:11,399-05 ERROR
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
[] Unable to RefreshCapabilities: ConnectException: Connection refused
2021-01-19 05:12:11,401-05 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
[] Command 'GetCapabilitiesAsyncVDSCommand(HostName = rhvh1.example.org,
VdsIdAndVdsVDSCommandParametersBase:{hostId='12057f7e-a4cf-46ec-b563-c1037ba5c62d',
vds='Host[rhvh1.example.org,12057f7e-a4cf-46ec-b563-c1037ba5c62d]'})'
execution failed: java.net.ConnectException: Connection refused

I can ping 192.168.50.31 and resolve rhvh1.example.org - however I note
that firewalld on the hypervisor host (192.168.50.31) hasn't had
anything allowed through it yet apart from SSH and Cockpit. Is this a
problem, or a red herring?

Generally speaking, the deploy process connects first from the engine to
the host via ssh (22), then (also) configures firewalld to allow access
to vdsm (the oVirt host-side agent, port 54321), and later the engine
normally communicates with the host via vdsm.

Whether or not all of this worked, depends on exactly how you configured
your host's firewalld beforehand.

I suggest to start by not touching it, do the deployment, then see what
it does/did (and that it worked), then decide how you are going to adapt
your policy/tooling/whatever for later deployments, assuming you want to
harden your hosts before deploying.


It seems that the hosted-engine is coming up and being installed and
configured ok. The engine health page looks ok (as validated by
Ansible). It looks like the hosted-engine is waiting for something to
happen on the host itself, but this never completed - which I suspect it
never will given that it cannot connect to the host.

The deploy process runs on the host, connects to the engine, asks it to
add the host, then waits until it sees the host in the engine with status
'Up'. It indeed does not try to further diagnose failures, nor fail more
quickly - if it's 'Up' it's quick, if it's not, it will wait for a timeout
(120 times * 10 seconds = 20 minutes).


Am I on the right track?

[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin


> On 19 Jan 2021, at 13:39, Shantur Rathore  wrote:
> 
> I have tested all options but oVirt seems to tick most required boxes.
> 
> OpenStack : Too complex for use case
> Proxmox : Love Ceph support but very basic clustering support
> OpenNebula : Weird VM state machine.
> 
> Not sure if you know that rbd-nbd support is going to be implemented in 
> Cinderlib. I could understand why oVirt wants to support CinderLib and 
> deprecate Cinder support.

Yes, we love oVirt for “that should work like this” - before oVirt 4.4...
Now imagine: your current cluster ran with qemu-rbd and Cinder, now you
upgrade oVirt and can’t do anything - you can’t migrate, your images are in
another oVirt pool, engine-setup can’t migrate the current images to MBS -
all of it in “feature preview”, the older integration broken, then abandoned.


Thanks,
k
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XZGDUICDWAPGMVQM6V5K4IRZE46PJ3O6/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Shantur Rathore
@Konstantin Shalygin  :
>
> I recommend looking at OpenStack or at OpenNebula/Proxmox if you want to
> use Ceph storage.

I have tested all options but oVirt seems to tick most required boxes.

OpenStack : Too complex for use case
Proxmox : Love Ceph support but very basic clustering support
OpenNebula : Weird VM state machine.

Not sure if you know that rbd-nbd support is going to be implemented in
Cinderlib. I could understand why oVirt wants to support CinderLib and
deprecate Cinder support.

@Strahil Nikolov 

> Most probably it will be easier if you stick with a full-blown distro.

Yesterday, I was able to bring up a single-host, single-disk Ceph cluster on
oVirt Node NG 4.4.4 after enabling some repositories. Having said that, I
didn't try image-based upgrades of the host.
I read somewhere that RPMs are now persisted between host upgrades in Node
NG.

@Benny Zlotnik

> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

Thanks for pointing out the requirement for Master domain. In theory, will
I be able to satisfy the requirement with another iSCSI or maybe Ceph iSCSI
as master domain?

So each node has

- oVirt Node NG / Centos
- Ceph cluster member
- iSCSI or Ceph iSCSI master domain

How practical is such a setup?

Thanks,
Shantur

On Tue, Jan 19, 2021 at 9:39 AM Konstantin Shalygin  wrote:

> Yep, BZ is
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1539837
> https://bugzilla.redhat.com/show_bug.cgi?id=1904669
> https://bugzilla.redhat.com/show_bug.cgi?id=1905113
>
> Thanks,
> k
>
> On 19 Jan 2021, at 11:05, Gianluca Cecchi 
> wrote:
>
> perhaps a copy paste error about the bugzilla entries? They are the same
> number...
>
>
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JNARS3TLZQH62EISYLYGN4STSKFCBX5F/


[ovirt-users] Re: Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread Yedidyah Bar David
On Tue, Jan 19, 2021 at 12:25 PM James Freeman  wrote:
>
> Thanks Didi
>
> Great pointer - I have just performed a fresh deploy - am in the
> hosted-engine VM, and in /var/log/ovirt-engine/engine-log, I can see the
> following 3 lines cycling over and over again:
>
> 2021-01-19 05:12:11,395-05 INFO
> [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp
> Reactor) [] Connecting to rhvh1.example.org/192.168.50.31
> 2021-01-19 05:12:11,399-05 ERROR
> [org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
> [] Unable to RefreshCapabilities: ConnectException: Connection refused
> 2021-01-19 05:12:11,401-05 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand]
> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96)
> [] Command 'GetCapabilitiesAsyncVDSCommand(HostName = rhvh1.example.org,
> VdsIdAndVdsVDSCommandParametersBase:{hostId='12057f7e-a4cf-46ec-b563-c1037ba5c62d',
> vds='Host[rhvh1.example.org,12057f7e-a4cf-46ec-b563-c1037ba5c62d]'})'
> execution failed: java.net.ConnectException: Connection refused
>
> I can ping 192.168.50.31 and resolve rhvh1.example.org - however I note
> that firewalld on the hypervisor host (192.168.50.31) hasn't had
> anything allowed through it yet apart from SSH and Cockpit. Is this a
> problem, or a red herring?

Generally speaking, the deploy process connects first from the engine to
the host via ssh (22), then (also) configures firewalld to allow access
to vdsm (the oVirt host-side agent, port 54321), and later the engine
normally communicates with the host via vdsm.

Whether or not all of this worked, depends on exactly how you configured
your host's firewalld beforehand.

I suggest to start by not touching it, do the deployment, then see what
it does/did (and that it worked), then decide how you are going to adapt
your policy/tooling/whatever for later deployments, assuming you want to
harden your hosts before deploying.
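
As a concrete follow-up after a deploy attempt, something along these lines
(run on the host; the values shown are the oVirt defaults) tells you whether
the vdsm port was opened and whether anything is listening on it:

# services/ports that host-deploy opened in the active firewalld zone
firewall-cmd --list-services
firewall-cmd --list-ports

# is vdsm actually listening on the default port 54321?
ss -tlnp | grep 54321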

>
> It seems that the hosted-engine is coming up and being installed and
> configured ok. The engine health page looks ok (as validated by
> Ansible). It looks like the hosted-engine is waiting for something to
> happen on the host itself, but this never completed - which I suspect it
> never will given that it cannot connect to the host.

The deploy process runs on the host, connects to the engine, asks it to
add the host, then waits until it sees the host in the engine with status
'Up'. It indeed does not try to further diagnose failures, nor fail more
quickly - if it's 'Up' it's quick, if it's not, it will wait for a timeout
(120 times * 10 seconds = 20 minutes).
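
While it sits in that loop, tailing the setup and vdsm logs on the host
(default paths shown) usually reveals what the engine is still waiting for:

tail -f /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-*.log
tail -f /var/log/vdsm/vdsm.log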

>
> Am I on the right track?

You are :-).

Good luck and best regards,

>
> Yedidyah Bar David wrote on 19/01/2021 10:06:
> > On Tue, Jan 19, 2021 at 11:44 AM  wrote:
> >> Hi all
> >>
> >> I am in the process of migrating a RHV 4.3 setup to RHV 4.4 and struggling 
> >> with the setup. I am installing on RHEL 8.3, using settings backed up from 
> >> the RHV 4.3 install (via 'hosted-engine --deploy 
> >> --restore-from-file=backup.bck').
> >>
> >> The install process always fails at the same point for me at the moment, 
> >> and I can't figure out how to get past it. As far as install progress 
> >> goes, the local hosted-engine comes up and runs on the node. I have been 
> >> able to grep for local_vm_ip in the logs, and can SSH into it with the 
> >> password I set during the setup phase.
> >>
> >> However the install playbooks always fail with:
> >> 2021-01-18 18:38:00,086-0500 ERROR otopi.plugins.gr_he_common.core.misc 
> >> misc._terminate:167 Hosted Engine deployment failed: please check the logs 
> >> for the issue, fix accordingly or re-deploy from scratch.
> >>
> >> Earlier in the logs, I note the following:
> >> 2021-01-18 18:34:51,258-0500 ERROR 
> >> otopi.ovirt_hosted_engine_setup.ansible_utils 
> >> ansible_utils._process_output:109 fatal: [localhost]: FAILED! => 
> >> {"changed": false, "msg": "Host is not up, please check logs, perhaps also 
> >> on the engine machine"}
> >> 2021-01-18 18:37:16,661-0500 ERROR 
> >> otopi.ovirt_hosted_engine_setup.ansible_utils 
> >> ansible_utils._process_output:109 fatal: [localhost]: FAILED! => 
> >> {"changed": false, "msg": "The system may not be provisioned according to 
> >> the playbook results: please check the logs for the issue, fix accordingly 
> >> or re-deploy from scratch.\n"}
> >> Traceback (most recent call last):
> >>File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
> >> _executeMethod
> >>  method['method']()
> >>File 
> >> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
> >>  line 435, in _closeup
> >>  raise RuntimeError(_('Failed executing ansible-playbook'))
> >> RuntimeError: Failed executing ansible-playbook
> >> 2021-01-18 18:37:18,996-0500 ERROR otopi.context 
> >> context._executeMethod:154 Failed 

[ovirt-users] Re: Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread James Freeman

Thanks Didi

Great pointer - I have just performed a fresh deploy - am in the 
hosted-engine VM, and in /var/log/ovirt-engine/engine-log, I can see the 
following 3 lines cycling over and over again:


2021-01-19 05:12:11,395-05 INFO 
[org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp 
Reactor) [] Connecting to rhvh1.example.org/192.168.50.31
2021-01-19 05:12:11,399-05 ERROR 
[org.ovirt.engine.core.vdsbroker.monitoring.HostMonitoring] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) 
[] Unable to RefreshCapabilities: ConnectException: Connection refused
2021-01-19 05:12:11,401-05 ERROR 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetCapabilitiesAsyncVDSCommand] 
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-96) 
[] Command 'GetCapabilitiesAsyncVDSCommand(HostName = rhvh1.example.org, 
VdsIdAndVdsVDSCommandParametersBase:{hostId='12057f7e-a4cf-46ec-b563-c1037ba5c62d', 
vds='Host[rhvh1.example.org,12057f7e-a4cf-46ec-b563-c1037ba5c62d]'})' 
execution failed: java.net.ConnectException: Connection refused


I can ping 192.168.50.31 and resolve rhvh1.example.org - however I note 
that firewalld on the hypervisor host (192.168.50.31) hasn't had 
anything allowed through it yet apart from SSH and Cockpit. Is this a 
problem, or a red herring?


It seems that the hosted-engine is coming up and being installed and 
configured ok. The engine health page looks ok (as validated by 
Ansible). It looks like the hosted-engine is waiting for something to 
happen on the host itself, but this never completed - which I suspect it 
never will given that it cannot connect to the host.


Am I on the right track?

Yedidyah Bar David wrote on 19/01/2021 10:06:

On Tue, Jan 19, 2021 at 11:44 AM  wrote:

Hi all

I am in the process of migrating a RHV 4.3 setup to RHV 4.4 and struggling with 
the setup. I am installing on RHEL 8.3, using settings backed up from the RHV 
4.3 install (via 'hosted-engine --deploy --restore-from-file=backup.bck').

The install process always fails at the same point for me at the moment, and I 
can't figure out how to get past it. As far as install progress goes, the local 
hosted-engine comes up and runs on the node. I have been able to grep for 
local_vm_ip in the logs, and can SSH into it with the password I set during the 
setup phase.

However the install playbooks always fail with:
2021-01-18 18:38:00,086-0500 ERROR otopi.plugins.gr_he_common.core.misc 
misc._terminate:167 Hosted Engine deployment failed: please check the logs for 
the issue, fix accordingly or re-deploy from scratch.

Earlier in the logs, I note the following:
2021-01-18 18:34:51,258-0500 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"changed": false, 
"msg": "Host is not up, please check logs, perhaps also on the engine machine"}
2021-01-18 18:37:16,661-0500 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"changed": false, 
"msg": "The system may not be provisioned according to the playbook results: please check the 
logs for the issue, fix accordingly or re-deploy from scratch.\n"}
Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
_executeMethod
 method['method']()
   File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
 line 435, in _closeup
 raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2021-01-18 18:37:18,996-0500 ERROR otopi.context context._executeMethod:154 
Failed to execute stage 'Closing up': Failed executing ansible-playbook
2021-01-18 18:37:32,421-0500 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 fatal: [localhost]: 
UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 
rhvm.example.org port 22: No route to host", "skip_reason": "Host localhost is unreachable", "unreachable": 
true}

I find the unreachable message a bit odd, as at this stage all that has 
happened is that the local hosted-engine has been brought up to be configured, 
and so it is running on virbr0, not on my actual network. As a result, that DNS 
address will never resolve, and the IP it resolves to won't be up. I gave the 
installation script permission to modify the local /etc/hosts but this hasn't 
improved things.

I presume I'm missing something in the install process, or earlier on in the 
logs, but I've been scanning for errors and possible clues to no avail.

Any and all help greatly appreciated!

Please check/share, on the engine machine under /var/log/ovirt-engine,
or, if inaccessible, on the host, under
/var/log/ovirt-hosted-engine-setup/engine-logs-*:

engine.log

host-deploy/*

Good luck and best regards,

___
Users mailing list -- 

[ovirt-users] Re: Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread Yedidyah Bar David
On Tue, Jan 19, 2021 at 11:44 AM  wrote:
>
> Hi all
>
> I am in the process of migrating a RHV 4.3 setup to RHV 4.4 and struggling 
> with the setup. I am installing on RHEL 8.3, using settings backed up from 
> the RHV 4.3 install (via 'hosted-engine --deploy 
> --restore-from-file=backup.bck').
>
> The install process always fails at the same point for me at the moment, and 
> I can't figure out how to get past it. As far as install progress goes, the 
> local hosted-engine comes up and runs on the node. I have been able to grep 
> for local_vm_ip in the logs, and can SSH into it with the password I set 
> during the setup phase.
>
> However the install playbooks always fail with:
> 2021-01-18 18:38:00,086-0500 ERROR otopi.plugins.gr_he_common.core.misc 
> misc._terminate:167 Hosted Engine deployment failed: please check the logs 
> for the issue, fix accordingly or re-deploy from scratch.
>
> Earlier in the logs, I note the following:
> 2021-01-18 18:34:51,258-0500 ERROR 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"changed": 
> false, "msg": "Host is not up, please check logs, perhaps also on the engine 
> machine"}
> 2021-01-18 18:37:16,661-0500 ERROR 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:109 fatal: [localhost]: FAILED! => {"changed": 
> false, "msg": "The system may not be provisioned according to the playbook 
> results: please check the logs for the issue, fix accordingly or re-deploy 
> from scratch.\n"}
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
> _executeMethod
> method['method']()
>   File 
> "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
>  line 435, in _closeup
> raise RuntimeError(_('Failed executing ansible-playbook'))
> RuntimeError: Failed executing ansible-playbook
> 2021-01-18 18:37:18,996-0500 ERROR otopi.context context._executeMethod:154 
> Failed to execute stage 'Closing up': Failed executing ansible-playbook
> 2021-01-18 18:37:32,421-0500 ERROR 
> otopi.ovirt_hosted_engine_setup.ansible_utils 
> ansible_utils._process_output:109 fatal: [localhost]: UNREACHABLE! => 
> {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: 
> connect to host rhvm.example.org port 22: No route to host", "skip_reason": 
> "Host localhost is unreachable", "unreachable": true}
>
> I find the unreachable message a bit odd, as at this stage all that has 
> happened is that the local hosted-engine has been brought up to be 
> configured, and so it is running on virbr0, not on my actual network. As a 
> result, that DNS address will never resolve, and the IP it resolves to won't 
> be up. I gave the installation script permission to modify the local 
> /etc/hosts but this hasn't improved things.
>
> I presume I'm missing something in the install process, or earlier on in the 
> logs, but I've been scanning for errors and possible clues to no avail.
>
> Any and all help greatly appreciated!

Please check/share the following, from /var/log/ovirt-engine on the engine
machine or, if that is inaccessible, from
/var/log/ovirt-hosted-engine-setup/engine-logs-* on the host:

engine.log

host-deploy/*
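
A minimal way to gather these, assuming the engine VM is still only reachable
on its temporary virbr0 address (LOCAL_VM_IP below is a placeholder; use the
local_vm_ip value from your setup log):

# copies the setup keeps on the host, if the run got far enough
ls /var/log/ovirt-hosted-engine-setup/engine-logs-*/
# otherwise pull them straight from the local engine VM
scp root@LOCAL_VM_IP:/var/log/ovirt-engine/engine.log .
scp -r root@LOCAL_VM_IP:/var/log/ovirt-engine/host-deploy .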

Good luck and best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CORIDDYMUPDHGBUGL4DV5IZ4T5QZPJGL/


[ovirt-users] Install of RHV 4.4 failing - "Host is not up, please check logs, perhaps also on the engine machine"

2021-01-19 Thread james . freeman
Hi all

I am in the process of migrating a RHV 4.3 setup to RHV 4.4 and struggling with 
the setup. I am installing on RHEL 8.3, using settings backed up from the RHV 
4.3 install (via 'hosted-engine --deploy --restore-from-file=backup.bck').

The install process always fails at the same point for me at the moment, and I 
can't figure out how to get past it. As far as install progress goes, the local 
hosted-engine comes up and runs on the node. I have been able to grep for 
local_vm_ip in the logs, and can SSH into it with the password I set during the 
setup phase.
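
Concretely, those two checks looked roughly like this (the log file glob is
indicative only; use whichever files your deployment run produced):

# find the temporary address the local engine VM was given
grep -h local_vm_ip /var/log/ovirt-hosted-engine-setup/*.log | tail -n 1
# log in with the root password chosen during the deploy dialog
ssh root@<local_vm_ip>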

However the install playbooks always fail with:
2021-01-18 18:38:00,086-0500 ERROR otopi.plugins.gr_he_common.core.misc 
misc._terminate:167 Hosted Engine deployment failed: please check the logs for 
the issue, fix accordingly or re-deploy from scratch.

Earlier in the logs, I note the following:
2021-01-18 18:34:51,258-0500 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Host is not up, 
please check logs, perhaps also on the engine machine"}
2021-01-18 18:37:16,661-0500 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
fatal: [localhost]: FAILED! => {"changed": false, "msg": "The system may not be 
provisioned according to the playbook results: please check the logs for the 
issue, fix accordingly or re-deploy from scratch.\n"}
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/otopi/context.py", line 132, in 
_executeMethod
method['method']()
  File 
"/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py",
 line 435, in _closeup
raise RuntimeError(_('Failed executing ansible-playbook'))
RuntimeError: Failed executing ansible-playbook
2021-01-18 18:37:18,996-0500 ERROR otopi.context context._executeMethod:154 
Failed to execute stage 'Closing up': Failed executing ansible-playbook
2021-01-18 18:37:32,421-0500 ERROR 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:109 
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to 
connect to the host via ssh: ssh: connect to host rhvm.example.org port 22: No 
route to host", "skip_reason": "Host localhost is unreachable", "unreachable": 
true}

I find the unreachable message a bit odd, as at this stage all that has 
happened is that the local hosted-engine has been brought up to be configured, 
and so it is running on virbr0, not on my actual network. As a result, that DNS 
address will never resolve, and the IP it resolves to won't be up. I gave the 
installation script permission to modify the local /etc/hosts but this hasn't 
improved things. 
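
For reference, the sort of check/workaround I mean (rhvm.example.org is the
engine FQDN from the logs above, the 192.168.x.x address is a placeholder for
whatever local_vm_ip reports, and I am assuming the local VM sits on the
default libvirt network):

# what the deployment host resolves the engine FQDN to right now
getent hosts rhvm.example.org
# the lease the local engine VM got on virbr0
virsh -r net-dhcp-leases default
# temporary mapping to the local VM, to be removed once deployment finishes
echo "192.168.x.x rhvm.example.org" >> /etc/hosts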

I presume I'm missing something in the install process, or earlier on in the 
logs, but I've been scanning for errors and possible clues to no avail.

Any and all help greatly appreciated!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/765U6UQFKK4NRMP4FQIKMJAQEXJKUFLH/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Yep, the BZs are:

https://bugzilla.redhat.com/show_bug.cgi?id=1539837 

https://bugzilla.redhat.com/show_bug.cgi?id=1904669 

https://bugzilla.redhat.com/show_bug.cgi?id=1905113 


Thanks,
k

> On 19 Jan 2021, at 11:05, Gianluca Cecchi  wrote:
> 
> perhaps a copy paste error about the bugzilla entries? They are the same 
> number...

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QCYCKFFM2LSZSZZIQX4Q5GEOYDO2I5GU/


[ovirt-users] Re: VM Disks order

2021-01-19 Thread Arik Hadas
On Mon, Jan 18, 2021 at 1:12 PM Erez Zarum  wrote:

> When attaching a disk it is not possible to set the disk order nor modify
> the order later.
> Example:
> A new VM is provisioned with 5 disks, Disk0 is the OS and then later
> attached disks by order up to Disk4.
> Removing Disk3 and then later attaching does not promise it will be
> attached back as Disk3.
> In most other platforms it is possible to set the order.
>
> Am i missing something? if not, is there a plan to add this feature?
>

Which version of oVirt are you using?
The scenario you describe is likely to be fixed in 4.4.3 by [1], for SCSI
disks, and assuming the VM keeps running between the detach and attach
operations.

[1] https://gerrit.ovirt.org/#/c/ovirt-engine/+/28/


> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/7TPSXUQ4WKVAHUP4QV5GITAXFF2BJBYY/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WUVWLT4373P4TVSZ3KLGGQIRJAATLQMY/


[ovirt-users] Re: noVNC error.

2021-01-19 Thread James Loker-Steele via Users
This is to do with the cert; the same thing happened here.
Turn off VNC encryption under Compute > Cluster > Console settings.

You might need to put the host that you cannot access into maintenance and
reinstall it.
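
If you want to confirm it is the certificate/encryption before changing
anything, you can mirror the proxy's attempt from the log in the quoted
message (host and port taken from that log; this is just a quick probe, not
an official procedure):

# mirror the "connecting to ... (using SSL)" attempt from the proxy log;
# a handshake / unknown protocol failure here means the console on that host
# is not answering plain TLS, which points at the cert/encryption setup
openssl s_client -connect ohost2.tltd.com:5900 </dev/null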

Sent from my iPhone

> On 19 Jan 2021, at 07:49, tommy  wrote:
> 
> 
> Hi:
>  
> I use novnc console can connect to engine vm.
>  
> But when I using novnc console connect to other vm in other datacenter, 
> failed.
>  
> The ovirt-websocket-proxy log is:
>  
> Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]: 192.168.10.104 
> - - [19/Jan/2021 15:43:32] connecting to: ohost2.tltd.com:5900 (using SSL)
> Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]: 
> ovirt-websocket-proxy[24096] INFO msg:824 handler exception: [SSL: 
> UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:618)
>  
> What reason ?
>  
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/MHY4QCLAIDPL5AHJ6YURGKKJEM73LZT2/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WUNH7YSO7M2W2CKK5RGCDPNNI2XLJPTY/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Benny Zlotnik
>Just for clarification: when you say Managed Block Storage you mean cinderlib 
>integration, correct?
>Is the one below still the correct reference page for 4.4?
>https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html
Yes.

>So are the manual steps still needed (and also the repo config, which seems
>to be against Pike)?
>Or do you have an updated link for configuring cinderlib in 4.4?
It is slightly outdated; I and other users have successfully used
Ussuri. I will update the feature page today.
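
As a rough sketch of what using Ussuri looks like on an EL8 engine machine
(package names here are from memory and follow the usual RDO/CentOS naming,
so treat the updated feature page as the authoritative reference):

# enable the OpenStack Ussuri (RDO) repositories
dnf install -y centos-release-openstack-ussuri
# cinderlib itself, used by the engine for Managed Block Storage
dnf install -y python3-cinderlib
# re-run engine-setup and enable the cinderlib database when prompted
engine-setup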

>Is this true only for Self Hosted Engine Environment or also if I have an 
>external engine?
External engine as well. The reason this is required is that only
regular domains can serve as master domains, which is required for a
host to get the SPM role.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JUV5F6GKRNFOCXB2BPW2ZY4UUZZ25DTV/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 9:01 AM Konstantin Shalygin  wrote:

> Shantur, I recommend looking at OpenStack, or perhaps OpenNebula/Proxmox, if
> you want to use Ceph storage.
> The current oVirt storage team support can break something and then not
> work on it anymore; take a look at what I am talking about in [1], [2], [3]
>
>
> k
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
> [3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453
>
>
>
>
perhaps a copy paste error about the bugzilla entries? They are the same
number...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3XYMG4QUM3TTTL45XGXUWA6DOWIWDQ64/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Gianluca Cecchi
On Tue, Jan 19, 2021 at 8:43 AM Benny Zlotnik  wrote:

> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
>
>
Just for clarification: when you say Managed Block Storage you mean
cinderlib integration, correct?
Is the one below still the correct reference page for 4.4?
https://www.ovirt.org/develop/release-management/features/storage/cinderlib-integration.html

So are the manual steps still needed (and also the repo config, which seems
to be against Pike)?
Or do you have an updated link for configuring cinderlib in 4.4?

Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter
>
>
Is this true only for Self Hosted Engine Environment or also if I have an
external engine?

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SHSQO6WLMTVDNTVFACLOEFOFOD3GRYLW/


[ovirt-users] Re: Managed Block Storage and more

2021-01-19 Thread Konstantin Shalygin
Shantur, I recommend looking at OpenStack, or perhaps OpenNebula/Proxmox, if
you want to use Ceph storage.
The current oVirt storage team support can break something and then not work
on it anymore; take a look at what I am talking about in [1], [2], [3]


k

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[2] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 

[3] https://bugzilla.redhat.com/show_bug.cgi?id=1899453 




> On 19 Jan 2021, at 10:40, Benny Zlotnik  wrote:
> 
> Ceph support is available via Managed Block Storage (tech preview), it
> cannot be used instead of gluster for hyperconverged setups.
> 
> Moreover, it is not possible to use a pure Managed Block Storage setup
> at all, there has to be at least one regular storage domain in a
> datacenter

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NQG6XHDYZT7WGCHDIUCY55IS7F5G5OVC/