[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread Strahil Nikolov via Users
I think it's complaining about the firewall. Try the restore again with 
firewalld running.
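
Something like this on the host should do it (a minimal sketch; the pre-check
only wants firewalld loaded and running, not masked):

# unmask/enable firewalld and confirm it is active before retrying
systemctl unmask firewalld
systemctl enable --now firewalld
systemctl is-active firewalld
# then re-run the deploy
hosted-engine --deploy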

Best Regards,
Strahil Nikolov






On Monday, 18 January 2021 at 17:52:04 GMT+2, penguin pages wrote: 







Following this document to redeploy the engine...

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment

### From the host which had the engine listed in its inventory ###
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'

  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
  -=== De-configure VDSM networks ===-
ovirtmgmt
A previously configured management bridge has been found on the system, this 
will try to de-configure it. Under certain circumstances you can loose network 
connection.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd.socket
  libvirtd-ro.socket
  libvirtd-admin.socket
  -=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
  -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
  -=== Removing IP Rules ===-
[root@medusa ~]# 
[root@medusa ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
          During customization use CTRL-D to abort.
          Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
          The locally running engine will be used to configure a new storage 
domain and create a VM there.


1) Error about firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 
'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'' failed. The error was: error while evaluating conditional 
(firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be 
in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\n    register: 
firewalld_s\n  - name: Enforce firewalld status\n    ^ here\n"}

###  Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
  Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor 
preset: enabled)
  Active: inactive (dead)
    Docs: man:firewalld(1)


2) Error about ssh to host ovirte01.penguinpages.local 
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed 
to connect to the host via ssh: ssh: connect to host 
ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host 
localhost is unreachable", "unreachable": true}

###.. Hmm.. well.. no kidding.. it is supposed to deploy the engine, so the IP 
should be offline until it does.  And as the VMs that run DNS are down, I am 
using the hosts file to bootstrap the environment.  Not sure what it expects.
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01



It did not go well. 

Attached are the deployment details as well as logs. 

Maybe someone can point out what I am doing wrong.  Last time I did this I used 
the HCI wizard.

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


After looking into the logs, I think the issue is the storage where it should 
deploy.  The wizard did not seem to focus on that.  I assumed it was aware of 
the volume from the previously detected deployment... but...



2021-01-18 10:34:07,917-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : Clean 
local storage pools]
2021-01-18 10:34:08,418-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 ok: [localhost]
2021-01-18 10:34:08,919-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:09,320-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-destroy {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:09,421-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-destroy {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:09,821-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ he_local_vm_dir | basename }}]
2021-01-18 10:34:10,223-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'Unexpected templating type error occurred on (virsh -c 
qemu:///system?authfile={{ he_libvirt_authfile }} pool-undefine {{ 
he_local_vm_dir | basename }}): expected str, bytes or os.PathLike object, not 
NoneType', '_ansible_no_log': False}
2021-01-18 10:34:10,323-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "Unexpected templating type error 
occurred on (virsh -c qemu:///system?authfile={{ he_libvirt_authfile }} 
pool-undefine {{ he_local_vm_dir | basename }}): expected str, bytes or 
os.PathLike object, not NoneType"}
2021-01-18 10:34:10,724-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:11,125-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 
\'local_vm_disk_path\' is undefined\n\nThe error appears to be in 
\'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml\':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ 
local_vm_disk_path.split(\'/\')[5] }}\n^ here\nWe could be wrong, but this 
one looks like it might be an issue with\nmissing quotes. Always quote template 
expression brackets when they\nstart a value. For instance:\n\n
with_items:\n  - {{ foo }}\n\nShould be written as:\n\nwith_items:\n
  - "{{ foo }}"\n', '_ansible_no_log': False}
2021-01-18 10:34:11,226-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
ignored: [localhost]: FAILED! => {"msg": "The task includes an option with an 
undefined variable. The error was: 'local_vm_disk_path' is undefined\n\nThe 
error appears to be in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/clean_local_storage_pools.yml':
 line 16, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nchanged_when: 
true\n  - name: Destroy local storage-pool {{ local_vm_disk_path.split('/')[5] 
}}\n^ here\nWe could be wrong, but this one looks like it might be an issue 
with\nmissing quotes. Always quote template expression brackets when 
they\nstart a value. For instance:\n\nwith_items:\n  - {{ foo 
}}\n\nShould be written as:\n\nwith_items:\n  - \"{{ foo }}\"\n"}
2021-01-18 10:34:11,626-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.ovirt.hosted_engine_setup : 
Undefine local storage-pool {{ local_vm_disk_path.split('/')[5] }}]
2021-01-18 10:34:12,028-0500 DEBUG 
otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 
{'msg': 'The task includes an option with an undefined variable. The error was: 

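For reference, a hedged manual equivalent of what those "Clean local storage
pools" tasks attempt, should a leftover bootstrap pool ever need removing by
hand (the pool name here is illustrative):

virsh -c qemu:///system pool-list --all
# destroy and undefine a leftover local bootstrap pool, e.g. one named localvmXXXXXX
virsh -c qemu:///system pool-destroy localvmXXXXXX
virsh -c qemu:///system pool-undefine localvmXXXXXX
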
[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-18 Thread penguin pages


Following this document to redeploy the engine...

https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment

### From the host which had the engine listed in its inventory ###
[root@medusa ~]# /usr/sbin/ovirt-hosted-engine-cleanup
 This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Destroy hosted-engine VM ===-
error: failed to get domain 'HostedEngine'

  -=== Stop HA services ===-
  -=== Shutdown sanlock ===-
shutdown force 1 wait 0
shutdown done 0
  -=== Disconnecting the hosted-engine storage domain ===-
  -=== De-configure VDSM networks ===-
ovirtmgmt
 A previously configured management bridge has been found on the system, this 
will try to de-configure it. Under certain circumstances you can loose network 
connection.
Caution, this operation should be used with care.

Are you sure you want to proceed? [y/n]
y
  -=== Stop other services ===-
Warning: Stopping libvirtd.service, but it can still be activated by:
  libvirtd.socket
  libvirtd-ro.socket
  libvirtd-admin.socket
  -=== De-configure external daemons ===-
Removing database file /var/lib/vdsm/storage/managedvolume.db
  -=== Removing configuration files ===-
? /etc/init/libvirtd.conf already missing
- removing /etc/libvirt/nwfilter/vdsm-no-mac-spoofing.xml
? /etc/ovirt-hosted-engine/answers.conf already missing
- removing /etc/ovirt-hosted-engine/hosted-engine.conf
- removing /etc/vdsm/vdsm.conf
- removing /etc/pki/vdsm/certs/cacert.pem
- removing /etc/pki/vdsm/certs/vdsmcert.pem
- removing /etc/pki/vdsm/keys/vdsmkey.pem
- removing /etc/pki/vdsm/libvirt-migrate/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-cert.pem
- removing /etc/pki/vdsm/libvirt-migrate/server-key.pem
- removing /etc/pki/vdsm/libvirt-spice/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-cert.pem
- removing /etc/pki/vdsm/libvirt-spice/server-key.pem
- removing /etc/pki/vdsm/libvirt-vnc/ca-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-cert.pem
- removing /etc/pki/vdsm/libvirt-vnc/server-key.pem
- removing /etc/pki/CA/cacert.pem
- removing /etc/pki/libvirt/clientcert.pem
- removing /etc/pki/libvirt/private/clientkey.pem
? /etc/pki/ovirt-vmconsole/*.pem already missing
- removing /var/cache/libvirt/qemu
? /var/run/ovirt-hosted-engine-ha/* already missing
? /var/tmp/localvm* already missing
  -=== Removing IP Rules ===-
[root@medusa ~]# 
[root@medusa ~]# hosted-engine --deploy
[ INFO  ] Stage: Initializing
[ INFO  ] Stage: Environment setup
  During customization use CTRL-D to abort.
  Continuing will configure this host for serving as hypervisor and 
will create a local VM with a running engine.
  The locally running engine will be used to configure a new storage 
domain and create a VM there.


1) Error about firewall
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The conditional check 
'firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'' failed. The error was: error while evaluating conditional 
(firewalld_s.status.SubState != 'running' or firewalld_s.status.LoadState == 
'masked'): 'dict object' has no attribute 'SubState'\n\nThe error appears to be 
in 
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/pre_checks/validate_firewalld.yml':
 line 8, column 5, but may\nbe elsewhere in the file depending on the exact 
syntax problem.\n\nThe offending line appears to be:\n\nregister: 
firewalld_s\n  - name: Enforce firewalld status\n^ here\n"}

###  Hmm.. that is dumb.. it's disabled to avoid issues
[root@medusa ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor 
preset: enabled)
   Active: inactive (dead)
 Docs: man:firewalld(1)


2) Error about ssh to host ovirte01.penguinpages.local 
[ ERROR ] fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed 
to connect to the host via ssh: ssh: connect to host 
ovirte01.penguinpages.local port 22: No route to host", "skip_reason": "Host 
localhost is unreachable", "unreachable": true}

###.. Hmm.. well.. no kidding.. it is supposed to deploy the engine, so the IP 
should be offline until it does.  And as the VMs that run DNS are down, I am 
using the hosts file to bootstrap the environment.  Not sure what it expects.
[root@medusa ~]# cat /etc/hosts |grep ovir
172.16.100.31 ovirte01.penguinpages.local ovirte01



It did not go well. 

Attached are the deployment details as well as logs. 

Maybe someone can point out what I am doing wrong.  Last time I did this I used 
the HCI wizard, but the hosted engine dashboard for "Virtualization" in cockpit 
(https://172.16.100.101:9090/ovirt-dashboard#/he) no longer offers a deployment 
UI option.



## Deployment 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread penguin pages
I do not know what happened to my other VMs.  Two are important, ns01 and ns02, 
which are my IDM cluster nodes, along with Plex and other utilities / services.

Most of the rest are throwaway VMs for testing / OCP / OKD.

I think I may have to redeploy... but my concerns are:

1) CentOS 8 Stream has package conflicts with cockpit and oVirt:
https://bugzilla.redhat.com/show_bug.cgi?id=1917011

2) I do have a backup, but I am hoping the deployment could redeploy and use the 
existing PostgreSQL DB and so save a rebuild.  The backup I have is weeks old, 
though, and lots of things have changed since.  (Need to automate backups to 
my NAS; on the todo list now.) 
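
If it comes to that, the hosted-engine deploy can consume an engine backup
directly; a minimal sketch (the file name is just an example):

# taken on the old engine VM while it was still running
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
# on the host, redeploy and restore from that file
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz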

I think I will try to redeploy and see how it goes...  Thanks for the help; I am 
sure this drama fest is not over.  More to come.


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread Strahil Nikolov via Users

> [root@medusa qemu]# virsh define /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
> Domain ns01 defined from /tmp/ns01.xml
> 
> [root@medusa qemu]# virsh start /tmp/ns01.xml
> Please enter your authentication name: admin
> Please enter your password:
> error: failed to get domain '/tmp/ns01.xml'
> 
> [root@medusa qemu]#

When you define the VM from the xml file, you then start it by name.

After defining, run 'virsh list'.
Based on your xml you should use 'virsh start ns01'.
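
Putting that together (using the same authfile connection string that appears
elsewhere in this thread):

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf define /tmp/ns01.xml
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list --all
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf start ns01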

Notice: as you can see, my HostedEngine uses '/var/run/vdsm/' instead
of '/rhev/data-center/mnt/glusterSD/...', which is actually just a
symbolic link.



<disk type='file' device='disk'>
  <source file='/var/run/vdsm/...'/>
  <serial>8ec7a465-151e-4ac3-92a7-965ecf854501</serial>
</disk>


When you start the HE, it might complain that this is missing, so you have to
create it.

If it complains that the network vdsm-ovirtmgmt is missing, you can also
define it via virsh:
# cat vdsm-ovirtmgmt.xml  
<network>
  <name>vdsm-ovirtmgmt</name>
  <uuid>8ded486e-e681-4754-af4b-5737c2b05405</uuid>
  <forward mode='bridge'/>
  <bridge name='ovirtmgmt'/>
</network>
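
Defining and starting it would then be roughly (a sketch, assuming the file
above is saved as vdsm-ovirtmgmt.xml):

virsh net-define vdsm-ovirtmgmt.xml
virsh net-start vdsm-ovirtmgmt
virsh net-autostart vdsm-ovirtmgmt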


Best Regards,
Strahil Nikolov


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread Alex K
On Sat, Jan 16, 2021, 16:08 penguin pages  wrote:

>
> Thanks for help following below.
>
> 1) Auth to Libvirtd and show VM "hosted engine"  but also now that I
> manually registered "ns01" per above
> [root@medusa ~]# vdsm-client Host getVMList
> [
> {
> "status": "Down",
> "statusTime": "2218288798",
> "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
> }
> ]
> [root@medusa ~]# virsh -c
> qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
> Welcome to virsh, the virtualization interactive terminal.
>
> Type:  'help' for help with commands
>'quit' to quit
>
> virsh # list --all
>  Id   NameState
> 
>  -HostedEngineshut off
>  -HostedEngineLocal   shut off
>  -ns01shut off
>
> 2) Start VM  but seems network is needed first
> virsh # start HostedEngine
> error: Failed to start domain HostedEngine
> error: Network not found: no network with matching name 'vdsm-ovirtmgmt'
>
> virsh # start HostedEngineLocal
> error: Failed to start domain HostedEngineLocal
> error: Requested operation is not valid: network 'default' is not active
>
> 3) Start Networks:  This is "next next" HCI+Gluster build so it called it
> "ovirtmgmt"
> virsh # net-list
>  Name  StateAutostart   Persistent
> 
>  ;vdsmdummy;   active   no  no
> virsh # net-autostart --network default
> Network default marked as autostarted
> virsh # net-start default
> Network default started
> virsh # start HostedEngineLocal
> error: Failed to start domain HostedEngineLocal
> error: Cannot access storage file '/var/tmp/localvmn4khg_ak/seed.iso': No
> such file or directory
>
> <<<>>
>
> virsh # dumpxml HostedEngineLocal
> [domain XML mostly stripped by the list archive; what survives: name
> HostedEngineLocal, uuid bb2006ce-838b-47a3-a049-7e3e5c7bb049, libosinfo id
> http://redhat.com/rhel/8.0, 16777216 KiB of memory, 4 vCPUs, hvm, emulator
> /usr/libexec/qemu-kvm, a disk backed by
> /var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645,
> and an rng device backed by /dev/random]
>
> virsh #
>
> ##So not sure what hosted engine needs an ISO image.  Can I remove this?
> virsh # change-media HostedEngineLocal /var/tmp/localvmn4khg_ak/seed.iso
> --eject
> Successfully ejected media.
>
> virsh # start HostedEngineLocal
> error: Failed to start domain HostedEngineLocal
> error: Cannot access storage file
> '/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645'
> (as uid:107, gid:107): No such file or directory
> [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree |grep
> e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645
> [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# pwd
> /gluster_bricks/engine/engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde
> [root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree
> .
> ├── dom_md
> │   ├── ids
> │   ├── inbox
> │   ├── leases
> │   ├── metadata
> │   ├── outbox
> │   └── xleases
> ├── ha_agent
> │   ├── hosted-engine.lockspace ->
> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c
> │   └── hosted-engine.metadata ->
> /run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027
> ├── images
> │   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
> │   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
> │   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
> │   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
> │   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
> │   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
> │   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
> │   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
> │   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
> │   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
> │   │   ├── 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-16 Thread penguin pages

Thanks for the help; following up below.

1) Authenticate to libvirtd and show the "HostedEngine" VM, and also "ns01", 
which I manually registered per above
[root@medusa ~]# vdsm-client Host getVMList
[
{
"status": "Down",
"statusTime": "2218288798",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
   'quit' to quit

virsh # list --all
 Id   NameState

 -HostedEngineshut off
 -HostedEngineLocal   shut off
 -ns01shut off

2) Start the VM, but it seems the network is needed first
virsh # start HostedEngine
error: Failed to start domain HostedEngine
error: Network not found: no network with matching name 'vdsm-ovirtmgmt'

virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Requested operation is not valid: network 'default' is not active

3) Start networks: this is a "next next" HCI+Gluster build, so it called it 
"ovirtmgmt"
virsh # net-list
 Name  StateAutostart   Persistent

 ;vdsmdummy;   active   no  no
virsh # net-autostart --network default
Network default marked as autostarted
virsh # net-start default
Network default started
virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Cannot access storage file '/var/tmp/localvmn4khg_ak/seed.iso': No such 
file or directory

<<<>>

virsh # dumpxml HostedEngineLocal
[domain XML mostly stripped by the list archive; what survives: name
HostedEngineLocal, uuid bb2006ce-838b-47a3-a049-7e3e5c7bb049, libosinfo id
http://redhat.com/rhel/8.0, 16777216 KiB of memory, 4 vCPUs, hvm, emulator
/usr/libexec/qemu-kvm, a disk backed by
/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645,
and an rng device backed by /dev/random]

virsh #

## Not sure why the hosted engine needs an ISO image.  Can I remove this?
virsh # change-media HostedEngineLocal /var/tmp/localvmn4khg_ak/seed.iso --eject
Successfully ejected media.

virsh # start HostedEngineLocal
error: Failed to start domain HostedEngineLocal
error: Cannot access storage file 
'/var/tmp/localvmn4khg_ak/images/e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645'
 (as uid:107, gid:107): No such file or directory
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree |grep 
e2e4d97c-3430-4880-888e-84c283a80052/0f78b6f7-7755-4fe5-90e3-d41df791a645
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# pwd
/gluster_bricks/engine/engine/3afc47ba-afb9-413f-8de5-8d9a2f45ecde
[root@medusa 3afc47ba-afb9-413f-8de5-8d9a2f45ecde]# tree
.
├── dom_md
│   ├── ids
│   ├── inbox
│   ├── leases
│   ├── metadata
│   ├── outbox
│   └── xleases
├── ha_agent
│   ├── hosted-engine.lockspace -> 
/run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/6023f2b1-ea6e-485b-9ac2-8decd5f7820d/b38a5e37-fac4-4c23-a0c4-7359adff619c
│   └── hosted-engine.metadata -> 
/run/vdsm/storage/3afc47ba-afb9-413f-8de5-8d9a2f45ecde/77082dd8-7cb5-41cc-a69f-0f4c0380db23/38d552c5-689d-47b7-9eea-adb308da8027
├── images
│   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
│   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
│   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
│   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
│   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease
│   │   └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta
│   ├── 685309b1-1ae9-45f3-90c3-d719a594482d
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease
│   │   └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta
│   ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease
│   │   └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta
│   └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease
│   └── 38d552c5-689d-47b7-9eea-adb308da8027.meta
└── master
├── tasks
│   ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread Alex K
On Fri, Jan 15, 2021, 22:04 penguin pages  wrote:

>
> Thanks for replies.
>
> Here is where it is at:
>
> # Two nodes think no VMs exist
> [root@odin ~]# vdsm-client Host getVMList
> []
>
> #One showing one VM but down
> [root@medusa ~]# vdsm-client Host getVMList
> [
> {
> "status": "Down",
> "statusTime": "2153886148",
> "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
> }
> ]
> [root@medusa ~]# vdsm-client Host getAllVmStats
> [
> {
> "exitCode": 1,
> "exitMessage": "VM terminated with error",
> "exitReason": 1,
> "status": "Down",
> "statusTime": "2153916276",
> "vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
> }
> ]
> [root@medusa ~]# vdsm-client VM cont
> vmID="69ab4f82-1a53-42c8-afca-210a3a2715f1"
> vdsm-client: Command VM.cont with args {'vmID':
> '69ab4f82-1a53-42c8-afca-210a3a2715f1'} failed:
> (code=16, message=Unexpected exception)
>
>
> # Assuming that ID represents the hosted-engine I tried to start it
> [root@medusa ~]# hosted-engine --vm-start
> The hosted engine configuration has not been retrieved from shared
> storage. Please ensure that ovirt-ha-agent is running and the storage
> server is reachable.
>
> # Back to ovirt-ha-agent being fubar and stoping things.
>
> I have about 8 or so VMs on the cluster. Two are my IDM nodes which has
> DNS and other core services.. which is what I am really trying to get up ..
> even if manual until I figure out oVirt issue.  I think you are correct.
> "engine" volume is for just the engine.  Data is where the other VMs are
>
> [root@medusa images]# tree
> .
> ├── 335c6b1a-d8a5-4664-9a9c-39744d511af8
> │   ├── 579323ad-bf7b-479b-b682-6e1e234a7908
> │   ├── 579323ad-bf7b-479b-b682-6e1e234a7908.lease
> │   └── 579323ad-bf7b-479b-b682-6e1e234a7908.meta
> ├── d318cb8f-743a-461b-b246-75ffcde6bc5a
> │   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc
> │   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc.lease
> │   └── c16877d0-eb23-42ef-a06e-a3221ea915fc.meta
> └── junk
> ├── 296163f2-846d-4a2c-9a4e-83a58640b907
> │   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc
> │   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc.lease
> │   └── 376b895f-e0f2-4387-b038-fbef4705fbcc.meta
> ├── 45a478d7-4c1b-43e8-b106-7acc75f066fa
> │   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20
> │   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20.lease
> │   └── b5249e6c-0ba6-4302-8e53-b74d2b919d20.meta
> ├── d8b708c1-5762-4215-ae1f-0e57444c99ad
> │   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9
> │   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.lease
> │   └── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.meta
> └── eaf12f3c-301f-4b61-b5a1-0c6d0b0a7f7b
> ├── fbf3bf59-a23a-4c6f-b66e-71369053b406
> ├── fbf3bf59-a23a-4c6f-b66e-71369053b406.lease
> └── fbf3bf59-a23a-4c6f-b66e-71369053b406.meta
>
> 7 directories, 18 files
> [root@medusa images]# cd /media/engine/
> [root@medusa engine]# ls
> 3afc47ba-afb9-413f-8de5-8d9a2f45ecde
> [root@medusa engine]# tree
> .
> └── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde
> ├── dom_md
> │   ├── ids
> │   ├── inbox
> │   ├── leases
> │   ├── metadata
> │   ├── outbox
> │   └── xleases
> ├── ha_agent
> ├── images
> │   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
> │   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
> │   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
> │   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
> │   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
> │   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
> │   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
> │   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
> │   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
> │   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
> │   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease
> │   │   └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta
> │   ├── 685309b1-1ae9-45f3-90c3-d719a594482d
> │   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0
> │   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease
> │   │   └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta
> │   ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e
> │   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af
> │   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease
> │   │   └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta
> │   └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23
> │   ├── 38d552c5-689d-47b7-9eea-adb308da8027
> │   ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease
> │   └── 38d552c5-689d-47b7-9eea-adb308da8027.meta
> └── master
> ├── tasks
> │   ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba
> │   │   └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0
> │   ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp
> │   ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp
> │   ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup
> │   │   └── 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages

Thanks for replies.

Here is where it is at:

# Two nodes think no VMs exist
[root@odin ~]# vdsm-client Host getVMList
[]

#One showing one VM but down
[root@medusa ~]# vdsm-client Host getVMList
[
{
"status": "Down",
"statusTime": "2153886148",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# vdsm-client Host getAllVmStats
[
{
"exitCode": 1,
"exitMessage": "VM terminated with error",
"exitReason": 1,
"status": "Down",
"statusTime": "2153916276",
"vmId": "69ab4f82-1a53-42c8-afca-210a3a2715f1"
}
]
[root@medusa ~]# vdsm-client VM cont vmID="69ab4f82-1a53-42c8-afca-210a3a2715f1"
vdsm-client: Command VM.cont with args {'vmID': 
'69ab4f82-1a53-42c8-afca-210a3a2715f1'} failed:
(code=16, message=Unexpected exception)


# Assuming that ID represents the hosted-engine I tried to start it
[root@medusa ~]# hosted-engine --vm-start
The hosted engine configuration has not been retrieved from shared storage. 
Please ensure that ovirt-ha-agent is running and the storage server is 
reachable.

# Back to ovirt-ha-agent being fubar and stopping things.

I have about 8 or so VMs on the cluster.  Two are my IDM nodes, which have DNS 
and other core services; those are what I am really trying to get up, even if 
manually, until I figure out the oVirt issue.  I think you are correct: the 
"engine" volume is just for the engine.  The "data" volume is where the other VMs are.

[root@medusa images]# tree
.
├── 335c6b1a-d8a5-4664-9a9c-39744d511af8
│   ├── 579323ad-bf7b-479b-b682-6e1e234a7908
│   ├── 579323ad-bf7b-479b-b682-6e1e234a7908.lease
│   └── 579323ad-bf7b-479b-b682-6e1e234a7908.meta
├── d318cb8f-743a-461b-b246-75ffcde6bc5a
│   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc
│   ├── c16877d0-eb23-42ef-a06e-a3221ea915fc.lease
│   └── c16877d0-eb23-42ef-a06e-a3221ea915fc.meta
└── junk
├── 296163f2-846d-4a2c-9a4e-83a58640b907
│   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc
│   ├── 376b895f-e0f2-4387-b038-fbef4705fbcc.lease
│   └── 376b895f-e0f2-4387-b038-fbef4705fbcc.meta
├── 45a478d7-4c1b-43e8-b106-7acc75f066fa
│   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20
│   ├── b5249e6c-0ba6-4302-8e53-b74d2b919d20.lease
│   └── b5249e6c-0ba6-4302-8e53-b74d2b919d20.meta
├── d8b708c1-5762-4215-ae1f-0e57444c99ad
│   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9
│   ├── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.lease
│   └── 2536ca6d-3254-4cdc-bbd8-349ec1b8a0e9.meta
└── eaf12f3c-301f-4b61-b5a1-0c6d0b0a7f7b
├── fbf3bf59-a23a-4c6f-b66e-71369053b406
├── fbf3bf59-a23a-4c6f-b66e-71369053b406.lease
└── fbf3bf59-a23a-4c6f-b66e-71369053b406.meta

7 directories, 18 files
[root@medusa images]# cd /media/engine/
[root@medusa engine]# ls
3afc47ba-afb9-413f-8de5-8d9a2f45ecde
[root@medusa engine]# tree
.
└── 3afc47ba-afb9-413f-8de5-8d9a2f45ecde
├── dom_md
│   ├── ids
│   ├── inbox
│   ├── leases
│   ├── metadata
│   ├── outbox
│   └── xleases
├── ha_agent
├── images
│   ├── 1dc69552-dcc6-484d-8149-86c93ff4b8cc
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480
│   │   ├── e4e26573-09a5-43fa-91ec-37d12de46480.lease
│   │   └── e4e26573-09a5-43fa-91ec-37d12de46480.meta
│   ├── 375d2483-ee83-4cad-b421-a5a70ec06ba6
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a
│   │   ├── f936d4be-15e3-4983-8bf0-9ba5b97e638a.lease
│   │   └── f936d4be-15e3-4983-8bf0-9ba5b97e638a.meta
│   ├── 6023f2b1-ea6e-485b-9ac2-8decd5f7820d
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c
│   │   ├── b38a5e37-fac4-4c23-a0c4-7359adff619c.lease
│   │   └── b38a5e37-fac4-4c23-a0c4-7359adff619c.meta
│   ├── 685309b1-1ae9-45f3-90c3-d719a594482d
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0
│   │   ├── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.lease
│   │   └── 9eddcf51-fd15-4de5-a4b6-a83a9082dee0.meta
│   ├── 74f1b2e7-2483-4e4d-8301-819bcd99129e
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af
│   │   ├── c1888b6a-c48e-46ce-9677-02e172ef07af.lease
│   │   └── c1888b6a-c48e-46ce-9677-02e172ef07af.meta
│   └── 77082dd8-7cb5-41cc-a69f-0f4c0380db23
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027
│   ├── 38d552c5-689d-47b7-9eea-adb308da8027.lease
│   └── 38d552c5-689d-47b7-9eea-adb308da8027.meta
└── master
├── tasks
│   ├── 150927c5-bae6-45e4-842c-a7ba229fc3ba
│   │   └── 150927c5-bae6-45e4-842c-a7ba229fc3ba.job.0
│   ├── 21bba697-26e6-4fd8-ac7c-76f86b458368.temp
│   ├── 26c580b8-cdb2-4d21-9bea-96e0788025e6.temp
│   ├── 2e0e347c-fd01-404f-9459-ef175c82c354.backup
│   │   └── 2e0e347c-fd01-404f-9459-ef175c82c354.task
│   ├── 43f17022-e003-4e9f-81ec-4a01582223bd.backup
│   │   └── 43f17022-e003-4e9f-81ec-4a01582223bd.task
│   ├── 5055f61a-4cc8-459f-8fe5-19427b74a4f2.temp
│   ├── 6826c8f5-b9df-498e-a576-af0c4e7fe69c
│   │   └── 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread Alex K
On Fri, Jan 15, 2021, 20:20 Strahil Nikolov via Users 
wrote:

>
> >
> > Questions:
> > 1) I have two important VMs that have snapshots that I need to boot
> > up.  Is their a means with an HCI configuration to manually start the
> > VMs without oVirt engine being up?
> What it worked for me was:
> 1) Start a VM via "virsh"
> define a virsh alias:
> alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-
> engine/virsh_auth.conf'
> Check the host's vdsm.log ,where the VM was last started - you will
> find the VM's xml inside .
> Copy the whole xml and use virsh to define the VM "virsh define
> myVM.xml && virsh start myVM"
>
If you cannot find the xml file of the VM, then you can use virt-install as if
you were running plain KVM.


> 2) vdsm-client most probably can start VMs even when the engine is down
> > 2) Is their a means to debug what is going on with the engine failing
> > to start to repair (I hate reloading as the only fix for systems)
> You can use "hosted-engine" to start the HostedEngine VM in paused mode
> . Then you can connect over spice/vnc and then unpause the VM. Booting
> the HostedEngine VM from DVD is a little bit harder. You will need to
> get the HE's xml and edit it to point to the DVD. Once you got the
> altered HE config , you can define and start.
> > 3) Is their a means to re-deploy HCI setup wizard, but use the
> > "engine" volume and so retain the VMs and templates?
> You are not expected to mix HostedEngine and other VMs on the same
> storage domain (gluster volume).
>
> Best Regards,
> Strahil Nikolov


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread Strahil Nikolov via Users

> 
> Questions:
> 1) I have two important VMs that have snapshots that I need to boot
> up.  Is there a means with an HCI configuration to manually start the
> VMs without oVirt engine being up?
What worked for me was:
1) Start a VM via "virsh".
Define a virsh alias:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-
engine/virsh_auth.conf'
Check the host's vdsm.log where the VM was last started - you will
find the VM's xml inside.
Copy the whole xml and use virsh to define the VM: "virsh define
myVM.xml && virsh start myVM" 

2) vdsm-client most probably can start VMs even when the engine is down
> 2) Is there a means to debug what is going on with the engine failing
> to start, so I can repair it (I hate reloading as the only fix for systems)?
You can use "hosted-engine" to start the HostedEngine VM in paused mode
. Then you can connect over spice/vnc and then unpause the VM. Booting
the HostedEngine VM from DVD is a little bit harder. You will need to
get the HE's xml and edit it to point to the DVD. Once you got the
altered HE config , you can define and start.
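
A rough sketch of the first part (option names as in hosted-engine --help on
4.4; the console password is whatever you choose):

hosted-engine --vm-start-paused
hosted-engine --add-console-password
# connect to the console with a VNC/SPICE client, then unpause:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf resume HostedEngine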
> 3) Is there a means to re-deploy the HCI setup wizard, but use the
> "engine" volume and so retain the VMs and templates?
You are not expected to mix HostedEngine and other VMs on the same
storage domain (gluster volume).

Best Regards,
Strahil Nikolov


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


I found this document, which was useful for explaining some details on how to 
debug and the roles involved:
https://www.ovirt.org/images/Hosted-Engine-4.3-deep-dive.pdf

But I am still stuck with the engine not starting.


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages

So only two things jump out:

1) ovirt-ha-agent not starting... back to Python silliness that I have no idea 
how to debug
[root@medusa ~]# systemctl status ovirt-ha-agent.service
● ovirt-ha-agent.service - oVirt Hosted Engine High Availability Monitoring 
Agent
   Loaded: loaded (/usr/lib/systemd/system/ovirt-ha-agent.service; enabled; 
vendor preset: disabled)
   Active: activating (auto-restart) (Result: exit-code) since Fri 2021-01-15 
11:54:52 EST; 6s ago
  Process: 16116 ExecStart=/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent 
(code=exited, status=157)
 Main PID: 16116 (code=exited, status=157)
[root@medusa ~]# tail /var/log/messages
Jan 15 11:55:02 medusa systemd[1]: Started oVirt Hosted Engine High 
Availability Monitoring Agent.
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine ERROR Failed to start 
necessary monitors
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceback (most recent call 
last):#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 85, in start_monitor#012response = self._proxy.start_monitor(type, 
options)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1112, in 
__call__#012return self.__send(self.__name, args)#012  File 
"/usr/lib64/python3.6/xmlrpc/client.py", line 1452, in __request#012
verbose=self.__verbose#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 
1154, in request#012return self.single_request(host, handler, request_body, 
verbose)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 1166, in 
single_request#012http_conn = self.send_request(host, handler, 
request_body, verbose)#012  File "/usr/lib64/python3.6/xmlrpc/client.py", line 
1279, in send_request#012self.send_content(connection, request_body)#012  
File "/usr/lib64/python3.6/xmlrpc/client.py", line 1309, in send_content#012
connection.endheaders(request_body)#012  File 
"/usr/lib64/python3.6/http/client.py", line 1264, in endheaders#012
self._send_output(message_body, encode_chunked=encode_chunked)#012  File 
"/usr/lib64/python3.6/http/client.py", line 1040, in _send_output#012
self.send(msg)#012  File "/usr/lib64/python3.6/http/client.py", line 978, in 
send#012self.connect()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/unixrpc.py", line 
74, in connect#012
self.sock.connect(base64.b16decode(self.host))#012FileNotFoundError: [Errno 2] 
No such file or directory#012#012During handling of the above exception, 
another exception occurred:#012#012Traceback (most recent call last):#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 
131, in _run_agent#012return action(he)#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 
55, in action_proper#012return he.start_monitoring()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 437, in start_monitoring#012self._initialize_broker()#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py",
 line 561, in _initialize_broker#012m.get('options', {}))#012  File 
"/usr/lib/python3.6/site-packages/ovirt_hosted_engine_ha/lib/brokerlink.py", 
line 91, in start_monitor#012).format(t=type, o=options, 
e=e)#012ovirt_hosted_engine_ha.lib.exceptions.RequestError: brokerlink - failed 
to start monitor via ovirt-ha-broker: [Errno 2] No such file or directory, 
[monitor: 'network', options: {'addr': '172.16.100.1', 'network_test': 'dns', 
'tcp_t_address': '', 'tcp_t_port': ''}]
Jan 15 11:55:02 medusa journal[16137]: ovirt-ha-agent 
ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying to restart agent
Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Main process exited, 
code=exited, status=157/n/a
Jan 15 11:55:02 medusa systemd[1]: ovirt-ha-agent.service: Failed with result 
'exit-code'.
Jan 15 11:55:05 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] 
failed - [nutmonitor] does not exist on server 172.16.100.102
Jan 15 11:55:06 medusa vdsm[14589]: WARN unhandled write event
Jan 15 11:55:08 medusa vdsm[14589]: WARN unhandled close event
Jan 15 11:55:10 medusa upsmon[1530]: Poll UPS [nutmonitor@172.16.100.102] 
failed - [nutmonitor] does not exist on server 172.16.100.102
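
The FileNotFoundError in that traceback is the agent failing to reach the
ovirt-ha-broker unix socket, so a hedged first step is to check and restart the
broker before the agent:

systemctl status ovirt-ha-broker
systemctl restart ovirt-ha-broker ovirt-ha-agent
journalctl -u ovirt-ha-broker -n 50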


2) Notes about vdsmd: hosted engine "setup not finished"... but this may be an 
issue with ha-agent as the source
[root@medusa ~]# systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: disabled)
   Active: active (running) since Fri 2021-01-15 11:49:27 EST; 6min ago
 Main PID: 14589 (vdsmd)
Tasks: 72 (limit: 410161)
   Memory: 77.8M
   CGroup: /system.slice/vdsmd.service
   ├─14589 /usr/bin/python3 /usr/share/vdsm/vdsmd
   ├─14686 

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


Maybe this is the cause of the rathole?

[root@medusa system]# systemctl status vdsmd.service
● vdsmd.service - Virtual Desktop Server Manager
   Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor 
preset: disabled)
   Active: active (running) since Fri 2021-01-15 10:53:56 EST; 5s ago
  Process: 32306 ExecStartPre=/usr/libexec/vdsm/vdsmd_init_common.sh 
--pre-start (code=exited, status=0/SUCCESS)
 Main PID: 32364 (vdsmd)
Tasks: 77 (limit: 410161)
   Memory: 77.1M
   CGroup: /system.slice/vdsmd.service
   ├─32364 /usr/bin/python3 /usr/share/vdsm/vdsmd
   ├─32480 /usr/libexec/ioprocess --read-pipe-fd 44 --write-pipe-fd 43 
--max-threads 10 --max-queued-requests 10
   ├─32488 /usr/libexec/ioprocess --read-pipe-fd 50 --write-pipe-fd 49 
--max-threads 10 --max-queued-requests 10
   ├─32494 /usr/libexec/ioprocess --read-pipe-fd 55 --write-pipe-fd 54 
--max-threads 10 --max-queued-requests 10
   ├─32501 /usr/libexec/ioprocess --read-pipe-fd 61 --write-pipe-fd 60 
--max-threads 10 --max-queued-requests 10
   └─32514 /usr/libexec/ioprocess --read-pipe-fd 65 --write-pipe-fd 61 
--max-threads 10 --max-queued-requests 10

Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running nwfilter
Jan 15 10:53:55 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running dummybr
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running tune_system
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running test_space
Jan 15 10:53:56 medusa.penguinpages.local vdsmd_init_common.sh[32306]: vdsm: 
Running test_lo
Jan 15 10:53:56 medusa.penguinpages.local systemd[1]: Started Virtual Desktop 
Server Manager.
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available. 
Error: [Errno 111] Connection refused
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN MOM not available, 
KSM stats will be missing. Error:
Jan 15 10:53:57 medusa.penguinpages.local vdsm[32364]: WARN Failed to retrieve 
Hosted Engine HA info, is Hosted Engine setup finished?
Jan 15 10:53:59 medusa.penguinpages.local vdsm[32364]: WARN Not ready yet, 
ignoring event '|virt|VM_status|69ab4f82-1a53-42c8-afca-210a3a2715f1' 
args={'69ab4f82-1a53-42c8-afca-210a3a2715f1': {'status': 'Down', 'vmId': 
'69ab4f82-1a53>
[root@medusa system]#


I googled around and the hits talk about re-running the engine setup.. is there 
some kind of flow diagram for how to get oVirt back on its feet when it dies 
like this?  I feel like I am poking in the dark here.


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
[root@medusa ~]# virsh net-list
Please enter your authentication name: admin
Please enter your password:
 Name  StateAutostart   Persistent

 ;vdsmdummy;   active   no  no


# Hmm.. so not sure with oVirt this is expected.. but the defined networks I 
use are still present..
[root@medusa ~]# cat /var/lib/vdsm/persistence/netconf/nets/
101_Storage  102_DMZ  ovirtmgmtStorage

# The one that the oVirt engine is bound to is the default one, named 
"ovirtmgmt" 
[root@medusa ~]# cat /var/lib/vdsm/persistence/netconf/nets/ovirtmgmt
{
"netmask": "255.255.255.0",
"ipv6autoconf": false,
"nic": "enp0s29u1u4",
"bridged": true,
"ipaddr": "172.16.100.103",
"defaultRoute": true,
"dhcpv6": false,
"gateway": "172.16.100.1",
"mtu": 1500,
"switch": "legacy",
"stp": false,
"bootproto": "none",
"nameservers": [
"172.16.100.40",
"8.8.8.8"
]
}
[root@medusa ~]#

# Looks fine to me...
[root@medusa ~]# virsh net-start ovirtmgmt
Please enter your authentication name: admin
Please enter your password:
error: failed to get network 'ovirtmgmt'
error: Network not found: no network with matching name 'ovirtmgmt'

[root@medusa ~]#


... back to googling...


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages


Seems a fresh cup of coffee is helping.

Post-Streams-update fubar issues:
# Fix dependency issues
yum update --allowerasing
# List VMs read-only and bypass the password issue..  uh.. Bueller... I know it 
has VMs..  
[root@odin ~]# virsh --readonly list
 Id   Name   State

# Set password so virsh with admin account works
[root@odin ~]# saslpasswd2 -a libvirt admin
Password:
Again (for verification):
[root@odin ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
 Id   NameState

 -HostedEngineLocal   shut off

[root@odin ~]# virsh start HostedEngineLocal
Please enter your authentication name: admin
Please enter your password:
error: Failed to start domain HostedEngineLocal
error: Requested operation is not valid: network 'default' is not active

[root@odin ~]#
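
For the 'default' network error, the same fix used elsewhere in this thread
applies (plain libvirt commands):

virsh net-start default
virsh net-autostart default
virsh start HostedEngineLocal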



Now looking into the OVS side.  But I am game for other suggestions, as this 
seems like a bit of a hack to get it working.





[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
Thanks for the reply..  seems guestfish is a tool I need to do some RTFM on.

This would, I think, allow me to read the disk from the "engine" storage and 
manipulate files within it.


But is there a way to just start the VMs?

I guess I jumped from "old school virsh" to relying on the oVirt GUI... and need 
to brush up on tools that allow me to debug when the engine is down.
[root@thor ~]# virsh list --all
Please enter your authentication name: admin
Please enter your password:
error: failed to connect to the hypervisor
error: authentication failed: authentication failed

<<< and I only ever use ONE password for all systems / accounts, but I think 
virsh has been deprecated... so maybe this is why >>>

I am currently poking around with 

[root@thor ~]# virt-
virt-admin   virt-clone   virt-diff
virt-host-validate   virt-ls  virt-resize  
virt-tar-in  virt-xml
virt-alignment-scan  virt-copy-in virt-edit
virt-index-validate  virt-make-fs virt-sanlock-cleanup 
virt-tar-out virt-xml-validate
virt-builder virt-copy-outvirt-filesystems 
virt-inspector   virt-pki-validatevirt-sparsify
virt-v2v
virt-builder-repository  virt-customize   virt-format  
virt-install virt-qemu-runvirt-sysprep 
virt-v2v-copy-to-local
virt-cat virt-df  virt-get-kernel  
virt-log virt-rescue  virt-tail
virt-what
[root@thor ~]#

Does anyone have an example of:
1) Listing VMs
2) Starting a VM named "foo"
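
A hedged sketch of both, using the vdsm authfile so no sasl password prompt is
needed (the VM name is just an example):

virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf list --all
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf start foo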


[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread Michael Jones
ouch,

Re: starting VMs without oVirt, you can get access to VMs locally using
guestfish; I've used that before to fix VMs after a server room aircon
failure.

(It's not really a way to run the VMs, but more for access when you can't
run the VM.)

export LIBGUESTFS_BACKEND=direct

You should be able to copy files out if you need them before you manage to
solve your deps issues.

You can also use this method to inspect log files on the engine VM if
needed...
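
A minimal sketch of that (the disk image path is a placeholder; --ro keeps it
read-only while you poke around):

export LIBGUESTFS_BACKEND=direct
# interactive, read-only shell into a guest disk image
guestfish --ro -a /path/to/engine-disk.img -i
# or copy files straight out, e.g. engine logs, into an existing local dir
mkdir -p /tmp/engine-logs
virt-copy-out -a /path/to/engine-disk.img /var/log/ovirt-engine /tmp/engine-logs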

Kind Regards,

Mike

On 15/01/2021 15:09, penguin pages wrote:
> Update:
>
> [root@thor ~]#  dnf update --allowerasing
> Last metadata expiration check: 0:00:09 ago on Fri 15 Jan 2021 10:02:05 AM 
> EST.
> Dependencies resolved.
> =
>  Package 
> Architecture Version  
>Repository 
>Size
> =
> Upgrading:
>  cockpit-bridge  x86_64   
> 234-1.el8 
>   baseos   597 k
>  cockpit-system  noarch   
> 234-1.el8 
>   baseos   3.1 M
>  replacing  cockpit-dashboard.noarch 217-1.el8
> Removing dependent packages:
>  cockpit-ovirt-dashboard noarch   
> 0.14.17-1.el8 
>   @ovirt-4.416 M
>  ovirt-host  x86_64   
> 4.4.1-4.el8   
>   @ovirt-4.411 k
>  ovirt-hosted-engine-setup   noarch   
> 2.4.9-1.el8   
>   @ovirt-4.4   1.3 M
>
> Transaction Summary
> =
> Upgrade  2 Packages
> Remove   3 Packages
>
> Total download size: 3.7 M
> Is this ok [y/N]: y
> Downloading Packages:
> (1/2): cockpit-bridge-234-1.el8.x86_64.rpm
>   
>  160 kB/s | 597 kB 00:03
> (2/2): cockpit-system-234-1.el8.noarch.rpm
>   
>  746 kB/s | 3.1 MB 00:04
> -
> Total 
>   
>  499 kB/s | 3.7 MB 00:07
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Preparing:  
>   
>  1/1
>   Upgrading: cockpit-bridge-234-1.el8.x86_64  
>   
>  1/8
>   Upgrading: cockpit-system-234-1.el8.noarch  
>   
>  2/8
>   Erasing  : ovirt-host-4.4.1-4.el8.x86_64
>

[ovirt-users] Re: oVirt Engine no longer Starting

2021-01-15 Thread penguin pages
Update:

[root@thor ~]#  dnf update --allowerasing
Last metadata expiration check: 0:00:09 ago on Fri 15 Jan 2021 10:02:05 AM EST.
Dependencies resolved.
=
 Package 
Architecture Version
 Repository
Size
=
Upgrading:
 cockpit-bridge  x86_64 
  234-1.el8 
  baseos   597 k
 cockpit-system  noarch 
  234-1.el8 
  baseos   3.1 M
 replacing  cockpit-dashboard.noarch 217-1.el8
Removing dependent packages:
 cockpit-ovirt-dashboard noarch 
  0.14.17-1.el8 
  @ovirt-4.416 M
 ovirt-host  x86_64 
  4.4.1-4.el8   
  @ovirt-4.411 k
 ovirt-hosted-engine-setup   noarch 
  2.4.9-1.el8   
  @ovirt-4.4   1.3 M

Transaction Summary
=
Upgrade  2 Packages
Remove   3 Packages

Total download size: 3.7 M
Is this ok [y/N]: y
Downloading Packages:
(1/2): cockpit-bridge-234-1.el8.x86_64.rpm  

 160 kB/s | 597 kB 00:03
(2/2): cockpit-system-234-1.el8.noarch.rpm  

 746 kB/s | 3.1 MB 00:04
-
Total   

 499 kB/s | 3.7 MB 00:07
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing:

 1/1
  Upgrading: cockpit-bridge-234-1.el8.x86_64

 1/8
  Upgrading: cockpit-system-234-1.el8.noarch

 2/8
  Erasing  : ovirt-host-4.4.1-4.el8.x86_64  

 3/8
  Obsoleting   : cockpit-dashboard-217-1.el8.noarch 

 4/8
  Cleanup  : cockpit-system-217-1.el8.noarch

 5/8
  Erasing  :