[ovirt-users] Ovirt Cluster Setup
I have 3 Dell R515 servers, all installed with CentOS 7, and I am trying to set up an oVirt cluster.

Disk configuration:
2 x 1TB - RAID 1 - OS deployment
6 x 1TB - RAID 6 - storage

Memory is 128GB.

I am following this documentation: https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ and I am getting the issue below:

PLAY [gluster_servers] *

TASK [Run a shell script] **

fatal: [ovirt2.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt3.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
        to retry, use: --limit @/tmp/tmpxFXyGG/run-script.retry

PLAY RECAP *
ovirt1.sanren.ac.za: ok=0 changed=0 unreachable=0 failed=1
ovirt2.sanren.ac.za: ok=0 changed=0 unreachable=0 failed=1
ovirt3.sanren.ac.za: ok=0 changed=0 unreachable=0 failed=1

*Error: Ansible(>= 2.2) is not installed.*
*Some of the features might not work if not installed.*

I have installed Ansible 2.4 on all the servers, but the error persists. Is there anything I can do to get rid of this error?

--
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sa...@sanren.ac.za

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
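For anyone hitting the same "Ansible(>= 2.2) is not installed" message with a newer Ansible already present, it can help to confirm what version the deployment host actually resolves in its PATH. The sketch below is only an illustration of such a check; it is not the exact test the gdeploy/oVirt scripts run, which may look elsewhere:

```shell
#!/bin/sh
# Compare an installed Ansible version against the required minimum.
# Falls back to querying `ansible --version` when no argument is given.
ver_ge() {
    # True if version $1 >= version $2, using version-aware sort.
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="${1:-$(ansible --version 2>/dev/null | head -n1 | awk '{print $2}')}"
required="2.2"

if [ -z "$installed" ]; then
    echo "ansible not found in PATH"
elif ver_ge "$installed" "$required"; then
    echo "ansible $installed satisfies >= $required"
else
    echo "ansible $installed is older than $required"
fi
```

If this reports a satisfactory version but the deployment still complains, the deployment tool may be running under a different PATH (for example via sudo), which is worth ruling out.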
Re: [ovirt-users] Ovirt Cluster Setup
Hi,

I restarted everything from scratch and followed this article: http://blogs-ramesh.blogspot.co.za/2016/01/ovirt-and-gluster-hyperconvergence.html

Thanks for your quick response, Kasturi and Sahina.

On Wed, Feb 21, 2018 at 8:54 AM, Kasturi Narra wrote:
> Hello Sakhi,
>
> Can you please let us know which script it is failing at?
>
> Thanks,
> kasturi

--
Regards,
Sakhi Hadebe
[ovirt-users] Failed to verify Power Management configuration for Host xxxxxxxxx
Hi,

I am installing an oVirt cluster of 3 server machines, with the hosted engine VM running on one of the servers. I have been struggling to enable power management. In the Administration Portal it shows as enabled and is not editable. The error it shows is:

Error while executing action: ovirt-host.example.co.za:
- Cannot edit Host. Power Management is enabled for Host but no Agent type selected.

Please assist.

--
Regards,
Sakhi Hadebe
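The message means a fence agent type (e.g. ipmilan or drac5 for Dell iDRAC) still has to be chosen in the host's Power Management edit dialog. Before configuring it, it is worth confirming the BMC answers over IPMI from another host in the cluster. This sketch only prints the command to run; the address and credentials are placeholders, not values from this thread:

```shell
#!/bin/sh
# Build the ipmitool command you would run from another cluster host to
# confirm the iDRAC/BMC responds before configuring the fence agent.
# 192.0.2.10 and the user are placeholders -- substitute your own.
BMC_IP="192.0.2.10"
BMC_USER="root"

cmd="ipmitool -I lanplus -H $BMC_IP -U $BMC_USER -P <password> chassis power status"
echo "verify fencing with: $cmd"
```

If that command reports chassis power state, the same address, user, and password should work for the ipmilan fence agent in the Power Management dialog.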
[ovirt-users] Hosted Engine Setup Error
Hi,

I am new to Ansible and trying to deploy an oVirt cluster with gluster. I am following this documentation: https://www.ovirt.org/blog/2018/02/up-and-running-with-ovirt-4-2-and-gluster-storage/, although the screenshots are not exactly the same. Gluster deployed successfully.

Below is what is installed on my oVirt nodes:
1. ansible 2.5.3 (per "ansible --version"; config file = /etc/ansible/ansible.cfg)
2. glusterfs 3.12.9
3. oVirt Node 4.2.3
4. CentOS Linux release 7.4.1708 (Core)

During the hosted-engine setup it throws the error below:

[ INFO ] TASK [Prepare CIDR for "virbr0"]
[ ERROR ] fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'ipv4'\n\nThe error appears to have been in '/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.yml': line 50, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tags: [ 'skip_ansible_lint' ]\n- name: Prepare CIDR for \"{{ virbr_default }}\"\n ^ here\nWe could be wrong, but this one looks like it might be an issue with\nmissing quotes. Always quote template expression brackets when they\nstart a value. For instance:\n\nwith_items:\n - {{ foo }}\n\nShould be written as:\n\nwith_items:\n - \"{{ foo }}\"\n"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

I have tried using quotes on line 50 in the bootstrap_local_vm.yml file; it didn't work. Please help, I have been stuck on this for almost the whole day.

--
Regards,
Sakhi Hadebe
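The failing task is looking up an IPv4 address on virbr0, so a first check is whether libvirt's default NAT network is up and actually has one. A small sketch, assuming the standard iproute2 and libvirt tools; the virsh hint at the end is a common remedy for a missing virbr0 address, not a confirmed fix for this exact report:

```shell
#!/bin/sh
# Check whether virbr0 (libvirt's default NAT bridge) exists and has an
# IPv4 address -- the attribute the failing Ansible task was looking up.
dev="virbr0"

addr=$(ip -4 -o addr show dev "$dev" 2>/dev/null | awk '{print $4}')
if [ -n "$addr" ]; then
    echo "$dev has IPv4 $addr"
else
    echo "$dev has no IPv4 address"
    echo "try: virsh net-start default   (and: virsh net-autostart default)"
fi
```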
[ovirt-users] Engine Setup Error
Hi,

We are deploying the hosted engine on oVirt Node 4.2.3.1 using the command "hosted-engine --deploy". After providing the answers it runs the Ansible script and hits an error when creating the glusterfs storage domain. Attached is a screenshot of the error.

Please help.

_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/XN5ML4VTDL6BDAAFFBGFXI5KEEZDMGNK/
[ovirt-users] Network Setup Error
Hi,

I have successfully installed oVirt Node 4.2 and gluster storage on CentOS 7. I am now struggling to deploy the hosted engine:

[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Unable to create bridge virbr0: Package not installed"}
[ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook

What is the right package for bridging in CentOS 7?

--
Regards,
Sakhi Hadebe
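In this context "Package not installed" usually refers to libvirt's own network driver rather than a separate bridge utility. A hedged sketch that checks for the likely packages; the names are taken from the package list posted later in these threads and may differ on other distributions:

```shell
#!/bin/sh
# Check for the libvirt pieces that provide the default virbr0 network.
# Package names come from the libvirt package list posted later in this
# mailing-list thread; adjust them if your distribution differs.
pkgs="libvirt-daemon-driver-network libvirt-daemon-config-network"

for p in $pkgs; do
    if command -v rpm >/dev/null 2>&1 && rpm -q "$p" >/dev/null 2>&1; then
        echo "$p: installed"
    else
        echo "$p: not found (try: yum install $p)"
    fi
done
```

After installing, restarting libvirtd so it re-creates the default network is usually also needed.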
[ovirt-users] Engine Error
Hi,

I have just re-installed CentOS 7 on 3 servers and have configured gluster volumes following this documentation: https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/, but I have installed the http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm package.

hosted-engine --deploy is failing with this error:

"rhel7", "--virt-type", "kvm", "--memory", "16384", "--vcpus", "4", "--network", "network=default,mac=00:16:3e:09:5e:5d,model=virtio", "--disk", "/var/tmp/localvm0nnJH9/images/eacac30d-0304-4c77-8753-6965e4b8c2e7/d494577e-027a-4209-895b-6132e6fc6b9a", "--import", "--disk", "path=/var/tmp/localvm0nnJH9/seed.iso,device=cdrom", "--noautoconsole", "--rng", "/dev/random", "--graphics", "vnc", "--video", "vga", "--sound", "none", "--controller", "usb,model=none", "--memballoon", "none", "--boot", "hd,menu=off", "--clock", "kvmclock_present=yes"], "delta": "0:00:00.979003", "end": "2018-07-10 17:55:11.308555", "msg": "non-zero return code", "rc": 1, "start": "2018-07-10 17:55:10.329552", "stderr": "ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor\nDomain installation does not appear to have been successful.\nIf it was, you can restart your domain by running:\n virsh --connect qemu:///system start HostedEngineLocal\notherwise, please restart your installation.", "stderr_lines": ["ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor", "Domain installation does not appear to have been successful.", "If it was, you can restart your domain by running:", " virsh --connect qemu:///system start HostedEngineLocal", "otherwise, please restart your installation."], "stdout": "\nStarting install...", "stdout_lines": ["", "Starting install..."]}

I added the root user to the kvm group, but it didn't work.

Can you please help me out? I have been struggling to deploy the hosted engine.

--
Regards,
Sakhi
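As a follow-up in this thread later shows, this "CPU mode 'custom'" failure went away after enabling virtualization support in the BIOS. A quick sketch to check whether the hardware extensions are visible to the OS before re-running the deploy:

```shell
#!/bin/sh
# Check whether hardware virtualization extensions are visible to the OS.
# Without them, KVM cannot provide the CPU model the hosted-engine VM
# asks for, which can surface as the "CPU mode 'custom'" error above.
virt="no"
grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null && virt="yes"

if [ "$virt" = "yes" ]; then
    echo "virtualization extensions present (vmx/svm)"
else
    echo "no vmx/svm flags: enable AMD-V / Intel VT-x in the BIOS and reboot"
fi

if [ -e /dev/kvm ]; then
    echo "/dev/kvm exists"
else
    echo "/dev/kvm missing: kvm modules not loaded or virtualization disabled"
fi
```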
[ovirt-users] Re: Network Setup Error
Hi Simone,

Sorry, I just did a fresh install and followed this documentation: https://www.ovirt.org/blog/2016/03/up-and-running-with-ovirt-3-6/. I have configured the gluster volume and am now stuck at deploying the engine :-(

Stuck on the error below:

ERROR unsupported configuration: CPU mode 'custom' for x86_64 kvm domain on x86_64 host is not supported by hypervisor
Domain installation does not appear to have been successful.
If it was, you can restart your domain by running:
 virsh --connect qemu:///system start HostedEngineLocal
otherwise, please restart your installation.

On Tue, Jul 10, 2018 at 4:18 PM, Simone Tiraboschi wrote:
> Hi,
> can you please attach the log file from /var/log/ovirt-hosted-engine-setup/?

--
Regards,
Sakhi Hadebe
[ovirt-users] Re: Engine Error
Hi,

I did not select any CPU architecture. It doesn't give me the option to select one. It only states the number of virtual CPUs and the memory for the engine VM.

Looking at the documentation for installing ovirt-release36.rpm, it does allow you to select the CPU, but not when installing ovirt-release42.rpm.

On Tuesday, July 10, 2018, Alastair Neil wrote:
> What did you select as your CPU architecture when you created the
> cluster? It looks like the VM is trying to use a CPU type of "Custom". How
> many nodes are in your cluster? I suggest you specify the lowest common
> denominator of CPU architecture (e.g. Sandybridge) of the nodes as the CPU
> architecture of the cluster.

--
Regards,
Sakhi Hadebe
[ovirt-users] Re: Engine Setup Error
On Wed, Jul 11, 2018 at 9:33 AM, Sakhi Hadebe wrote:
> Hi,
>
> Below are the versions of the packages installed. Please find the logs attached.
>
> Qemu:
> ipxe-roms-qemu-20170123-1.git4e85b27.el7_4.1.noarch
> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
> qemu-img-ev-2.10.0-21.el7_5.4.1.x86_64
> qemu-kvm-ev-2.10.0-21.el7_5.4.1.x86_64
> qemu-kvm-common-ev-2.10.0-21.el7_5.4.1.x86_64
>
> Libvirt installed packages:
> libvirt-daemon-driver-storage-disk-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-config-nwfilter-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-iscsi-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-network-3.9.0-14.el7_5.6.x86_64
> libvirt-libs-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-secret-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-core-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-qemu-3.9.0-14.el7_5.6.x86_64
> libvirt-3.9.0-14.el7_5.6.x86_64
> libvirt-python-3.9.0-1.el7.x86_64
> libvirt-daemon-driver-nodedev-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-rbd-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-scsi-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-config-network-3.9.0-14.el7_5.6.x86_64
> libvirt-client-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-kvm-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-logical-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-interface-3.9.0-14.el7_5.6.x86_64
> libvirt-lock-sanlock-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-storage-mpath-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-lxc-3.9.0-14.el7_5.6.x86_64
> libvirt-daemon-driver-nwfilter-3.9.0-14.el7_5.6.x86_64
>
> Virt-manager:
> virt-manager-common-1.4.3-3.el7.noarch
>
> oVirt:
> [root@localhost network-scripts]# rpm -qa | grep ovirt
> ovirt-setup-lib-1.1.4-1.el7.centos.noarch
> cockpit-ovirt-dashboard-0.11.28-1.el7.noarch
> ovirt-imageio-common-1.3.1.2-0.el7.centos.noarch
> ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch
> ovirt-host-dependencies-4.2.3-1.el7.x86_64
> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
> ovirt-imageio-daemon-1.3.1.2-0.el7.centos.noarch
> ovirt-host-4.2.3-1.el7.x86_64
> python-ovirt-engine-sdk4-4.2.7-2.el7.x86_64
> ovirt-host-deploy-1.7.4-1.el7.noarch
> cockpit-machines-ovirt-169-1.el7.noarch
> ovirt-hosted-engine-ha-2.2.14-1.el7.noarch
> ovirt-vmconsole-1.0.5-4.el7.centos.noarch
> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
> ovirt-engine-appliance-4.2-20180626.1.el7.noarch
> ovirt-release42-4.2.4-1.el7.noarch
> ovirt-hosted-engine-setup-2.2.22.1-1.el7.noarch
[ovirt-users] Re: Engine Setup Error
Hi Sahina,

Yes, the glusterd daemon was not running. I have started it and am now able to add a glusterfs storage domain. Thank you so much for your help.

Oops! I allocated 50GiB for this storage domain and it requires 60GiB.

On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose wrote:
> Is glusterd running on the server goku.sanren.**? There's an error:
>
> Failed to get volume info: Command execution failed
> error: Connection failed. Please check if gluster daemon is operational
>
> Please check the volume status using "gluster volume status engine",
> and if all looks ok, attach the mount logs from /var/log/glusterfs.
>
> On Wed, Jul 11, 2018 at 1:57 PM, Sakhi Hadebe wrote:
>
>> Hi,
>>
>> I have managed to fix the error by enabling DMA virtualisation in the
>> BIOS. I am now hit with a new error: it's failing to add a glusterfs
>> storage domain:
>>
>> [ INFO ] TASK [Add glusterfs storage domain]
>> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is
>> "[Problem while trying to mount target]". HTTP response code is 400.
>> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault reason is \"Operation Failed\". Fault detail is \"[Problem while trying to mount target]\". HTTP response code is 400."}
>> Please specify the storage you would like to use (glusterfs, iscsi, fc, nfs)[nfs]:
>>
>> Attached are the vdsm and engine log files.
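Sahina's suggestions can be scripted roughly like this; the volume name "engine" is the one mentioned in this thread, and the systemctl steps assume you run it as root on the gluster host:

```shell
#!/bin/sh
# Confirm glusterd is up and the "engine" volume (named in this thread)
# is reporting before retrying the storage-domain step.
vol="engine"

if command -v gluster >/dev/null 2>&1; then
    systemctl is-active --quiet glusterd \
        || systemctl start glusterd \
        || echo "glusterd not running and could not be started (need root?)"
    gluster volume status "$vol" \
        || echo "volume $vol not reporting; check mount logs in /var/log/glusterfs"
else
    echo "gluster CLI not installed on this host"
fi
```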
[ovirt-users] Re: Engine Setup Error
Thank you all for your help. I have managed to deploy the engine successfully. It was quite a lesson.
[ovirt-users] Gluster Deployment hangs on enabling or disabling chronyd service
Hi,

Why does the gluster deployment hang on enabling or disabling the chronyd service? I have enabled passwordless ssh from each node to itself and to the other two nodes.

What would be the solution to get it past this stage?

On Fri, Jul 13, 2018 at 10:41 AM, Sakhi Hadebe wrote:
> Hi,
>
> We are running the following setup:
>
> ovirt-engine:
> - CentOS Linux release 7.5.1804 (Core)
> - ovirt-engine-4.2.4.5-1.el7.noarch
>
> node (trying to add):
> - CentOS Linux release 7.5.1804 (Core)
> - vdsm-4.20.32-1.el7.x86_64
> - ovirt-release42-4.2.4-1.el7.noarch
>
> We successfully added the second node to the cluster. The two nodes that
> have been successfully added are running ovirt-node-4.2.4:
>
> ovirt-node-ng-image-update-placeholder-4.2.4-1.el7.noarch
> ovirt-node-ng-nodectl-4.2.0-0.20180626.0.el7.noarch
> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch
> ovirt-release42-4.2.4-1.el7.noarch
> ovirt-release-host-node-4.2.4-1.el7.noarch
>
> While adding a new node to our cluster the installation fails.
>
> I have attached a piece of engine.log and the full ovirt-host-deploy log of
> the failing node from the engine. Help would be very much appreciated.

--
Regards,
Sakhi Hadebe
[ovirt-users] Re: Gluster Deployment hangs on enabling or disabling chronyd service
Hi, The problem is solved. I found that the problem was with Ansible: it couldn't ssh (SSH error) to one of the nodes. With that fixed, oVirt installed successfully. Thank you for your support. On Tue, Jul 17, 2018 at 2:05 PM, Gobinda Das wrote: > Hi Sakhi, > Can you please provide the engine log and ovirt-host-deploy log? You > mentioned that you attached logs, but unfortunately I can't find the > attachment. > > On Tue, Jul 17, 2018 at 3:12 PM, Sakhi Hadebe wrote: > >> Hi, >> >> Why does the gluster deployment hang on enabling or disabling the chronyd >> service? I have enabled passwordless ssh from each node to itself and to the >> other two nodes. >> >> What would be the solution to get it past this stage? >> >> On Fri, Jul 13, 2018 at 10:41 AM, Sakhi Hadebe >> wrote: >> >>> Hi, >>> >>> We are running the following setup: >>> >>> ovirt-engine: >>> - CentOS Linux release 7.5.1804 (Core) >>> - ovirt-engine-4.2.4.5-1.el7.noarch >>> >>> node (trying to add): >>> - CentOS Linux release 7.5.1804 (Core) >>> - vdsm-4.20.32-1.el7.x86_64 >>> - ovirt-release42-4.2.4-1.el7.noarch >>> >>> We successfully added the second node to the cluster. The two nodes that have >>> been added successfully are running ovirt-node-4.2.4: >>> >>> ovirt-node-ng-image-update-placeholder-4.2.4-1.el7.noarch >>> ovirt-node-ng-nodectl-4.2.0-0.20180626.0.el7.noarch >>> ovirt-provider-ovn-driver-1.2.11-1.el7.noarch >>> ovirt-release42-4.2.4-1.el7.noarch >>> ovirt-release-host-node-4.2.4-1.el7.noarch >>> >>> While adding a new node to our cluster, the installation fails. >>> >>> I have attached a piece of engine.log and the full ovirt-host-deploy log of >>> the failing node from the engine. >>> Help would be very much appreciated. 
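Since the root cause turned out to be a single node refusing SSH, a small pre-flight check before running the gluster deployment can save a retry cycle. The sketch below is not from the original thread: the hostnames are placeholders, and the SSH_CMD override exists only so the function can be exercised without real hosts.

```shell
#!/bin/sh
# Verify passwordless root SSH to each node before starting the deployment.
# BatchMode makes ssh fail immediately instead of prompting for a password,
# which is exactly the condition gdeploy/Ansible needs.
check_ssh() {
    rc=0
    for h in "$@"; do
        if ${SSH_CMD:-ssh -o BatchMode=yes -o ConnectTimeout=5} "root@$h" true 2>/dev/null; then
            echo "OK:   $h"
        else
            echo "FAIL: $h  (try: ssh-copy-id root@$h)"
            rc=1
        fi
    done
    return $rc
}

# Example hostnames; a FAIL line means that node needs its key fixed first.
check_ssh ovirt1.example.org ovirt2.example.org ovirt3.example.org || true
```

Running this from the node that drives the deployment (which must also reach itself) catches the "couldn't ssh" condition before Ansible hides it behind a generic conditional error.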
>>> >>> >>> -- >>> Regards, >>> Sakhi Hadebe >>> >>> >> >> >> -- >> Regards, >> Sakhi Hadebe >> >> Engineer: South African National Research Network (SANReN)Competency Area, >> Meraka, CSIR >> >> Tel: +27 12 841 2308 <+27128414213> >> Fax: +27 12 841 4223 <+27128414223> >> Cell: +27 71 331 9622 <+27823034657> >> Email: sa...@sanren.ac.za >> >> >> ___ >> Users mailing list -- users@ovirt.org >> To unsubscribe send an email to users-le...@ovirt.org >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/ >> oVirt Code of Conduct: https://www.ovirt.org/communit >> y/about/community-guidelines/ >> List Archives: https://lists.ovirt.org/archiv >> es/list/users@ovirt.org/message/KBMRRUJWHZZGWBKFLOC5MXYQWGNUATSN/ >> >> > > > -- > Thanks, > Gobinda > -- Regards, Sakhi Hadebe Engineer: South African National Research Network (SANReN)Competency Area, Meraka, CSIR Tel: +27 12 841 2308 <+27128414213> Fax: +27 12 841 4223 <+27128414223> Cell: +27 71 331 9622 <+27823034657> Email: sa...@sanren.ac.za ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/J3ZOTF5Y7WC52JRZ6NCQNONWGIXFTTEH/
[ovirt-users] Switching from Public Network to Private Network Space
[The body of this message was truncated; only the tail of an ifconfig dump survives: RX bytes 17748713 (16.9 MiB) with 0 errors/dropped/overruns/frame, TX 14842 packets / 4166526 bytes (3.9 MiB) with 0 errors/dropped/overruns/carrier/collisions.] -- Regards, Sakhi Hadebe Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR Tel: +27 12 841 2308 Fax: +27 12 841 4223 Cell: +27 71 331 9622 Email: sa...@sanren.ac.za ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/VENQXKEDH3F62GRR2CHTMPCLSMLVS5RA/
[ovirt-users] Re: Engine Setup Error
Hi Sahina, I am sorry, I can't reproduce the error or access the logs, since I did a fresh install on the nodes. However, now I can't even reach that far, because the engine deployment fails to bring the host up: [ INFO ] TASK [Wait for the host to be up] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], "auto_numa_status": "unknown", "certificate": {"organization": "sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de-00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": {"enabled": false}, "devices": [], "external_network_provider_configurations": [], "external_status": "ok", "hardware_information": {"supported_rng_sources": []}, "hooks": [], "href": "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd", "id": "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], "kdump_status": "unknown", "ksm": {"enabled": false}, "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", "port": 22}, "statistics": [], "status": "install_failed", "storage_connection_extensions": [], "summary": {"total": 0}, "tags": [], "transparent_huge_pages": {"enabled": false}, "type": "ovirt_node", "unmanaged_networks": [], "update_available": false}]}, "attempts": 120, "changed": false} Please help. 
On Mon, Sep 3, 2018 at 1:34 PM, Sahina Bose wrote: > > > On Wed, Aug 29, 2018 at 8:39 PM, Sakhi Hadebe wrote: > >> Hi, >> >> I am sorry to bother you again. >> >> I am trying to deploy an oVirt engine for oVirtNode-4.2.5.1. I get the >> same error I encountered before: >> >> [ INFO ] TASK [Add glusterfs storage domain] >> [ ERROR ] Error: Fault reason is "Operation Failed". Fault detail is >> "[Problem while trying to mount target]". HTTP response code is 400. >> [ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Fault >> reason is \"Operation Failed\". Fault detail is \"[Problem while trying >> to mount target]\". HTTP response code is 400."} >> Please specify the storage you would like to use (glusterfs, >> iscsi, fc, nfs)[nfs]: >> >> The glusterd daemon is running. >> > > mounting 172.16.4.18:/engine at /rhev/data-center/mnt/glusterSD/172.16.4.18:_engine (mount:204) > 2018-08-29 16:47:28,846+0200 ERROR (jsonrpc/3) [storage.HSM] Could not > connect to storageServer (hsm:2398) > > Can you try to see if you are able to mount 172.16.4.18:/engine on the > server you're deploying Hosted Engine on, using "mount -t glusterfs > 172.16.4.18:/engine /mnt/test"? > > >> During the deployment of the engine it sets the engine entry in the >> /etc/hosts file with the IP address 192.168.124.*, which it gets from the >> virbr0 bridge interface. I stopped the bridge and deleted it, but it still >> gives the same error. Not sure what causes it to use that interface. >> Please help! >> >> I gave the engine an IP of 192.168.1.10, on the same subnet as my gateway and >> my ovirtmgmt bridge. Attached is the ifconfig output of my node, engine.log >> and vdsm.log. >> >> Your assistance is always appreciated >> >> >> >> >> >> On Wed, Jul 11, 2018 at 11:47 AM, Sahina Bose wrote: >> >>> Is glusterd running on the server: goku.sanren.*
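Sahina's suggestion above can be spelled out a little; the volume address comes from the quoted vdsm log, while the mountpoint and the log-tail step are illustrative additions, not part of the original reply. When the manual mount fails, the FUSE client log usually names the real cause (DNS, firewall, quorum, etc.):

```shell
# Reproduce by hand the mount that hosted-engine setup attempts.
# 172.16.4.18:/engine is the volume from the log above.
mkdir -p /mnt/test
if mount -t glusterfs 172.16.4.18:/engine /mnt/test; then
    echo "mount succeeded"
    umount /mnt/test
else
    # The glusterfs client writes a per-mountpoint log under /var/log/glusterfs;
    # its last lines usually explain why the mount was refused.
    tail -n 50 /var/log/glusterfs/mnt-test.log
fi
```

This separates a storage-side problem (the manual mount fails too) from an engine-side one (the manual mount works, so the deployment's mount options or name resolution differ).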
[ovirt-users] Re: Engine Setup Error
Hi All, The host deploy logs are showing the below errors: [root@garlic engine-logs-2018-09-05T08:48:22Z]# cat /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09-05T08\:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy-20180905103605-garlic.sanren.ac.za-543b536b.log | grep -i error 2018-09-05 10:35:46,909+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False' 2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem' 2018-09-05 10:35:47,383+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False' 2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread @identity.py:145 - Reload of consumer identity cert /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such file or directory: '/etc/pki/consumer/key.pem' 2018-09-05 10:35:48,245+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'False' Job for ovirt-imageio-daemon.service failed because the control process exited with error code. See "systemctl status ovirt-imageio-daemon.service" and "journalctl -xe" for details. 
RuntimeError: Failed to start service 'ovirt-imageio-daemon' 2018-09-05 10:36:05,098+0200 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed to start service 'ovirt-imageio-daemon' 2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True' 2018-09-05 10:36:05,099+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), )]' 2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/error=bool:'True' 2018-09-05 10:36:05,106+0200 DEBUG otopi.context context.dumpEnvironment:869 ENV BASE/exceptionInfo=list:'[(, RuntimeError("Failed to start service 'ovirt-imageio-daemon'",), )]' I couldn't find anything helpful on the internet. On Tue, Sep 4, 2018 at 6:46 PM, Simone Tiraboschi wrote: > > > On Tue, Sep 4, 2018 at 6:07 PM Sakhi Hadebe wrote: > >> Hi Sahina, >> >> I am sorry, I can't reproduce the error or access the logs, since I did a >> fresh install on the nodes. However, now I can't even reach that far, because >> the engine deployment fails to bring the host up: >> >> >> [ INFO ] TASK [Wait for the host to be up] >> [ ERROR ] fatal: [localhost]: FAILED! 
=> {"ansible_facts": >> {"ovirt_hosts": [{"address": "goku.sanren.ac.za", "affinity_labels": [], >> "auto_numa_status": "unknown", "certificate": {"organization": " >> sanren.ac.za", "subject": "O=sanren.ac.za,CN=goku.sanren.ac.za"}, >> "cluster": {"href": "/ovirt-engine/api/clusters/1ca368cc-b052-11e8-b7de- >> 00163e008187", "id": "1ca368cc-b052-11e8-b7de-00163e008187"}, "comment": >> "", "cpu": {"speed": 0.0, "topology": {}}, "device_passthrough": >> {"enabled": false}, "devices": [], >> "external_network_provider_configurations": >> [], "external_status": "ok", "hardware_information": >> {"supported_rng_sources": []}, "hooks": [], "href": >> "/ovirt-engine/api/hosts/1c575995-70b1-43f7-b348-4a9788e070cd", "id": >> "1c575995-70b1-43f7-b348-4a9788e070cd", "katello_errata": [], >> "kdump_status": "unknown", "ksm": {"enabled": false}, >> "max_scheduling_memory": 0, "memory": 0, "name": "goku.sanren.ac.za", >> "network_attachments": [], "nics": [], "numa_nodes": [], "numa_supported": >> false, "os": {"custom_kernel_cmdline": ""}, "permissions": [], "port": >> 54321, "power_management": {"automatic_pm_enabled": true, "enabled": false, >> "kdump_detection": true, "pm_proxies": []}, "protocol": "stomp", >> "se_linux": {}, "spm": {"priority": 5, "status": "none"}, "ssh": >> {"fingerprint": "SHA256:B3/PDH551EFid93fm6PoRryi6/cXuVE8yNgiiiROh84", >> "port": 22}, "stati
[ovirt-users] Re: Engine Setup Error
# systemctl status ovirt-imageio-daemon.service ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 1min 58s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE) Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling ...art. Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon...vice Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Hint: Some lines were ellipsized, use -l to show in full. 
[root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE) Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart. Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. 
Output of: On Wed, Sep 5, 2018 at 11:35 AM, Simone Tiraboschi wrote: > > > On Wed, Sep 5, 2018 at 11:10 AM Sakhi Hadebe wrote: > >> Hi All, >> >> The host deploy logs are showing the below errors: >> >> [root@garlic engine-logs-2018-09-05T08:48:22Z]# cat >> /var/log/ovirt-hosted-engine-setup/engine-logs-2018-09- >> 05T08\:34\:55Z/ovirt-engine/host-deploy/ovirt-host-deploy- >> 20180905103605-garlic.sanren.ac.za-543b536b.log | grep -i error >> 2018-09-05 10:35:46,909+0200 DEBUG otopi.context >> context.dumpEnvironment:869 ENV BASE/error=bool:'False' >> 2018-09-05 10:35:47,116 [ERROR] __main__.py:8011:MainThread >> @identity.py:145 - Reload of consumer identity cert >> /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such >> file or directory: '/etc/pki/consumer/key.pem' >> 2018-09-05 10:35:47,383+0200 DEBUG otopi.context >> context.dumpEnvironment:869 ENV BASE/error=bool:'False' >> 2018-09-05 10:35:47,593 [ERROR] __main__.py:8011:MainThread >> @identity.py:145 - Reload of consumer identity cert >> /etc/pki/consumer/cert.pem raised an exception with msg: [Errno 2] No such >> file or directory: '/etc/pki/consumer/key.pem' >> 2018-09-05 10:35:48,245+0200 DEBUG otopi.context >> context.dumpEnvironment:869 ENV BASE/error=bool:'False' >> Job for ovirt-imageio-daemon.service failed because the control process >> exited with error code. See "systemctl status ovirt-imageio-daemon.service" >> and "journalctl -xe" for details. >> RuntimeError: Failed to start service 'ovirt-imageio-daemon' >> 2018-09-05 10:36:05,098+0200 ERROR otopi.context >> context._executeMethod:152 Failed to execute stage 'Closing up': Failed to >> start service 'ovirt-imageio-daemon' >> 2018-09-05 10:36:05,099+0200 DEBUG otopi.context >> context.dumpEnvironment:869 ENV BASE/error=bool:'True' >> 2018-09-05 10:36:05,099+0200 DEBUG otopi.context >> co
[ovirt-users] Re: Engine Setup Error
Sorry, I mistakenly sent the email. Below is the output of: [root@glustermount ~]# systemctl status ovirt-imageio-daemon.service -l ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; disabled; vendor preset: disabled) Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 SAST; 19h ago Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 2min 9s ago ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, status=1/FAILURE) Main PID: 11345 (code=exited, status=1/FAILURE) Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart. Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.service Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. 
[root@glustermount ~]# journalctl -xe -u ovirt-imageio-daemon.service Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: BaseRotatingHandler.__init__(self, filename, mode Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/handlers.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: logging.FileHandler.__init__(self, filename, mode Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: StreamHandler.__init__(self, self._open()) Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: File "/usr/lib64/python2.7/logging/__init__.py", Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: stream = open(self.baseFilename, self.mode) Sep 04 16:55:16 glustermount.goku ovirt-imageio-daemon[11345]: IOError: [Errno 2] No such file or directory: '/v Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service: main process exited, code=exited, st Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. -- Subject: Unit ovirt-imageio-daemon.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit ovirt-imageio-daemon.service has failed. -- -- The result is failed. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service holdoff time over, scheduling restart Sep 04 16:55:16 glustermount.goku systemd[1]: start request repeated too quickly for ovirt-imageio-daemon.servic Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt ImageIO Daemon. 
-- Subject: Unit ovirt-imageio-daemon.service has failed -- Defined-By: systemd -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel -- -- Unit ovirt-imageio-daemon.service has failed. -- -- The result is failed. Sep 04 16:55:16 glustermount.goku systemd[1]: Unit ovirt-imageio-daemon.service entered failed state. Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service failed. On Wed, Sep 5, 2018 at 12:01 PM, Sakhi Hadebe wrote: > # systemctl status ovirt-imageio-daemon.service > ● ovirt-imageio-daemon.service - oVirt ImageIO Daemon >Loaded: loaded (/usr/lib/systemd/system/ovirt-imageio-daemon.service; > disabled; vendor preset: disabled) >Active: failed (Result: start-limit) since Tue 2018-09-04 16:55:16 > SAST; 19h ago > Condition: start condition failed at Wed 2018-09-05 11:56:46 SAST; 1min > 58s ago >ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not > met > Process: 11345 ExecStart=/usr/bin/ovirt-imageio-daemon (code=exited, > status=1/FAILURE) > Main PID: 11345 (code=exited, status=1/FAILURE) > > Sep 04 16:55:16 glustermount.goku systemd[1]: Failed to start oVirt > ImageIO Daemon. > Sep 04 16:55:16 glustermount.goku systemd[1]: Unit > ovirt-imageio-daemon.service entered failed state. > Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service > failed. > Sep 04 16:55:16 glustermount.goku systemd[1]: ovirt-imageio-daemon.service &g
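The journalctl trace above ends in a truncated IOError ("[Errno 2] No such file or directory: '/v…'"), which is consistent with the daemon dying while opening its log file. Assuming the missing path is the daemon's log directory (Simone later asks about /var/log/ovirt-imageio-daemon with vdsm:kvm ownership; both 700 and 755 modes appear in this thread), a recovery sketch could look like:

```shell
# Recreate the imageio daemon's log directory with the ownership discussed
# in the thread (vdsm:kvm). The exact path is inferred from the truncated
# traceback, so verify it against a non-truncated journal first:
#   journalctl -u ovirt-imageio-daemon --no-pager -o cat
mkdir -p /var/log/ovirt-imageio-daemon
chown vdsm:kvm /var/log/ovirt-imageio-daemon
chmod 755 /var/log/ovirt-imageio-daemon

# Clear the "start request repeated too quickly" rate limit, then retry.
systemctl reset-failed ovirt-imageio-daemon.service
systemctl start ovirt-imageio-daemon.service
systemctl is-active ovirt-imageio-daemon.service
```

Note also the unit's condition line: ConditionPathExists=/etc/pki/vdsm/certs/vdsmcert.pem was not met, so until host deploy has installed the vdsm certificate the service will refuse to start regardless of the log directory.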
[ovirt-users] Re: Engine Setup Error
Hi, I re-installed the cluster and got the hosted engine deployed successfully. Thank you for your assistance On Thu, Sep 6, 2018 at 1:54 PM, Sakhi Hadebe wrote: > Hi Simone, > > Yes, the ownership of /var/log/ovirt-imageio-daemon is vdsm:kvm, 755. > > Below are the versions of the ovirt packages currently installed on my oVirt > Nodes: > > ovirt-vmconsole-1.0.5-4.el7.centos.noarch > ovirt-provider-ovn-driver-1.2.14-1.el7.noarch > ovirt-release-host-node-4.2.6-1.el7.noarch > ovirt-hosted-engine-setup-2.2.26-1.el7.noarch > ovirt-node-ng-nodectl-4.2.0-0.20180903.0.el7.noarch > ovirt-release42-4.2.6-1.el7.noarch > ovirt-imageio-common-1.4.4-0.el7.x86_64 > python-ovirt-engine-sdk4-4.2.8-2.el7.x86_64 > *ovirt-imageio-daemon-1.4.4-0.el7.noarch* > *ovirt-hosted-engine-ha-2.2.16-1.el7.noarch* > ovirt-host-4.2.3-1.el7.x86_64 > ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch > ovirt-host-deploy-1.7.4-1.el7.noarch > cockpit-machines-ovirt-172-2.el7.centos.noarch > ovirt-host-dependencies-4.2.3-1.el7.x86_64 > ovirt-engine-appliance-4.2-20180903.1.el7.noarch > ovirt-setup-lib-1.1.5-1.el7.noarch > cockpit-ovirt-dashboard-0.11.33-1.el7.noarch > ovirt-vmconsole-host-1.0.5-4.el7.centos.noarch > ovirt-node-ng-image-update-placeholder-4.2.6-1.el7.noarch > > I updated the ovirt packages, and ovirt-imageio-daemon is now running. > > ovirt-ha-agent is failing: *Failed to start monitor agents.* The engine > was successfully deployed, but I can't access it, as ovirt-ha-agent is not > running, with the error below and log files attached: > > [root@goku ovirt-hosted-engine-ha]# journalctl -xe -u ovirt-ha-agent > Sep 06 13:50:36 goku systemd[1]: ovirt-ha-agent.service holdoff time over, > scheduling restart. > Sep 06 13:50:36 goku systemd[1]: Started oVirt Hosted Engine High > Availability Monitoring Agent. 
> -- Subject: Unit ovirt-ha-agent.service has finished start-up > -- Defined-By: systemd > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel > -- > -- Unit ovirt-ha-agent.service has finished starting up. > -- > -- The start-up result is done. > Sep 06 13:50:36 goku systemd[1]: Starting oVirt Hosted Engine High > Availability Monitoring Agent... > -- Subject: Unit ovirt-ha-agent.service has begun start-up > -- Defined-By: systemd > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel > -- > -- Unit ovirt-ha-agent.service has begun starting up. > Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent > ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngi > Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent > ovirt_hosted_engine_ha.agent.agent.Agent ERROR Traceb > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen > return action(he) > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen > return > he.start_monitoring() > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen > self._initialize_broker() > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agen > m.get('options', {})) > File > "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/lib/ > .format(type, options, e)) > RequestError: Failed to start > monitor ping, options {'addr': '192.16 > Sep 06 13:50:37 goku ovirt-ha-agent[50395]: ovirt-ha-agent > ovirt_hosted_engine_ha.agent.agent.Agent ERROR Trying > Sep 06 13:50:37 goku systemd[1]: ovirt-ha-agent.service: main process > exited, code=exited, status=157/n/a > Sep 06 13:50:37 goku systemd[1]: Unit ovirt-ha-agent.service entered > failed state. > Sep 06 13:50:37 goku systemd[1]: ovirt-ha-agent.service failed. 
> lines 3528-3559/3559 (END) > > > > On Wed, Sep 5, 2018 at 1:53 PM, Simone Tiraboschi > wrote: > >> Can you please check if on your host you have >> /var/log/ovirt-imageio-daemon and its ownership and permissions (it should >> be vdsm:kvm,700)? >> Can you please report which version of ovirt-imageio-daemon you are >> using? >> We had a bug there but it has been fixed long time ago. >> >> >> On Wed, Sep 5, 2018 at 12:04 PM Sakhi Hadebe wrote: >> >>> Sorry, I mistakenly send the email: >>> >>&g
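The truncated RequestError above ("Failed to start monitor ping, options {'addr': '192.16…") suggests the HA agent's ping monitor cannot reach the gateway address recorded at deploy time. A quick check on the host, assuming the standard hosted-engine configuration path (this step is an addition, not from the original thread):

```shell
# The ping monitor targets the gateway stored in the hosted-engine config.
grep '^gateway' /etc/ovirt-hosted-engine/hosted-engine.conf

# Verify that address actually answers from this host; if it doesn't,
# the agent will keep restarting exactly as in the journal above.
ping -c 3 "$(awk -F= '/^gateway/ {print $2}' /etc/ovirt-hosted-engine/hosted-engine.conf)"
```

A stale gateway value (for example, left over from the virbr0/192.168.124.* confusion earlier in the thread) would explain the agent failing even though the engine VM itself deployed.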
[ovirt-users] Out-of-sync networks can only be detached
Hi, I have a 3-node oVirt cluster. I have configured 2 logical networks: ovirtmgmt and public. The public logical network is attached on only 2 nodes and fails to attach on the 3rd node with the below error: Invalid operation, out-of-sync network 'public' can only be detached. Please help. I have been stuck on this for almost the whole day now. How do I fix this error? -- Regards, Sakhi Hadebe ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/PP2NFQXYOVRQG7WMDTP2NK4FSWPQQCOQ/
[ovirt-users] Wrong VLAN ID for Management Network
Hi, Can I just change the VLAN ID of the ovirtmgmt network in the Admin Portal? In the OS, the network is configured and verified for the ovirtmgmt network to have VLAN ID 21, but in the Admin Portal it shows VLAN ID 20, which is configured for the VM network. Can I just change it in the Admin Portal? Will the cluster be happy about the change? -- Regards, Sakhi Hadebe ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/34RMIX2H3CVAZTSIY3QI45RRZOUNAZGJ/
[ovirt-users] Re: Out-of-sync networks can only be detached
Hi Dominik, Thank you for your help. Now I can attach all logical networks. The problem was that the configurations on the Data Center and on the hosts were not the same. After making them match, everything worked fine. On Thu, Oct 11, 2018 at 3:52 PM Dominik Holler wrote: > On Thu, 11 Oct 2018 14:00:50 +0200 > Sakhi Hadebe wrote: > > > Hi, > > > > The main issue here is the logical network that cannot be attached to the 3rd > node > > (please check the attached AttachErr.png file), while it has attached to the > > other two nodes. > > > > In the first step the configuration of ovirtmgmt on the 3rd node has to > be synchronized. If you hover over ovirtmgmt in the "Setup Host 3rd > Networks" dialog, you see a list of settings which have to be adjusted. Each > setting can be configured manually, or the "Sync All Networks" button in > "Compute >> Hosts >> 3rd >> Network Interfaces" can be used. > But if the hostname (FQDN or IP address) used to add the host to oVirt > does not match the desired configuration, this might not work. > For this reason I recommend to: > 1. Put the host in maintenance state > 2. Remove the host from oVirt > 3. Configure manually em1 (or em2 or bond0) to use the desired IP >address and VLAN for the ovirtmgmt network > 4. Add the host to oVirt using this IP address as hostname > 5. Configure all attributes of ovirtmgmt which are marked as >"Out-of-sync" via oVirt > 6. ovirtmgmt should be synchronized now, so public can now be added to >the same interface (em1, em2 or bond0 if bonded), if public has >another VLAN tag > > > > The logical network configurations are the same on all 3 nodes; > please > > check the logical configurations in the attached scrnshot2.png file > > > > Not really, the configuration shown in scrnshot2.png is not valid, because > it > is not allowed to attach two logical networks with the same VLAN id > (VLAN 20) to the same NIC/bond. > Maybe ovirtmgmt on the host is configured the way public > should be? 
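Step 3 of Dominik's procedure (configure the NIC manually before re-adding the host) could look like the following on CentOS 7. Device name, VLAN tag, and addresses below are illustrative, not taken from the thread; substitute the values that match the Data Center's network definitions:

```shell
# Parent NIC carries no IP of its own.
cat > /etc/sysconfig/network-scripts/ifcfg-em1 <<'EOF'
DEVICE=em1
BOOTPROTO=none
ONBOOT=yes
EOF

# Tagged sub-interface for the management network (VLAN 21 is an example;
# it must match the VLAN id defined for ovirtmgmt in the Data Center).
cat > /etc/sysconfig/network-scripts/ifcfg-em1.21 <<'EOF'
DEVICE=em1.21
VLAN=yes
BOOTPROTO=none
IPADDR=192.168.1.21
PREFIX=24
GATEWAY=192.168.1.1
ONBOOT=yes
EOF

systemctl restart network
```

The host is then re-added in the Admin Portal using 192.168.1.21 (or its FQDN) as the hostname, which keeps the engine's view of ovirtmgmt in sync with what the OS actually runs.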
> > > > Here are the logs grepped from engine.log file: > > > > [root@engine ~]# tail -f /var/log/ovirt-engine/engine.log | egrep > > "${OUT_OF_SYNC_VALUES} | goku.sanren.ac.za" > > 2018-10-11 13:52:20,779+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] Before > > acquiring and wait lock > > > 'EngineLock:{exclusiveLocks='[722583f2-b11a-11e8-9a47-00163e5858df=OVF_UPDATE]', > > sharedLocks=''}' > > 2018-10-11 13:52:20,779+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] Lock-wait > > acquired to object > > > 'EngineLock:{exclusiveLocks='[722583f2-b11a-11e8-9a47-00163e5858df=OVF_UPDATE]', > > sharedLocks=''}' > > 2018-10-11 13:52:20,780+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] Running > > command: ProcessOvfUpdateForStoragePoolCommand internal: true. 
Entities > > affected : ID: 722583f2-b11a-11e8-9a47-00163e5858df Type: StoragePool > > 2018-10-11 13:52:20,784+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] Attemptin > > to update VM OVFs in Data Center 'Default' > > 2018-10-11 13:52:20,789+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] > Successfully > > updated VM OVFs in Data Center 'Default' > > 2018-10-11 13:52:20,789+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] Attemptin > > to update template OVFs in Data Center 'Default' > > 2018-10-11 13:52:20,790+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [555cc9ce] > Successfully > > updated templates OVFs in Data Center 'Default' > > 2018-10-11 13:52:20,790+02 INFO > > > [org.ovirt.engine.core.bll.storage.ovfstore.ProcessOvfUpdateForStoragePoolCommand] > > (EE-ManagedThreadFactory-engineScheduled-Thread-13) [5
[ovirt-users] User Management
Hi, I need some pointers to documentation that will help me to configure users to have access and some rights (access to console, starting and shutting down own VMs) on their VMs on the VM portal. I have created a user using the ovirt-aaa-jdbc-tool utility, but I am unable to login to the cockpit portal. It gives an error: *The user shadebe@internal is not **authorized** to perform login* User details: [root@hostedengine ~]# ovirt-aaa-jdbc-tool query --what=user --pattern="name=s*" -- User shadebe(a35e8e14-d32b-4ff9-89e0-fd090a87146a) -- Namespace: * Name: shadebe ID: a35e8e14-d32b-4ff9-89e0-fd090a87146a Display Name: Email: sa...@sanren.ac.za First Name: Sakhi Last Name: Hadebe Department: Title: Description: Account Disabled: false Account Locked: false Account Unlocked At: 1970-01-01 00:00:00Z Account Valid From: 2019-01-16 09:08:48Z Account Valid To: 2219-01-16 09:08:48Z Account Without Password: false Last successful Login At: 2019-01-16 09:32:52Z Last unsuccessful Login At: 2019-01-16 09:32:36Z Password Valid To: 2029-01-16 10:30:00Z Please help. -- Regards, Sakhi Hadebe Engineer: South African National Research Network (SANReN)Competency Area, Meraka, CSIR Tel: +27 12 841 2308 <+27128414213> Fax: +27 12 841 4223 <+27128414223> Cell: +27 71 331 9622 <+27823034657> Email: sa...@sanren.ac.za ___ Users mailing list -- users@ovirt.org To unsubscribe send an email to users-le...@ovirt.org Privacy Statement: https://www.ovirt.org/site/privacy-policy/ oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/ List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/CMVCAGEWP6QDORW7FHAXX4VGJJTNUBGK/
[ovirt-users] Re: User Management
Thank you, Lucie. I assigned a role to the user and the user can now log in.

Thanks a lot.

On Tue, Jan 22, 2019 at 9:03 AM Lucie Leistnerova wrote:
> Hi Sakhi,
>
> On 1/22/19 7:52 AM, Sakhi Hadebe wrote:
>> Hi,
>>
>> I need some pointers to documentation that will help me to configure users to have access and some rights (access to console, starting and shutting down own VMs) on their VMs on the VM portal.
>>
>> I have created a user using the ovirt-aaa-jdbc-tool utility, but I am unable to log in to the cockpit portal. It gives an error:
>>
>> *The user shadebe@internal is not authorized to perform login*
>
> This error means that the user doesn't have at least UserRole on any object (could be VM, cluster, DC, ...).
>
> Roles explanation you can find here:
> https://www.ovirt.org/documentation/admin-guide/chap-Global_Configuration.html
>
> Best regards,
> --
> Lucie Leistnerova
> Quality Engineer, QE Cloud, RHVM
> Red Hat EMEA
> IRC: lleistne @ #rhev-qe

--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2SMLRQIHWWD4DKRG3BIRZLOS6RIVYCQ/
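[Editorial example] The thread does not show how the role was assigned (the Administration Portal is the usual place), but the same assignment can be scripted with the Python ovirt-engine-sdk. This is a hedged sketch, not a verified recipe: `vms_service` is assumed to come from a real `ovirtsdk4.Connection`, the VM name and user ID are placeholders, and a stub stands in for `ovirtsdk4.types` when the SDK is not installed.

```python
# Sketch only: 'myvm' and the user ID below are placeholders.
try:
    import ovirtsdk4.types as otypes
except ImportError:
    # Minimal stand-ins so the sketch runs without the SDK installed.
    class _Stub:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

    class otypes:
        User = Role = Permission = _Stub


def grant_vm_user_role(vms_service, vm_name, user_id):
    """Grant UserRole on one VM, which gives the user at least one object
    with UserRole and so clears the 'not authorized to perform login'
    error described above.

    vms_service is assumed to be connection.system_service().vms_service()
    from the Python ovirt-engine-sdk (ovirtsdk4).
    """
    # Look up the VM by an engine search query, then add a permission
    # binding this user to the UserRole on that VM.
    vm = vms_service.list(search='name=%s' % vm_name)[0]
    permissions = vms_service.vm_service(vm.id).permissions_service()
    permissions.add(otypes.Permission(
        user=otypes.User(id=user_id),
        role=otypes.Role(name='UserRole'),
    ))
    return vm.id
```

The user ID is the one shown by `ovirt-aaa-jdbc-tool query` (e.g. a35e8e14-d32b-4ff9-89e0-fd090a87146a above).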
[ovirt-users] HostedEngine Unreachable
Hi,

Our cluster was running fine until we moved it to the new network. Looking at the agent.log file, it still pings the old gateway. Not sure if this is the reason it's failing the liveliness check. Please help.

On Thu, Feb 21, 2019 at 4:39 PM Sakhi Hadebe wrote:
> Hi,
>
> I need some help. We had a working oVirt cluster in the testing environment. We have just moved it to the production environment with the same network settings. The only thing we changed is the public VLAN. In production we're using a different subnet.
>
> The problem is we can't get the HostedEngine up. It does come up, but it fails the LIVELINESS CHECK and its health status is bad. We can't even ping it. It is on the same subnet as the host machines: 192.168.x.x/24.
>
> *HostedEngine VM status:*
>
> [root@garlic qemu]# hosted-engine --vm-status
>
> --== Host 1 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : goku.sanren.ac.za
> Host ID : 1
> Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 57b2ece9
> local_conf_timestamp : 8463
> Host timestamp : 8463
> Extra metadata (valid at timestamp):
>     metadata_parse_version=1
>     metadata_feature_version=1
>     timestamp=8463 (Thu Feb 21 16:32:29 2019)
>     host-id=1
>     score=3400
>     vm_conf_refresh_time=8463 (Thu Feb 21 16:32:29 2019)
>     conf_on_shared_storage=True
>     maintenance=False
>     state=EngineDown
>     stopped=False
>
> --== Host 2 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : garlic.sanren.ac.za
> Host ID : 2
> Engine status : {"reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "Powering down"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 71dc3daf
> local_conf_timestamp : 8540
> Host timestamp : 8540
> Extra metadata (valid at timestamp):
>     metadata_parse_version=1
>     metadata_feature_version=1
>     timestamp=8540 (Thu Feb 21 16:32:31 2019)
>     host-id=2
>     score=3400
>     vm_conf_refresh_time=8540 (Thu Feb 21 16:32:31 2019)
>     conf_on_shared_storage=True
>     maintenance=False
>     state=EngineStop
>     stopped=False
>     timeout=Thu Jan 1 04:24:29 1970
>
> --== Host 3 status ==--
>
> conf_on_shared_storage : True
> Status up-to-date : True
> Hostname : gohan.sanren.ac.za
> Host ID : 3
> Engine status : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
> Score : 3400
> stopped : False
> Local maintenance : False
> crc32 : 49645620
> local_conf_timestamp : 5480
> Host timestamp : 5480
> Extra metadata (valid at timestamp):
>     metadata_parse_version=1
>     metadata_feature_version=1
>     timestamp=5480 (Thu Feb 21 16:32:22 2019)
>     host-id=3
>     score=3400
>     vm_conf_refresh_time=5480 (Thu Feb 21 16:32:22 2019)
>     conf_on_shared_storage=True
>     maintenance=False
>     state=EngineDown
>     stopped=False
>
> You have new mail in /var/spool/mail/root
>
> The services are running, but with errors:
>
> *vdsmd.service:*
> [root@garlic qemu]# systemctl status vdsmd
> ● vdsmd.service - Virtual Desktop Server Manager
>    Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled)
>    Active: active (running) since Thu 2019-02-21 16:12:12 SAST; 3min 31s ago
>   Process: 40117 ExecStopPost=/usr/libexec/vdsm/vdsmd_init_common.sh --post-stop (code=exited, status=0/SUCCESS) &
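[Editorial example] The "Engine status" field in the `hosted-engine --vm-status` output above is plain JSON, so it can be checked programmatically when watching for recovery. A stdlib-only sketch; the assumption that a healthy host reports `{"health": "good", "vm": "up", "detail": "Up"}` should be verified against your own deployment.

```python
import json


def engine_health(engine_status_json):
    """Parse the 'Engine status' JSON from `hosted-engine --vm-status`
    and report whether the engine VM is both running and passing the
    liveliness check (health == "good" and vm == "up")."""
    status = json.loads(engine_status_json)
    return status.get('health') == 'good' and status.get('vm') == 'up'
```

Both failure modes in the output above (vm down on hosts 1 and 3, vm up but failing the liveliness check on host 2) report `health: bad`, so this check treats them the same.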
[ovirt-users] Re: HostedEngine Unreachable
Hi Simone,

Thank you for your response. Executing the command below gives this:

[root@ovirt-host]# curl http://$(grep fqdn /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)/ovirt-engine/services/health
curl: (7) Failed to connect to engine.sanren.ac.za:80; No route to host

I tried to enable HTTP traffic on the ovirt-host, but the error persists.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/LK2FBU7PODOKTQONBUAX3UGQ5ZDRI5NA/
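[Editorial note] That one-liner extracts the engine FQDN from hosted-engine.conf and requests the engine's health servlet over plain HTTP; "No route to host" is returned at the network layer, before any HTTP exchange happens, which points at routing or firewalling between host and engine VM rather than at the health service. A stdlib sketch of the same URL construction (engine.example.org below is a placeholder; unlike plain `grep fqdn`, this only matches a line that starts with `fqdn=`):

```python
def engine_health_url(conf_text):
    """Rebuild the URL targeted by
    `curl http://$(grep fqdn /etc/ovirt-hosted-engine/hosted-engine.conf | cut -d= -f2)/ovirt-engine/services/health`
    from the contents of hosted-engine.conf."""
    for line in conf_text.splitlines():
        if line.startswith('fqdn='):
            fqdn = line.split('=', 1)[1].strip()
            return 'http://%s/ovirt-engine/services/health' % fqdn
    raise ValueError("no 'fqdn=' line found in hosted-engine.conf contents")
```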
[ovirt-users] CLI command to export VMs
Hi,

What is the CLI command to export VMs as OVA?

--
Regards,
Sakhi Hadebe

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/I32KQWNTIY4N6ZXTH6IGND74H2JLWJJ5/
[ovirt-users] Re: CLI command to export VMs
Thank you, Hesham.

Do I execute the script on the engine or on the oVirt hosts? And should I specify the domain of the VM and the name of the host in the values below?

# Find the virtual machine:
vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]
vm_service = vms_service.vm_service(vm.id)

# Find the host:
hosts_service = connection.system_service().hosts_service()
host = hosts_service.list(search='name=myhost')[0]

On Mon, Mar 25, 2019 at 2:49 PM Hesham Ahmed wrote:
> I don't think there is a pre-installed CLI tool for export to OVA, however you can use this:
> https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/export_vm_as_ova.py
>
> Make sure you change the Engine URL, username, password, VM and Host values to match your requirements.
>
> On Mon, Mar 25, 2019 at 3:35 PM Sakhi Hadebe wrote:
>> Hi,
>>
>> What is the CLI command to export VMs as OVA?

--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/T7UUJYCBWX7VPXVB4EGT3H2QCIWGIO5C/
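[Editorial note] For what it's worth: the SDK script talks to the engine's REST API, so it can run from any machine that has the Python ovirt-engine-sdk installed and network access to the engine (it does not need to run on a hypervisor). `name=myvm` and `name=myhost` are engine search queries: the VM's name as shown in the Administration Portal (no domain needed) and the name of the hypervisor that should write the OVA file to its local disk. A hedged sketch of the same flow, with a stub standing in for `ovirtsdk4.types` when the SDK is absent:

```python
# Sketch of the export_vm_as_ova.py flow; 'myvm'/'myhost' style names are
# placeholders, and export_to_path_on_host needs oVirt >= 4.2.
try:
    import ovirtsdk4.types as otypes
except ImportError:
    # Minimal stand-in so the sketch runs without the SDK installed.
    class _Stub:
        def __init__(self, **kwargs):
            self.__dict__.update(kwargs)

    class otypes:
        Host = _Stub


def export_vm_as_ova(vms_service, hosts_service, vm_name, host_name,
                     directory, filename):
    """Ask the engine to export `vm_name` as an OVA written into
    `directory` on the hypervisor `host_name`.

    vms_service / hosts_service are assumed to come from
    connection.system_service() of the Python ovirt-engine-sdk.
    """
    vm = vms_service.list(search='name=%s' % vm_name)[0]
    host = hosts_service.list(search='name=%s' % host_name)[0]
    vms_service.vm_service(vm.id).export_to_path_on_host(
        host=otypes.Host(id=host.id),
        directory=directory,
        filename=filename,
    )
    return vm.id
```

The resulting OVA lands on the chosen host's filesystem (e.g. `directory='/tmp'`, `filename='myvm.ova'`), not on the machine running the script.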
[ovirt-users] HostedEngine cleaned up
Hi,

We have a situation where the HostedEngine was cleaned up and the VMs are no longer running. Looking at the logs we can see the drive files as:

2019-03-26T07:42:46.915838Z qemu-kvm: -drive file=/rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685,format=qcow2,if=none,id=drive-ua-b2b872cd-b468-4f14-ae20-555ed823e84b,serial=b2b872cd-b468-4f14-ae20-555ed823e84b,werror=stop,rerror=stop,cache=none,aio=native: 'serial' is deprecated, please use the corresponding option of '-device' instead

I assume this is the disk the VM was writing to before it went down. Trying to list the file gives an error; the file is not there:

ls -l /rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685

Is there a way we can recover the VM's disk images?

NOTE: No HostedEngine backups.

--
Regards,
Sakhi Hadebe

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FPTPLMY4TC4FVYPU7U44SLEVHVD57VOS/
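[Editorial example] One way to check whether the volume file survived is to search the gluster bricks (or any mounted copy of the storage) for the UUIDs from the `-drive` line: the image-group directory (b2b872cd-...) and the volume file (76ed4113-...). A stdlib sketch; it is purely a search helper, assuming you know the brick root paths, and it cannot tell whether a hit is a healthy image.

```python
import os


def find_image_paths(root, image_group, volume=None):
    """Walk `root` (e.g. a gluster brick directory) and return every file
    whose path contains the image-group UUID, optionally also requiring
    the volume UUID from the qemu -drive line."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if image_group in path and (volume is None or volume in path):
                hits.append(path)
    return sorted(hits)
```

For example, `find_image_paths('/gluster_bricks/vmstore', 'b2b872cd-b468-4f14-ae20-555ed823e84b')` would list candidate files under a hypothetical brick root.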
[ovirt-users] Re: HostedEngine cleaned up
What happened is that the engine's root filesystem had filled up. My colleague tried to resize the root LVM volume, and the engine then did not come back. In trying to resolve that, he cleaned up the engine and tried to re-install it, with no luck. That brought down all the VMs.

All VMs are down, and we are trying to move them onto one of the standalone KVM hosts. We have been trying to locate the VM disk images, with no luck. According to the VM XML configuration file, the disk file is:

/rhev/data-center/mnt/glusterSD/glustermount.goku:_vmstore/9f8ef3f6-53f2-4b02-8a6b-e171b000b420/images/b2b872cd-b468-4f14-ae20-555ed823e84b/76ed4113-51b6-44fd-a3cd-3bd64bf93685

Unfortunately we can't find it. The solution on the forum states that it can only be found in the associated logical volume, but I think that only applies while the VM is running. The disk images we have been trying to boot from are the ones we got from the gluster bricks, but they are far smaller than the real images and can't boot.

On Thu, Apr 11, 2019 at 6:13 PM Simone Tiraboschi wrote:
> On Thu, Apr 11, 2019 at 9:46 AM Sakhi Hadebe wrote:
>> Hi,
>>
>> We have a situation where the HostedEngine was cleaned up and the VMs are no longer running. Looking at the logs we can see the drive files as:
>
> Do you have any guess on what really happened?
> Are you sure that the disks really disappeared?
>
> Please notice that the symlinks under /rhev/data-center/mnt/glusterSD/glustermount... are created on the fly only when needed.
>
> Are you sure that your host is correctly connecting the gluster storage domain?

--
Regards,
Sakhi Hadebe
Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NPI3NISZFN2EZXRX2G2RXN2SKZFSCSP7/