On 07/12/2017 01:43 PM, Simone Marchioni wrote:
On 11/07/2017 11:23, knarra wrote:
On 07/11/2017 01:32 PM, Simone Marchioni wrote:
On 11/07/2017 07:59, knarra wrote:

Hi,

I removed the partition signatures with wipefs and ran the deployment again: this time the VG and LV creation worked correctly. The deployment then proceeded until it hit some new errors... :-/


PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
    to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1


PLAY [gluster_servers] *********************************************************

TASK [Start firewalld if not already started] **********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Add/Delete services to firewalld rules] **********************************
failed: [ha1.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha2.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
failed: [ha3.domain.it] (item=glusterfs) => {"failed": true, "item": "glusterfs", "msg": "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)"}
    to retry, use: --limit @/tmp/tmp5Dtb2G/firewalld-service-op.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=1    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=1    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=1    changed=0    unreachable=0 failed=1


PLAY [gluster_servers] *********************************************************

TASK [Start firewalld if not already started] **********************************
ok: [ha1.domain.it]
ok: [ha2.domain.it]
ok: [ha3.domain.it]

TASK [Open/Close firewalld ports] **********************************************
changed: [ha1.domain.it] => (item=111/tcp)
changed: [ha2.domain.it] => (item=111/tcp)
changed: [ha3.domain.it] => (item=111/tcp)
changed: [ha1.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=2049/tcp)
changed: [ha1.domain.it] => (item=54321/tcp)
changed: [ha3.domain.it] => (item=2049/tcp)
changed: [ha2.domain.it] => (item=54321/tcp)
changed: [ha1.domain.it] => (item=5900/tcp)
changed: [ha3.domain.it] => (item=54321/tcp)
changed: [ha2.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=5900-6923/tcp)
changed: [ha3.domain.it] => (item=5900/tcp)
changed: [ha1.domain.it] => (item=5666/tcp)
changed: [ha2.domain.it] => (item=5666/tcp)
changed: [ha1.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5900-6923/tcp)
changed: [ha2.domain.it] => (item=16514/tcp)
changed: [ha3.domain.it] => (item=5666/tcp)
changed: [ha3.domain.it] => (item=16514/tcp)

TASK [Reloads the firewall] ****************************************************
changed: [ha1.domain.it]
changed: [ha2.domain.it]
changed: [ha3.domain.it]

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=3    changed=2    unreachable=0 failed=0
ha2.domain.it            : ok=3    changed=2    unreachable=0 failed=0
ha3.domain.it            : ok=3    changed=2    unreachable=0 failed=0


PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
    to retry, use: --limit @/tmp/tmp5Dtb2G/run-script.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1


PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] **********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003182", "end": "2017-07-10 18:30:51.204235", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.201053", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007698", "end": "2017-07-10 18:30:51.391046", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.383348", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.004120", "end": "2017-07-10 18:30:51.405640", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-10 18:30:51.401520", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
    to retry, use: --limit @/tmp/tmp5Dtb2G/shell_cmd.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1


PLAY [gluster_servers] *********************************************************

TASK [start/stop/restart/reload services] **************************************
failed: [ha1.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha2.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
failed: [ha3.domain.it] (item=glusterd) => {"failed": true, "item": "glusterd", "msg": "Could not find the requested service glusterd: host"}
    to retry, use: --limit @/tmp/tmp5Dtb2G/service_management.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1

Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...
Ignoring errors...


In start/stop/restart/reload services it complains about "Could not find the requested service glusterd: host". Does GlusterFS need to be preinstalled or not? I simply installed the rpm packages manually BEFORE the deployment:

yum install glusterfs glusterfs-cli glusterfs-libs glusterfs-client-xlators glusterfs-api glusterfs-fuse

but never configured anything.
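For reference, here is the quick check I could have run to see whether a glusterd unit was even installed; a minimal sketch (the `unit_present` helper is invented for this sketch; on CentOS 7 the glusterd unit comes with the glusterfs-server package, which is not in the list above):

```shell
# unit_present UNIT_FILES NAME: grep a `systemctl list-unit-files`
# dump for NAME.service. The helper name is invented for this sketch.
unit_present() {
    echo "$1" | grep -q "^$2\.service"
}

# On a live host:
#   unit_present "$(systemctl list-unit-files)" glusterd \
#       && echo "glusterd unit found" || echo "install glusterfs-server"
```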
Looks like it failed to add the 'glusterfs' service using firewalld. Can we try again with what Gianluca suggested?

Can you please install the latest oVirt rpm, which will add all the required dependencies, and make sure that the following packages are installed before running gdeploy?

yum install vdsm-gluster ovirt-hosted-engine-setup gdeploy cockpit-ovirt-dashboard

As for the firewalld problem "ERROR: Exception caught: org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs' not among existing services Services are defined by port/tcp relationship and named as they are in /etc/services (on most systems)": I haven't touched anything... it's an out-of-the-box installation of CentOS 7.3.
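(For completeness: firewalld only accepts service names for which it has a definition file. A throwaway helper to check whether 'glusterfs' is among the known services; `knows_service` is invented here, and as far as I know the glusterfs definition file ships with glusterfs-server on CentOS 7.)

```shell
# knows_service SERVICES NAME: check a space-separated service list
# (as printed by `firewall-cmd --get-services`) for an exact name.
knows_service() {
    echo "$1" | tr ' ' '\n' | grep -qx "$2"
}

# Live usage:
#   knows_service "$(firewall-cmd --get-services)" glusterfs \
#       && echo present || echo "definition missing"
```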

I don't know whether the following problems - "Run a shell script" and "usermod: group 'gluster' does not exist" - are related to these... maybe the usermod problem is.
You can safely ignore this; it has nothing to do with the configuration.

Thank you again.
Simone
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users



Hi,

I'm replying here to both Gianluca and Kasturi.

Gianluca: I had ovirt-4.1-dependencies.repo enabled and the gluster 3.8 packages, but glusterfs-server was missing from my "yum install" command, so I added glusterfs-server to my installation.

Kasturi: the packages ovirt-hosted-engine-setup, gdeploy and cockpit-ovirt-dashboard were already installed and up to date. vdsm-gluster was missing, so I added it to my installation.
okay, cool.

I reran the deployment and IT WORKED! I can read the message "Successfully deployed Gluster" with the blue button "Continue to Hosted Engine Deployment". There's a minor glitch in the window: the green "V" in the circle is missing, as if an image is missing (or has a wrong path, since I had to remove "ansible" from the grafton-sanity-check.sh path...)
There is a bug for this and it will be fixed soon. Here is the bug id for your reference: https://bugzilla.redhat.com/show_bug.cgi?id=1462082

Although the deployment worked, and the firewalld and glusterfs errors are gone, a couple of errors remain:


AFTER VG/LV CREATION, START/STOP/RELOAD/GLUSTER AND FIREWALLD HANDLING:

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ha1.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha2.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ha3.domain.it]: FAILED! => {"failed": true, "msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
    to retry, use: --limit @/tmp/tmpJnz4g3/run-script.retry
Maybe you missed changing the path of the script "/usr/share/ansible/gdeploy/scripts/disable-gluster-hooks.sh"; that is why this fails.
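That would also explain the odd message: if the script path is wrong, the task never runs, so `result` has no `rc` and the `result.rc != 0` conditional itself blows up. A quick sanity check before rerunning (the `script_ok` helper is invented, and the example path is only a guess at the corrected one, per the earlier note about dropping "ansible" from the path):

```shell
# script_ok PATH: succeed only if PATH exists and is readable, i.e.
# the script path in the gdeploy conf points at a real file.
script_ok() {
    [ -r "$1" ]
}

# e.g. (corrected path is an assumption):
#   script_ok /usr/share/gdeploy/scripts/disable-gluster-hooks.sh \
#       || echo "fix the script path in the gdeploy conf"
```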

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1


PLAY [gluster_servers] *********************************************************

TASK [Run a command in the shell] **********************************************
failed: [ha1.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003144", "end": "2017-07-12 00:22:46.836832", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.833688", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha2.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.003647", "end": "2017-07-12 00:22:46.895964", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:46.892317", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
failed: [ha3.domain.it] (item=usermod -a -G gluster qemu) => {"changed": true, "cmd": "usermod -a -G gluster qemu", "delta": "0:00:00.007008", "end": "2017-07-12 00:22:47.016600", "failed": true, "item": "usermod -a -G gluster qemu", "rc": 6, "start": "2017-07-12 00:22:47.009592", "stderr": "usermod: group 'gluster' does not exist", "stderr_lines": ["usermod: group 'gluster' does not exist"], "stdout": "", "stdout_lines": []}
    to retry, use: --limit @/tmp/tmpJnz4g3/shell_cmd.retry

PLAY RECAP *********************************************************************
ha1.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha2.domain.it            : ok=0    changed=0    unreachable=0 failed=1
ha3.domain.it            : ok=0    changed=0    unreachable=0 failed=1
This error can be safely ignored.


Are these a problem for my installation, or can I ignore them?
You can just run the script manually on all the nodes to disable the hooks. The other error you can ignore.
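One way to sketch that, assuming root ssh access to the three hosts named in the thread (the `emit_cmds` helper and the script path without "ansible" are both assumptions):

```shell
# emit_cmds SCRIPT HOST...: print one ssh command per host instead of
# executing directly, so the list can be reviewed before piping to sh.
emit_cmds() {
    script="$1"; shift
    for host in "$@"; do
        echo "ssh root@$host sh $script"
    done
}

# Review, then run:
#   emit_cmds /usr/share/gdeploy/scripts/disable-gluster-hooks.sh \
#       ha1.domain.it ha2.domain.it ha3.domain.it | sh
```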

By the way, I'm documenting this process as I go, and I can prepare a tutorial if anyone is interested.

Thank you again for your support: now I'll proceed with the Hosted Engine Deployment.
Good to know that you can now start with Hosted Engine Deployment.

Simone

