Hello friends of OKD

I have tried to install an OKD cluster with the following configuration: 3
masters, 3 infrastructure nodes, 6 GlusterFS nodes, and 3 compute nodes.

The idea is to provide storage with GlusterFS in an "Independent Mode"
configuration, as described here:

https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/3.11/html-single/deployment_guide/index#chap-Documentation-Container_on_RHGS

I ran the playbooks in the order described here:

https://docs.openshift.com/container-platform/3.11/install/running_install.html#running-the-advanced-installation-rpm

At this step:

ansible-playbook -i /etc/ansible/hosts
/usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

I get:

TASK [openshift_storage_glusterfs : Verify heketi service]
*********************************************************
Monday 02 December 2019  10:51:09 -0500 (0:00:00.146)       0:02:21.590
*******
 [WARNING]: Module invocation had junk after the JSON data: Last login: Mon
Dec  2 10:50:55 -05 2019

fatal: [okdpmasterctn01.example.com]: FAILED! => {"changed": false, "cmd":
["oc", "--config=/tmp/openshift-glusterfs-ansible-AmgQBn/admin.kubeconfig",
"rsh", "--namespace=app-storage", "deploy-heketi-storage-1-f6hb4",
"heketi-cli", "-s", "http://localhost:8080", "--user", "admin", "--secret",
"", "cluster", "list"], "delta": "0:00:01.400200", "end": "2019-12-02
10:50:57.718776", "msg": "non-zero return code", "rc": 255, "start":
"2019-12-02 10:50:56.318576", "stderr": "Error: Invalid JWT token:
signature is invalid (client and server secrets may not match)\ncommand
terminated with exit code 255", "stderr_lines": ["Error: Invalid JWT token:
signature is invalid (client and server secrets may not match)", "command
terminated with exit code 255"], "stdout": "", "stdout_lines": []}

Each server has 3 disks of 300 GiB for GlusterFS.

This is my /etc/ansible/hosts:

https://pastebin.com/FCXHSmtz
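For reference, the heketi keys the playbooks use come from inventory variables
like these (variable names from openshift-ansible; the values here are
placeholders, not my real keys):

```ini
[OSEv3:vars]
# Key for the heketi "admin" user -- must match what the heketi pod runs with
openshift_storage_glusterfs_heketi_admin_key=ADMIN_KEY_PLACEHOLDER
# Key for the unprivileged heketi user
openshift_storage_glusterfs_heketi_user_key=USER_KEY_PLACEHOLDER
```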

kubectl get all --all-namespaces

app-storage      pod/deploy-heketi-storage-1-f6hb4   1/1   Running   0   2d

kubectl logs -n app-storage deploy-heketi-storage-1-f6hb4

https://pastebin.com/3QQUAYhs

kubectl describe pod deploy-heketi-storage-1-f6hb4 --namespace=app-storage

https://pastebin.com/3hTNKfug
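In case it is useful for debugging, this is how one could compare the admin
key the deploy-heketi pod is actually running with against the key stored in
the cluster secret. The env var name (HEKETI_ADMIN_KEY) and the secret name
are my assumptions from the heketi templates and may differ in your cluster;
the snippet is guarded so it only queries the cluster where oc is available:

```shell
NS=app-storage
POD=deploy-heketi-storage-1-f6hb4
if command -v oc >/dev/null 2>&1; then
  # Key the running heketi container was started with (assumed env var name)
  oc rsh -n "$NS" "$POD" sh -c 'echo "$HEKETI_ADMIN_KEY"'
  # Key stored in the secret that clients use (assumed secret name)
  oc get secret -n "$NS" heketi-storage-admin-secret \
    -o 'jsonpath={.data.key}' | base64 -d && echo
else
  echo "oc not available here"
fi
```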

/var/log/glusterfs/cli.log

[2019-12-02 18:44:51.425942] I [cli.c:773:main] 0-cli: Started running
gluster with version 4.1.9
[2019-12-02 18:44:51.433414] I
[cli-cmd-volume.c:2375:cli_check_gsync_present] 0-: geo-replication not
installed
[2019-12-02 18:44:51.433846] I [MSGID: 101190]
[event-epoll.c:617:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2019-12-02 18:44:51.433921] I [socket.c:2632:socket_event_handler]
0-transport: EPOLLERR - disconnecting now
[2019-12-02 18:44:51.434235] W [rpc-clnt.c:1753:rpc_clnt_submit]
0-glusterfs: error returned while attempting to connect to host:(null),
port:0
[2019-12-02 18:44:51.474517] I [cli-rpc-ops.c:2316:gf_cli_set_volume_cbk]
0-cli: Received resp to set
[2019-12-02 18:44:51.474855] I [input.c:31:cli_batch] 0-: Exiting with: 0

/var/log/glusterfs/cmd_history.log

[2019-12-02 18:44:51.473957]  : volume set help : SUCCESS


/var/log/glusterfs/glusterd.log

[2019-12-02 18:44:51.459166] W [MSGID: 101095]
[xlator.c:181:xlator_volopt_dynload] 0-xlator:
/usr/lib64/glusterfs/4.1.9/xlator/nfs/server.so: cannot open shared object
file: No such file or directory
The message "W [MSGID: 101095] [xlator.c:181:xlator_volopt_dynload]
0-xlator: /usr/lib64/glusterfs/4.1.9/xlator/nfs/server.so: cannot open
shared object file: No such file or directory" repeated 30 times between
[2019-12-02 18:44:51.459166] and [2019-12-02 18:44:51.459461]

/usr/share/ansible/openshift-ansible/roles/openshift_storage_glusterfs/tasks/update_topology.yml

---
# This taskfile is called when adding new nodes doing node and master
# scaleup play.
- import_tasks: mktemp.yml

# l_gluster_reload_topo passed in via add_hosts.yml
- when: l_gluster_reload_topo | default(True)
  block:
  - import_tasks: glusterfs_config_facts.yml
  - import_tasks: label_nodes.yml
  - import_tasks: heketi_get_key.yml
  - import_tasks: heketi_pod_check.yml
  - import_tasks: wait_for_pods.yml
  - import_tasks: heketi_load.yml
    when:
    - glusterfs_nodes | default([]) | count > 0

# l_gluster_registry_reload_topo passed in via add_hosts.yml
- when: l_gluster_registry_reload_topo | default(True)
  block:
  - import_tasks: glusterfs_registry_facts.yml
  - import_tasks: label_nodes.yml
  - import_tasks: heketi_get_key.yml
  - import_tasks: heketi_pod_check.yml
  - import_tasks: wait_for_pods.yml
  - import_tasks: heketi_load.yml
    when:
    - glusterfs_nodes | default([]) | count > 0
    - "'glusterfs' not in groups or glusterfs_nodes != groups.glusterfs"

- import_tasks: rmtemp.yml
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
