Hi Andrej,

Thanks for this useful information. I've changed my config and restarted the playbook. Unfortunately, the problem still occurs, but I think the issue is elsewhere, as indicated in the logs.
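Since the failure quoted below is "connection refused" on master:8443, it can help to check from the workstation whether anything is listening on that port at all, to separate a DNS problem from an API server that never came up. A minimal sketch (the helper name is hypothetical; hostname and port are taken from the log excerpt quoted below):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A name-resolution failure (socket.gaierror) and a refused or
    timed-out connection both surface as OSError and yield False.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (values from the thread): port_open("master.lab.oshift.edu", 8443)
```

If `dig master.lab.oshift.edu` works but this returns False, the problem is on the master itself (the API static pod is not running), not in DNS.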
Best regards,
J

-----Original Message-----
From: Andrej Golis <andrej.go...@gmail.com>
Sent: Thursday, 23 May 2019 18:15
To: Jérôme Meyer <jerome.me...@lcsystems.ch>
Cc: users@lists.openshift.redhat.com
Subject: Re: Issue by installing OKD OpenShift 3.11

Hi,

if you have master and etcd colocated on the same node, you should use the 'node-config-master-infra' node group instead of 'node-config-master'. Check the last 2 paragraphs of [1].

Andrej

[1] https://docs.openshift.com/container-platform/3.11/install/configuring_inventory_file.html#configuring-dedicated-infrastructure-nodes

On Thu, May 23, 2019 at 10:42 AM Jérôme Meyer <jerome.me...@lcsystems.ch> wrote:
>
> Dear Team,
>
> I've run into some issues installing OpenShift (OKD 3.11) on 3 VMs (1 master and 2 nodes).
> I followed the recommendations and procedure described in the docs.
> The Ansible prerequisites playbook ran without issue; all was fine.
> But unfortunately the deploy_cluster playbook did not finish: errors appear when it starts the control plane pods.
>
> 2019-05-17 16:58:52,157 p=6592 u=root | FAILED - RETRYING: Wait for control plane pods to appear (2 retries left).
> 2019-05-17 16:58:57,607 p=6592 u=root | FAILED - RETRYING: Wait for control plane pods to appear (1 retries left).
> 2019-05-17 16:59:02,998 p=6592 u=root | failed: [master.lab.oshift.edu] (item=etcd) => {"attempts": 60, "changed": false, "item": "etcd", "msg": {"cmd": "/usr/bin/oc get pod master-etcd-master.lab.oshift.edu -o json -n kube-system", "results": [{}], "returncode": 1, "stderr": "The connection to the server master:8443 was refused - did you specify the right host or port?\n", "stdout": ""}}
> 2019-05-17 16:59:03,531 p=6592 u=root | FAILED - RETRYING: Wait for control plane pods to appear (60 retries left).
> 2019-05-17 16:59:08,980 p=6592 u=root | FAILED - RETRYING: Wait for control plane pods to appear (59 retries left).
>
> Regarding this issue, I checked the master server and I did not see port 8443 open, and no http/nginx or any other such service is running. Strange.
>
> The DNS server is installed on a VM called "services", and the dig command resolves correctly.
>
> Please let me know whether I failed to install something or the inventory config is wrong. What should I do to troubleshoot this problem?
>
> Thanks and best regards, J.
>
> Here's the inventory file:
>
> # cat inventory/hosts
> #####################################################################
> #
> # HOSTS configuration for our labs
> #
> # 2019-05-17
> #
> #####################################################################
>
> [workstation]
> workstation.lab.oshift.edu
>
> [masters]
> master.lab.oshift.edu
>
> [etcd]
> master.lab.oshift.edu
>
> [nodes]
> master.lab.oshift.edu openshift_node_group_name="node-config-master"
> node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
> node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
>
> [nfs]
> services.lab.oshift.edu
>
> # Create an OSEv3 group that contains the masters and nodes groups
> [OSEv3:children]
> masters
> nodes
> etcd
> nfs
>
> [OSEv3:vars]
> ###############################################################################
> # Common / Required configuration variables follow
> ###############################################################################
>
> # How Ansible accesses hosts
> ansible_user=root
> ansible_become=true
>
> openshift_deployment_type=origin
>
> openshift_release="3.11"
>
> openshift_master_default_subdomain=apps.lab.oshift.edu
>
> ###############################################################################
> # Additional configuration variables follow
> ###############################################################################
>
> # DEBUG
> debug_level=4
>
> # DISABLE SOME CHECKS
> openshift_disable_check=disk_availability,memory_availability,docker_storage
>
> # Enable etcd debug logging, defaults to false
> etcd_debug=true
> # Set etcd log levels by package
> etcd_log_package_levels="etcdserver=WARNING,security=INFO"
>
> # htpasswd auth
> openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
> # Defining htpasswd users
> openshift_master_htpasswd_users={'admin': '$apr1$Ky/ZY39n$Z8/t3xJsnxGANzypVTtmD0', 'developer': '$apr1$MdVAOTmy$8nB.ANU4OeciLjDeU68w/1'}
>
> # Option B - External NFS Host
> openshift_hosted_registry_storage_kind=nfs
> openshift_hosted_registry_storage_access_modes=['ReadWriteMany']
> openshift_hosted_registry_storage_nfs_directory=/openshift_storage
> openshift_hosted_registry_storage_nfs_options='*(rw,root_squash)'
> openshift_hosted_registry_storage_volume_name=registry
> openshift_hosted_registry_storage_volume_size=10Gi
>
> # ENABLE FIREWALLD
> os_firewall_use_firewalld=true
> [root@workstation openshift-ansible]#
>
> _______________________________________________
> users mailing list
> users@lists.openshift.redhat.com
> http://lists.openshift.redhat.com/openshiftmm/listinfo/users
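For the archive: since the master above is also the etcd host and the inventory defines no dedicated infra nodes, the change Andrej suggests amounts to editing only the master's entry in the `[nodes]` group of the inventory quoted above (a sketch of the suggested edit; the rest of the file is unchanged):

```ini
[nodes]
# master is colocated with etcd and there are no dedicated infra
# nodes, so per the linked docs it should carry the combined
# master-infra node group rather than node-config-master
master.lab.oshift.edu openshift_node_group_name="node-config-master-infra"
node1.lab.oshift.edu openshift_node_group_name="node-config-compute"
node2.lab.oshift.edu openshift_node_group_name="node-config-compute"
```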