Yeah, I did not include Grafana, Hawkular, etc. in the installation; I have an ELK stack with OKD integration.
I set up the cluster on CentOS 7.5 with a checkout of the openshift-ansible playbooks repo on the "release-3.11" branch, and my inventory file looks like this (attached text file).

--

Best,

Alexander Kozhemyakin

From: <users-boun...@lists.openshift.redhat.com> on behalf of Bobby Corpus 
<bobby.cor...@icloud.com>
Date: Tuesday, 27 November 2018 at 15:46
To: Erekle Magradze <erekle.magra...@recogizer.de>
Cc: "users@lists.openshift.redhat.com" <users@lists.openshift.redhat.com>
Subject: Re: problem with installation of Okd 3.11


I managed to install OKD 3.11 as of today. I'll share my experience:



My setup:

NAME    STATUS    ROLES          AGE       VERSION
node1   Ready     infra,master   12h       v1.11.0+d4cacc0
node2   Ready     infra          12h       v1.11.0+d4cacc0
node3   Ready     compute        12h       v1.11.0+d4cacc0

I needed to do a lot of workarounds, including restarting all nodes.



I bumped into this error:

- FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (180 retries left).
- FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (179 retries left).



and into this error:

- TASK [openshift_cluster_monitoring_operator : Wait for the ServiceMonitor CRD to be created] ********************************
- Monday 26 November 2018  19:31:50 -0500 (0:00:01.251)       0:11:17.509 *******
- FAILED - RETRYING: Wait for the ServiceMonitor CRD to be created (30 retries left).
- FAILED - RETRYING: Wait for the ServiceMonitor CRD to be created (29 retries left).



and I noticed that:

- No networks found in /etc/cni/net.d

There was no file in that directory. In my other, successful installations of an OpenShift enterprise cluster v3.11, that directory contains a file called 80-openshift-network.conf.



To work around this, I created the file /etc/cni/net.d/80-openshift-network.conf with contents:

{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}







I then copied this file to all nodes in the cluster.
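For reference, pushing the file out with Ansible can look roughly like this (a sketch; it assumes your inventory defines the usual "nodes" group and that the file is staged in your current directory):

# copy the CNI config to every node (the local staged path is hypothetical)
ansible nodes -b -m copy -a 'src=./80-openshift-network.conf dest=/etc/cni/net.d/80-openshift-network.conf owner=root group=root mode=0644'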



Then I restarted all the nodes and ran the deploy_cluster playbook again. This time I was lucky: the install proceeded.
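(Roughly, from the admin host; "hosts.ini" is a stand-in for your inventory file, and the playbook path matches an RPM install of openshift-ansible, so adjust it if you run from a git checkout:)

# restart the node service everywhere, then re-run the installer
ansible nodes -b -m service -a 'name=origin-node state=restarted'
ansible-playbook -i hosts.ini /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml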



However, I had to check the following:



1. The file /etc/dnsmasq.d/origin-upstream-dns.conf is present and contains my upstream DNS servers. For example:

cat /etc/dnsmasq.d/origin-upstream-dns.conf
server=8.8.8.8







If not present, I created it and restarted dnsmasq.
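(Roughly, on each node where it was missing; 8.8.8.8 here is just an example, use your real upstream resolver:)

# write the upstream-DNS drop-in and restart dnsmasq
echo 'server=8.8.8.8' > /etc/dnsmasq.d/origin-upstream-dns.conf
systemctl restart dnsmasq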



2. That I'm able to resolve names through dnsmasq on all nodes. For example:

ansible nodes -a 'dig yahoo.com'
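Each node should answer through its own dnsmasq. A quick way to check that explicitly (a sketch; it assumes hostname -i returns the node's primary IP rather than 127.0.0.1):

# query each node's own resolver directly
ansible nodes -m shell -a 'dig +short yahoo.com @$(hostname -i)'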



3. That the file /etc/resolv.conf is correct:

cat /etc/resolv.conf
# nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
search cluster.local
nameserver xxx.xxx.xxx.xxx

where xxx.xxx.xxx.xxx is the IP of the current node.
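(This can be checked cluster-wide with, for example:)

ansible nodes -a 'cat /etc/resolv.conf'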



4. The file /etc/cni/net.d/80-openshift-network.conf is present and has contents:

{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}
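(A quick existence check across the cluster, as a sketch:)

ansible nodes -m stat -a 'path=/etc/cni/net.d/80-openshift-network.conf'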



5. My grafana pod did not run. The error was "No API token found for service 
account "grafana", retry after the token is automatically created and added to 
the service account"



oc describe sa grafana -n openshift-monitoring

Name:                grafana
Namespace:           openshift-monitoring
Labels:              <none>
Annotations:         serviceaccounts.openshift.io/oauth-redirectreference.grafana={"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"grafana"}}
Image pull secrets:  grafana-dockercfg-dnvvm
Mountable secrets:   grafana-dockercfg-dnvvm
                     grafana-token-6sw2j
Tokens:              grafana-token-6sw2j
Events:              <none>



There was one token and I deleted it using:



oc delete secret grafana-token-6sw2j



Two new tokens were generated and the grafana pod then started successfully.
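(To confirm the tokens were recreated and give the pod another go; the app=grafana label is an assumption about how the monitoring stack labels its pods:)

# list the regenerated token secrets, then let the deployment recreate the pod
oc get secret -n openshift-monitoring | grep grafana-token
oc delete pod -n openshift-monitoring -l app=grafana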



Best regards,



Bobby



On Nov 26, 2018, at 09:46 PM, Erekle Magradze <erekle.magra...@recogizer.de> 
wrote:
Hello Guys,

Did anyone face a similar problem? It says that the network component of K8s had a problem during installation.

So, I am failing at the final steps of the command

ansible-playbook 
/usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml

The problem looks like this:

TASK [openshift_node_group : Wait for sync DS to set annotations on master 
nodes] 
***********************************************************************************************************************************************************
FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (180 
retries left).
FAILED - RETRYING: Wait for sync DS to set annotations on master nodes (179 
retries left).
...
...
...

The final message looks like this:

fatal: [os-master.apps.mydomain.net]: FAILED! => {"attempts": 180, "changed": 
false, "results": {"cmd": "/usr/bin/oc get node --selector= -o json -n 
default", "results": [{"apiVersion": "v1", "items": [{"apiVersion": "v1", 
"kind": "Node", "metadata": {"annotations": 
{"volumes.kubernetes.io/controller-managed-attach-detach": "true"}, 
"creationTimestamp": "2018-11-25T21:49:08Z", "labels": 
{"beta.kubernetes.io/arch": "amd64", "beta.kubernetes.io/os": "linux", 
"kubernetes.io/hostname": "os-master.apps.mydomain.net"}, "name": 
"os-master.apps.mydomain.net", "namespace": "", "resourceVersion": "33274", 
"selfLink": "/api/v1/nodes/os-master.apps.mydomain.net", "uid": 
"efd782f8-f0fb-11e8-a72f-001a4a160102"}, "spec": {}, "status": {"addresses": 
[{"address": "172.31.1.71", "type": "InternalIP"}, {"address": 
"os-master.apps.mydomain.net", "type": "Hostname"}], "allocatable": {"cpu": 
"16", "hugepages-2Mi": "0", "memory": "32676788Ki", "pods": "250"}, "capacity": 
{"cpu": "16", "hugepages-2Mi": "0", "memory": "32779188Ki", "pods": "250"}, 
"conditions": [{"lastHeartbeatTime": "2018-11-26T05:44:12Z", 
"lastTransitionTime": "2018-11-25T21:49:08Z", "message": "kubelet has 
sufficient disk space available", "reason": "KubeletHasSufficientDisk", 
"status": "False", "type": "OutOfDisk"}, {"lastHeartbeatTime": 
"2018-11-26T05:44:12Z", "lastTransitionTime": "2018-11-25T21:49:08Z", 
"message": "kubelet has sufficient memory available", "reason": 
"KubeletHasSufficientMemory", "status": "False", "type": "MemoryPressure"}, 
{"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": 
"2018-11-25T21:49:08Z", "message": "kubelet has no disk pressure", "reason": 
"KubeletHasNoDiskPressure", "status": "False", "type": "DiskPressure"}, 
{"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": 
"2018-11-25T21:49:08Z", "message": "kubelet has sufficient PID available", 
"reason": "KubeletHasSufficientPID", "status": "False", "type": "PIDPressure"}, 
{"lastHeartbeatTime": "2018-11-26T05:44:12Z", "lastTransitionTime": 
"2018-11-25T21:49:08Z", "message": "runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin 
is not ready: cni config uninitialized", "reason": "KubeletNotReady", "status": 
"False", "type": "Ready"}], "daemonEndpoints": {"kubeletEndpoint": {"Port": 
10250}}, "images": [{"names": 
["docker.io/openshift/origin-node@sha256:8a8e6341cc3af32953ee2a313a5f0973bed538f0326591c48188f4de9617c992",
 "docker.io/openshift/origin-node:v3.11.0"], "sizeBytes": 1157253404}, 
{"names": 
["docker.io/openshift/origin-control-plane@sha256:181069f5d67cc2ba8d9b3c80efab8eda107eb24140f2d9f4a394cdb164c28a86",
 "docker.io/openshift/origin-control-plane:v3.11.0"], "sizeBytes": 818390387}, 
{"names": 
["docker.io/openshift/origin-pod@sha256:1641b78e32c100938b2db51088e284568a056a3716492db78335a3e35be03853",
 "docker.io/openshift/origin-pod:v3.11.0"], "sizeBytes": 253795602}, {"names": 
["quay.io/coreos/etcd@sha256:43fbc8a457aa0cb887da63d74a48659e13947cb74b96a53ba8f47abb6172a948",
 "quay.io/coreos/etcd:v3.2.22"], "sizeBytes": 37269372}], "nodeInfo": 
{"architecture": "amd64", "bootID": "2c873547-4546-45ae-af3b-bd685a7d556f", 
"containerRuntimeVersion": "docker://1.13.1", "kernelVersion": 
"3.10.0-862.14.4.el7.x86_64", "kubeProxyVersion": "v1.11.0+d4cacc0", 
"kubeletVersion": "v1.11.0+d4cacc0", "machineID": 
"159ec545080a4b849d70b5b10694bd1a", "operatingSystem": "linux", "osImage": 
"CentOS Linux 7 (Core)", "systemUUID": 
"159EC545-080A-4B84-9D70-B5B10694BD1A"}}}], "kind": "List", "metadata": 
{"resourceVersion": "", "selfLink": ""}}], "returncode": 0}, "state": "list"}

In /var/log/messages on the master node I see the following:

Nov 26 06:55:55 os-master origin-node: E1126 06:55:53.495294    5353 
kubelet.go:2101] Container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized
Nov 26 06:55:58 os-master origin-node: W1126 06:55:58.496250    5353 
cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:55:58 os-master origin-node: E1126 06:55:58.496391    5353 
kubelet.go:2101] Container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized
Nov 26 06:56:03 os-master origin-node: W1126 06:56:03.497529    5353 
cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:03 os-master origin-node: E1126 06:56:03.497667    5353 
kubelet.go:2101] Container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized
Nov 26 06:56:09 os-master origin-node: I1126 06:56:07.931551    5353 
container_manager_linux.go:428] [ContainerManager]: Discovered runtime cgroups 
name: /system.slice/docker.service
Nov 26 06:56:09 os-master origin-node: W1126 06:56:08.498812    5353 
cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:09 os-master origin-node: E1126 06:56:08.498929    5353 
kubelet.go:2101] Container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized
Nov 26 06:56:13 os-master origin-node: W1126 06:56:13.499781    5353 
cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Nov 26 06:56:13 os-master origin-node: E1126 06:56:13.500567    5353 
kubelet.go:2101] Container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni 
config uninitialized

Can you please advise what to do in this case and how to solve the problem?

Many thanks in advance

Best Regards

Erekle
_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
Attached inventory file:

[masters]
dca-master[1:3]

[etcd]
dca-master[1:3]

[nodes]
dca-master[1:3] openshift_node_group_name='node-config-master'
dca-infra[1:2] openshift_node_group_name='node-config-infra'
dca-node[1:3] openshift_node_group_name='node-config-compute'

[nfs]

[lb]

[new_masters]

[new_nodes]

[OSEv3:children]
masters
nodes
etcd
#lb
#nfs
#new_nodes
#new_masters

[OSEv3:vars]
openshift_master_default_subdomain=XXXX
openshift_install_examples=true
openshift_master_named_certificates=[{"certfile": "/opt/origin/clusters/XX/certs/XXX.crt", "keyfile": "/opt/origin/clusters/XX/XXX.key", "cafile": "/opt/origin/certs/ca-web.crt", "names": ['XXXXX']}]
openshift_release='3.11'
openshift_pkg_version='-3.11.0'
openshift_image_tag='v3.11.0'
debug_level=2
openshift_master_cluster_public_hostname=XXXX
openshift_use_openshift_sdn=true
os_sdn_network_plugin_name='redhat/openshift-ovs-networkpolicy'
openshift_hosted_router_replicas=2
openshift_hosted_router_selector='node-role.kubernetes.io/infra=true'
openshift_hosted_routers=XXXX
openshift_hosted_manage_registry=false
openshift_hosted_manage_registry_console=false
openshift_metrics_install_metrics=false
openshift_master_cluster_hostname=XXX
ansible_become=true
openshift_master_overwrite_named_certificates=true
openshift_master_open_ports=[{"service":"zabbix","port":"10050/tcp"}]
openshift_node_open_ports=[{"service":"zabbix","port":"10050/tcp"}]
openshift_node_dnsmasq_additional_config_file='/opt/origin/files/dnsmasq-additional-config'
openshift_deployment_type=origin
openshift_master_identity_providers=XXXX
openshift_master_ldap_ca_file='/opt/origin/certs/DigiCertCA.crt'
ansible_ssh_user=XXXXX
osn_storage_plugin_deps=['nfs','iscsi']
openshift_master_cluster_method=native
openshift_master_api_port=8443
openshift_master_console_port=8443
openshift_clock_enabled=true
osm_use_cockpit=true
osm_cockpit_plugins=['cockpit-kubernetes']
openshift_docker_options="--log-driver=journald --signature-verification=false -l warn --ipv6=false --storage-driver=overlay2"
openshift_docker_selinux_enabled=True
openshift_enable_service_catalog=false
template_service_broker_install=false
openshift_cluster_monitoring_operator_install=false
openshift_additional_repos=[{'id': 'centos-openshift-origin-311', 'name': 'CentOS-OpenShift-Origin-311', 'baseurl': 'http://mirror.centos.org/centos/7/paas/x86_64/openshift-origin311/', 'enabled': 1, 'gpgcheck': 1, 'gpgkey': 'https://raw.githubusercontent.com/CentOS-PaaS-SIG/centos-release-paas-common/master/RPM-GPG-KEY-CentOS-SIG-PaaS'}]