Thank you, Rich, for your prompt reply.

After viewing "manifests/4.2/cluster-logging.v4.2.0.clusterserviceversion.yaml" on the 
cluster-logging-operator pod, I confirm that the "minKubeVersion: 1.16.0" line that 
was added on GitHub is missing from the manifest file on the CLO pod on my cluster.

I tried to edit the manifest file through "oc rsh" and vi, but the file is read-only 
and I can't get root access on this pod.

What is the correct way to edit the manifest YAML file so that it includes the 
missing minKubeVersion?
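
For example, would it be correct to patch the ClusterServiceVersion resource 
directly instead of touching the file inside the pod? A rough sketch of what I had 
in mind (the CSV name below is a placeholder; I would look it up first):
-------
# find the exact CSV name in the logging namespace
oc get csv -n openshift-logging

# add the missing field to the CSV spec (name is a placeholder)
oc patch csv <clusterlogging-csv-name> -n openshift-logging \
  --type merge -p '{"spec":{"minKubeVersion":"1.16.0"}}'
-------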

Thank you.


-----Original Message-----
From: "Rich Megginson" [rmegg...@redhat.com]
Date: 11/06/2019 01:21 PM
To: users@lists.openshift.redhat.com
Subject: Re: The cluster-logging pods (Elasticsearch, Kibana, Fluentd) don't
        start - Openshift 4.1

are you running into https://bugzilla.redhat.com/show_bug.cgi?id=1766343 ?

On 11/6/19 9:19 AM, Full Name wrote:
> Hi all,
> 
> I'm trying to deploy logging on an OpenShift 4.1.21 cluster using the procedure 
> described in the following link: 
> https://docs.openshift.com/container-platform/4.1/logging/efk-logging.html.
> Everything goes fine up to that point, but the logging pods never start and stay 
> in Pending state. I get the following error (0/7 nodes are available: 7 
> node(s) didn't match node selector) for all 5 logging pods (2 x 
> elasticsearch, 2 x kibana, 1 x curator).
>
> The logging pods don't start with or without a nodeSelector in the 
> ClusterLogging instance.
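>
> For reference, the node selector that actually landed on the pending pods can be 
> checked directly with something like this (pod name taken from the "oc get pods" 
> listing further down):
> -------
> # print the effective nodeSelector on one of the Pending pods
> oc get pod elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk -n openshift-logging \
>   -o jsonpath='{.spec.nodeSelector}'
>
> # and list the nodes that carry the infra role label it should match
> oc get nodes -l node-role.kubernetes.io/infra=
> -------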
> 
> -----------------------------------------------------------
> The ClusterLogging instance YAML:
> -------
> apiVersion: logging.openshift.io/v1
> kind: ClusterLogging
> metadata:
>    creationTimestamp: '2019-11-04T21:20:57Z'
>    generation: 37
>    name: instance
>    namespace: openshift-logging
>    resourceVersion: '569806'
>    selfLink: >-
>      /apis/logging.openshift.io/v1/namespaces/openshift-logging/clusterloggings/instance
>    uid: fdc0e971-ff48-11e9-a3f8-0af5a0903ee4
> spec:
>    collection:
>      logs:
>        fluentd:
>          nodeSelector:
>            kubernetes.io/os: linux
>            node-role.kubernetes.io/infra: ''
>          resources: null
>        rsyslog:
>          resources: null
>        type: fluentd
>    curation:
>      curator:
>        nodeSelector:
>          kubernetes.io/os: linux
>          node-role.kubernetes.io/infra: ''
>        resources: null
>        schedule: 30 3 * * *
>      type: curator
>    logStore:
>      elasticsearch:
>        nodeCount: 2
>        nodeSelector:
>          node-role.kubernetes.io/infra: ''
>        redundancyPolicy: SingleRedundancy
>        resources:
>          requests:
>            cpu: 500m
>            memory: 4Gi
>        storage:
>          size: 20G
>          storageClassName: gp2
>      type: elasticsearch
>    managementState: Managed
>    visualization:
>      kibana:
>        nodeSelector:
>          kubernetes.io/os: linux
>          node-role.kubernetes.io/infra: ''
>        proxy:
>          resources: null
>        replicas: 1
>        resources: null
>      type: kibana
> status:
>    collection:
>      logs:
>        fluentdStatus:
>          daemonSet: fluentd
>          nodes: {}
>          pods:
>            failed: []
>            notReady: []
>            ready: []
>        rsyslogStatus:
>          nodes: null
>          daemonSet: ''
>          pods: null
>    curation:
>      curatorStatus:
>        - clusterCondition:
>            curator-1572924600-pwbf8:
>              - lastTransitionTime: '2019-11-05T03:30:01Z'
>                message: '0/7 nodes are available: 7 node(s) didn''t match 
> node selector.'
>                reason: Unschedulable
>                status: 'True'
>                type: Unschedulable
>          cronJobs: curator
>          schedules: 30 3 * * *
>          suspended: false
>    logStore:
>      elasticsearchStatus:
>        - ShardAllocationEnabled: shard allocation unknown
>          cluster:
>            numDataNodes: 0
>            initializingShards: 0
>            numNodes: 0
>            activePrimaryShards: 0
>            status: cluster health unknown
>            pendingTasks: 0
>            relocatingShards: 0
>            activeShards: 0
>            unassignedShards: 0
>          clusterName: elasticsearch
>          nodeConditions:
>            elasticsearch-cdm-wgsf9ygw-1:
>              - lastTransitionTime: '2019-11-04T22:33:32Z'
>                message: '0/7 nodes are available: 7 node(s) didn''t match 
> node selector.'
>                reason: Unschedulable
>                status: 'True'
>                type: Unschedulable
>            elasticsearch-cdm-wgsf9ygw-2:
>              - lastTransitionTime: '2019-11-04T22:33:33Z'
>                message: '0/7 nodes are available: 7 node(s) didn''t match 
> node selector.'
>                reason: Unschedulable
>                status: 'True'
>                type: Unschedulable
>          nodeCount: 2
>          pods:
>            client:
>              failed: []
>              notReady:
>                - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
>                - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
>              ready: []
>            data:
>              failed: []
>              notReady:
>                - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
>                - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
>              ready: []
>            master:
>              failed: []
>              notReady:
>                - elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk
>                - elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph
>              ready: []
>    visualization:
>      kibanaStatus:
>        - clusterCondition:
>            kibana-99dc6bb95-5848h:
>              - lastTransitionTime: '2019-11-04T22:00:49Z'
>                message: '0/7 nodes are available: 7 node(s) didn''t match 
> node selector.'
>                reason: Unschedulable
>                status: 'True'
>                type: Unschedulable
>            kibana-fb96dc875-wk4w5:
>              - lastTransitionTime: '2019-11-04T22:33:26Z'
>                message: '0/7 nodes are available: 7 node(s) didn''t match 
> node selector.'
>                reason: Unschedulable
>                status: 'True'
>                type: Unschedulable
>          deployment: kibana
>          pods:
>            failed: []
>            notReady:
>              - kibana-99dc6bb95-5848h
>              - kibana-fb96dc875-wk4w5
>            ready: []
>          replicaSets:
>            - kibana-5d77fb4b85
>            - kibana-99dc6bb95
>            - kibana-fb96dc875
>          replicas: 1
> -------
> 
> The 2 infra nodes are labeled correctly: node-role.kubernetes.io/infra: ''. (The 
> label/verify commands are sketched right after the listing below.)
> -------------
> [mohamed.hamouch-capgemini.com@clientvm 0 ~]$ oc get nodes --show-labels
> NAME                                            STATUS   ROLES          AGE 
> VERSION             LABELS
> ip-10-0-130-209.eu-central-1.compute.internal   Ready    master         33h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,kubernetes.io/hostname=ip-10-0-130-209,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-134-187.eu-central-1.compute.internal   Ready    worker         33h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,kubernetes.io/hostname=ip-10-0-134-187,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-141-221.eu-central-1.compute.internal   Ready    infra,worker   31h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1a,infra=infra,kubernetes.io/hostname=ip-10-0-141-221,node-role.kubernetes.io/infra=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-150-157.eu-central-1.compute.internal   Ready    worker         33h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.large,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,kubernetes.io/hostname=ip-10-0-150-157,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-152-34.eu-central-1.compute.internal    Ready    master         33h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,kubernetes.io/hostname=ip-10-0-152-34,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-159-5.eu-central-1.compute.internal     Ready    infra,worker   31h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m4.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1b,infra=infra,kubernetes.io/hostname=ip-10-0-159-5,node-role.kubernetes.io/infra=,node-role.kubernetes.io/worker=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> ip-10-0-165-162.eu-central-1.compute.internal   Ready    master         33h   
> v1.13.4+a80aad556   
> beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m5.xlarge,beta.kubernetes.io/os=linux,failure-domain.beta.kubernetes.io/region=eu-central-1,failure-domain.beta.kubernetes.io/zone=eu-central-1c,kubernetes.io/hostname=ip-10-0-165-162,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos,node.openshift.io/os_version=4.1
> -------------
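>
> (The infra label can be applied and verified with plain oc commands, something 
> along these lines -- node names as in the listing above:)
> -------------
> # apply the empty-value infra role label to the two infra nodes
> oc label node ip-10-0-141-221.eu-central-1.compute.internal node-role.kubernetes.io/infra=''
> oc label node ip-10-0-159-5.eu-central-1.compute.internal node-role.kubernetes.io/infra=''
>
> # verify which nodes carry the label
> oc get nodes -l node-role.kubernetes.io/infra=
> -------------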
> 
> The logging pods (elasticsearch, kibana and curator):
> [mohamed.hamouch-capgemini.com@clientvm 0 ~]$ oc get pods --show-labels -n 
> openshift-logging
> NAME                                            READY   STATUS    RESTARTS   
> AGE LABELS
> cluster-logging-operator-bd64d698d-8xzxw        1/1     Running   0          
> 25h   name=cluster-logging-operator,pod-template-hash=bd64d698d
> curator-1572924600-pwbf8                        0/1     Pending   0          
> 18h   
> component=curator,controller-uid=8cc4c661-ff7c-11e9-b9e8-0226c8b0ff44,job-name=curator-1572924600,logging-infra=curator,provider=openshift
> elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk   0/2     Pending   0          
> 23h   
> cluster-name=elasticsearch,component=elasticsearch,es-node-client=true,es-node-data=true,es-node-master=true,node-name=elasticsearch-cdm-wgsf9ygw-1,pod-template-hash=6f49f466dc,tuned.openshift.io/elasticsearch=true
> elasticsearch-cdm-wgsf9ygw-2-5777666679-2z4ph   0/2     Pending   0          
> 23h   
> cluster-name=elasticsearch,component=elasticsearch,es-node-client=true,es-node-data=true,es-node-master=true,node-name=elasticsearch-cdm-wgsf9ygw-2,pod-template-hash=5777666679,tuned.openshift.io/elasticsearch=true
> kibana-99dc6bb95-5848h                          0/2     Pending   0          
> 24h   
> component=kibana,logging-infra=kibana,pod-template-hash=99dc6bb95,provider=openshift
> kibana-fb96dc875-wk4w5                          0/2     Pending   0          
> 23h   
> component=kibana,logging-infra=kibana,pod-template-hash=fb96dc875,provider=openshift
> ---------
> 
> Where should I look to fix this issue?
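>
> (For completeness, the scheduler events for the Pending pods can be pulled with, 
> e.g.:)
> ---------
> oc describe pod elasticsearch-cdm-wgsf9ygw-1-6f49f466dc-57dbk -n openshift-logging
> oc get events -n openshift-logging --sort-by=.lastTimestamp
> ---------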
> 
> Thank you very much for your help.
> 
> haed98.
> 

_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
