+Piotr

On Tue, Aug 23, 2016 at 4:18 PM, Sean Jezewski <[email protected]> wrote:

> I'm trying to set up logging on my GKE cluster by following the
> suggestions from this thread
> <https://groups.google.com/forum/#!searchin/kubernetes-users/fluentd%7Csort:relevance/kubernetes-users/Q1nvl8IAbqc/BXOZNBUWAgAJ>,
> but have hit a wall.
>
> Specifically, I've:
>
> 1) Created a new cluster on GKE with Google Cloud Logging disabled
> 2) Ran `kubectl create -f ...` on each of the following 5 components:
>
> # From: https://github.com/kubernetes/kubernetes/tree/dae5ac482861382e18b1e7b2943b1b7f333c6a2a/cluster/addons/fluentd-elasticsearch
> es-controller.yaml
> es-service.yaml
> kibana-controller.yaml
> kibana-service.yaml
>
> # From: https://github.com/kubernetes/kubernetes/blob/714db746118923a3918e28aacb4564e91afdd368/cluster/saltbase/salt/fluentd-es/fluentd-es.yaml
> fluentd-es.yaml
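Concretely, the create sequence was roughly the following (assuming each manifest was saved locally under the file name shown above):

```shell
# Fetch each manifest from the pinned commits linked above, then create
# the resources; these addon manifests target the kube-system namespace.
kubectl create -f es-controller.yaml
kubectl create -f es-service.yaml
kubectl create -f kibana-controller.yaml
kubectl create -f kibana-service.yaml
kubectl create -f fluentd-es.yaml
```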
>
> However I'm seeing some odd results. My services, RCs, and pods for Kibana
> and Elasticsearch are created, then die off very quickly.
>
> Here's what it looks like before those components die off:
>
> $ kubectl get all --namespace=kube-system
> NAME                          DESIRED   CURRENT   AGE
> elasticsearch-logging-v1      2         2         4s
> kibana-logging-v1             1         1         3s
> kube-dns-v17.1                2         2         2h
> kubernetes-dashboard-v1.1.1   1         1         2h
> l7-default-backend-v1.0       1         1         2h
> NAME                    CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
> default-http-backend    10.3.245.50    <nodes>       80/TCP          2h
> elasticsearch-logging   10.3.242.22    <none>        9200/TCP        5s
> heapster                10.3.246.252   <none>        80/TCP          2h
> kibana-logging          10.3.241.94    <nodes>       5601/TCP        3s
> kube-dns                10.3.240.10    <none>        53/UDP,53/TCP   2h
> kubernetes-dashboard    10.3.242.135   <none>        80/TCP          2h
> NAME                                                           READY   STATUS    RESTARTS   AGE
> elasticsearch-logging-v1-lq5gx                                 1/1     Running   0          4s
> elasticsearch-logging-v1-p78mn                                 1/1     Running   0          4s
> fluentd-elasticsearch                                          1/1     Running   0          6s
> heapster-v1.1.0-2096339923-4j1vu                               2/2     Running   0          2h
> kibana-logging-v1-km83q                                        1/1     Running   0          3s
> kube-dns-v17.1-aqteq                                           3/3     Running   0          2h
> kube-dns-v17.1-qcm9z                                           3/3     Running   0          2h
> kube-proxy-gke-pachyderm-log-test-default-pool-8429ab58-1sio   1/1     Running   0          2h
> kube-proxy-gke-pachyderm-log-test-default-pool-8429ab58-b7tw   1/1     Running   0          2h
> kube-proxy-gke-pachyderm-log-test-default-pool-8429ab58-eead   1/1     Running   0          2h
> kubernetes-dashboard-v1.1.1-aajls                              1/1     Running   0          2h
> l7-default-backend-v1.0-filol                                  1/1     Running   0          2h
>
> Normally I'd get the logs using the `--previous` flag, but that doesn't seem
> to work here. However, if I grab the logs fast enough, I can see a few
> things, but nothing out of the ordinary.
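The flag I'd normally reach for is `--previous`, which, as I understand it, only returns logs for a container that restarted inside a still-existing pod, so it can't recover anything once the pod itself is gone:

```shell
# Only useful while the pod object still exists and the container has
# restarted at least once; logs from a deleted pod are not retrievable.
kubectl logs elasticsearch-logging-v1-lq5gx --namespace=kube-system --previous
```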
>
> $ kubectl logs elasticsearch-logging-v1-lq5gx --namespace=kube-system
> I0823 21:16:06.814369       5 elasticsearch_logging_discovery.go:42] Kubernetes Elasticsearch logging discovery
> I0823 21:16:07.829359       5 elasticsearch_logging_discovery.go:75] Found ["10.0.1.4" "10.0.2.6"]
>
> Which looks pretty normal to me. Grabbing the logs from the Kibana pod
> quickly, I see:
>
> $ kubectl logs kibana-logging-v1-km83q --namespace=kube-system
> ELASTICSEARCH_URL=http://elasticsearch-logging:9200
> {"@timestamp":"2016-08-23T21:16:10.223Z","level":"error","node_env":"production","error":"Request error, retrying -- connect ECONNREFUSED"}
> {"@timestamp":"2016-08-23T21:16:10.227Z","level":"warn","message":"Unable to revive connection: http://elasticsearch-logging:9200/","node_env":"production"}
> {"@timestamp":"2016-08-23T21:16:10.227Z","level":"warn","message":"No living connections","node_env":"production"}
> {"@timestamp":"2016-08-23T21:16:10.229Z","level":"info","message":"Unable to connect to elasticsearch at http://elasticsearch-logging:9200. Retrying in 2.5 seconds.","node_env":"production"}
>
> Which makes sense, considering the Elasticsearch pod dies off very
> quickly.
>
> The fluentd pod reports an error connecting to Elasticsearch, but again,
> that's not surprising given that Elasticsearch itself dies off so quickly.
>
> Clearly, I'm missing some steps in configuring these services to connect
> to each other. Any advice along these lines is appreciated.
>
> I know that I should probably be using a DaemonSet to spin up the fluentd
> pods, but I thought I could get these components wired together before
> worrying about that step. Maybe that's not the case. Either way, I'd
> appreciate any pointers on setting that up as well.
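For reference, my rough understanding of the DaemonSet version, adapted from the fluentd-es.yaml pod spec linked above; the apiVersion, image tag, and labels here are my guesses for this Kubernetes version, so please correct anything that's off:

```yaml
# Sketch of a fluentd DaemonSet (one fluentd pod per node, reading the
# node's container logs from the host filesystem). Not tested; the image
# tag and apiVersion are assumptions, not copied from the repo.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        k8s-app: fluentd-es
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: gcr.io/google_containers/fluentd-elasticsearch:1.20
        resources:
          limits:
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
```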
>
> --
> You received this message because you are subscribed to the Google Groups
> "Kubernetes user discussion and Q&A" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at https://groups.google.com/group/kubernetes-users.
> For more options, visit https://groups.google.com/d/optout.
>
