Hi Eric,
I think I got it.
DNS was blocked on the master nodes. After I allowed it, ES no longer throws the 
resolving warning and Kibana boots up as expected.
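
In case anyone else runs into this, name resolution against the masters can be 
sanity-checked with something like the following (the master IP and pod name are 
placeholders; the master DNS may listen on port 53 or 8053 depending on the 
release, and getent may not be present in every image):

    # from a node, query the master's DNS directly
    $ dig +short kubernetes.default.svc.cluster.local @<master-ip>
    # from inside a logging pod, confirm that cluster service names resolve
    $ oc exec <logging-fluentd-pod> -- getent hosts logging-es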

Thanks one more time for your help!
Greetings,
   Sebastian



On 20 Apr 2016, at 9:54 PM, Eric Wolinetz <ewoli...@redhat.com> wrote:

Hi Sebastian,

Your Elasticsearch instance does not seem to have started up completely within 
the pod you showed logs for.  Kibana will fail to start up if it is unable to 
reach its Elasticsearch instance after a certain period of time.
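
A quick way to rule out basic connectivity problems is to check that the 
Elasticsearch service exists and resolves from the Kibana pod, for example 
(logging-es is the service name the deployer creates by default; getent is 
assumed to be available in the image):

    $ oc get svc logging-es
    $ oc exec logging-kibana-1-uwob1 -c kibana -- getent hosts logging-es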

Can you send some more of your Elasticsearch logs?  It looks like it's currently 
recovering/initializing.  Do you see any other ERROR messages in there?
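
Something like this should pull them (adjust the selector if your pods are 
labeled differently):

    $ oc get pods -l component=es
    $ oc logs <es-pod-name>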

Your Fluentd errors look to be something else. What does the following look 
like?
    $ oc describe pod -l component=fluentd
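
It might also be worth checking, from inside one of the Fluentd pods, which API 
endpoint and resolver the metadata plugin ends up using; roughly (the pod name 
is a placeholder, and env/cat are assumed to be available in the image):

    $ oc exec <fluentd-pod> -- env | grep KUBERNETES_SERVICE
    $ oc exec <fluentd-pod> -- cat /etc/resolv.conf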

On Wed, Apr 20, 2016 at 2:54 AM, Sebastian Wieseler <sebast...@myrepublic.com.sg> wrote:
Dear community,
I followed the guide 
https://docs.openshift.org/latest/install_config/aggregate_logging.html, but the 
Kibana pod keeps ending up in an Error state:

NAME                          READY     STATUS    RESTARTS   AGE
logging-kibana-1-uwob1        1/2       Error     12         43m


$ oc logs logging-kibana-1-uwob1  -c kibana
{"name":"Kibana","hostname":"logging-kibana-1-uwob1","pid":7,"level":50,"err":{"message":"Request
 Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 
5000ms\n    at null.<anonymous> 
(/opt/app-root/src/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n
    at Timer.listOnTimeout [as ontimeout] 
(timers.js:112:15)"},"msg":"","time":"2016-04-20T07:16:15.760Z","v":0}
{"name":"Kibana","hostname":"logging-kibana-1-uwob1","pid":7,"level":60,"err":{"message":"Request
 Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 
5000ms\n    at null.<anonymous> 
(/opt/app-root/src/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n
    at Timer.listOnTimeout [as ontimeout] 
(timers.js:112:15)"},"msg":"","time":"2016-04-20T07:16:15.762Z","v":0}
[root@MRNZ-TS8-OC-MASTER-01 glusterfs]# oc logs logging-kibana-1-uwob1 -c kibana
{"name":"Kibana","hostname":"logging-kibana-1-uwob1","pid":7,"level":50,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n    at null.<anonymous> (/opt/app-root/src/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-04-20T07:38:40.789Z","v":0}
{"name":"Kibana","hostname":"logging-kibana-1-uwob1","pid":7,"level":60,"err":{"message":"Request Timeout after 5000ms","name":"Error","stack":"Error: Request Timeout after 5000ms\n    at null.<anonymous> (/opt/app-root/src/src/node_modules/elasticsearch/src/lib/transport.js:282:15)\n    at Timer.listOnTimeout [as ontimeout] (timers.js:112:15)"},"msg":"","time":"2016-04-20T07:38:40.790Z","v":0}


The Elasticsearch pod is running, but its log shows:
[2016-04-20 06:57:03,910][ERROR][io.fabric8.elasticsearch.plugin.acl.DynamicACLFilter] [Baphomet] Exception encountered when seeding initial ACL
org.elasticsearch.cluster.block.ClusterBlockException: blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];
        at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:151)
        at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction.checkGlobalBlock(TransportShardSingleOperationAction.java:103)
        at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:132)
        at org.elasticsearch.action.support.single.shard.TransportShardSingleOperationAction$AsyncSingleAction.<init>(TransportShardSingleOperationAction.java:116)
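
For reference, the cluster state behind that "state not recovered / initialized" 
block can be inspected with something along these lines (the pod name and the 
cert/key paths are placeholders; Elasticsearch in this deployment is secured 
with client certificates and the exact mount paths depend on the image):

    $ oc exec <es-pod> -- curl -s --cacert <admin-ca> --cert <admin-cert> --key <admin-key> \
        'https://localhost:9200/_cluster/health?pretty'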

The Fluentd pod is running too, but its log shows:
2016-04-20 07:47:18 +0000 [error]: fluentd main process died unexpectedly. restarting.
2016-04-20 07:47:48 +0000 [error]: unexpected error error="getaddrinfo: Name or service not known"
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `initialize'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `open'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:878:in `block in connect'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/timeout.rb:52:in `timeout'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:877:in `connect'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:862:in `do_start'
  2016-04-20 07:47:48 +0000 [error]: /usr/share/ruby/net/http.rb:851:in `start'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/rest-client-1.8.0/lib/restclient/request.rb:413:in `transmit'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/rest-client-1.8.0/lib/restclient/request.rb:176:in `execute'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/rest-client-1.8.0/lib/restclient/request.rb:41:in `execute'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/rest-client-1.8.0/lib/restclient/resource.rb:51:in `get'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/kubeclient-1.1.2/lib/kubeclient/common.rb:310:in `block in api'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/kubeclient-1.1.2/lib/kubeclient/common.rb:51:in `handle_exception'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/kubeclient-1.1.2/lib/kubeclient/common.rb:309:in `api'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/kubeclient-1.1.2/lib/kubeclient/common.rb:304:in `api_valid?'
  2016-04-20 07:47:48 +0000 [error]: /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.18.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:134:in `configure'
…
2016-04-20 07:50:42 +0000 [warn]: emit transaction failed: error_class=Fluent::ConfigError error="Exception encountered fetching metadata from Kubernetes API endpoint: getaddrinfo: Name or service not known" tag="kubernetes.var.log.containers.docker-registry-2-9tehj_default_registry-957340d6e4686b63bdcdccf18f1b0b4054d1faab9724097928f8102c3190f312.log"
  2016-04-20 07:50:42 +0000 [warn]: /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.18.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:232:in `rescue in start_watch'
  2016-04-20 07:50:42 +0000 [warn]: /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.18.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:228:in `start_watch'
  2016-04-20 07:50:42 +0000 [warn]: /opt/app-root/src/gems/fluent-plugin-kubernetes_metadata_filter-0.18.0/lib/fluent/plugin/filter_kubernetes_metadata.rb:140:in `block in configure'
2016-04-20 07:51:11 +0000 [warn]: emit transaction failed: error_class=SocketError error="getaddrinfo: Name or service not known" tag="kubernetes.var.log.containers.router-1-94cv3_default_router-3e8bc8d6d4ef8a52f8f46a7e83b5a510a4f1c8d63658cc49b3ce506822e84811.log"



Any idea why Kibana is not starting properly?

Greetings,
   Sebastian



_______________________________________________
users mailing list
users@lists.openshift.redhat.com
http://lists.openshift.redhat.com/openshiftmm/listinfo/users
