Re: Logstash stop communicating with Elasticsearch

2014-08-26 Thread Jilles van Gurp
I had some issues with Logstash as well and ended up modifying the 
elasticsearch_http plugin to tell me what was going on. It turned out my 
cluster was red because my index template required more replicas than were 
possible :-). The problem is that Logstash does not fail very gracefully, 
and its logging is not great either (which I find ironic for a 
logging-centric product). So I modified it to simply log the actual 
Elasticsearch response, which was a 503 Service Unavailable. From there it 
was pretty clear what to fix.

I filed a bug report plus a pull request for this, but it seems nobody has 
done anything with it so far: https://github.com/elasticsearch/logstash/issues/1367
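For what it's worth, the change amounts to something like the following. This is a hypothetical sketch of the idea, not the actual plugin code; `Response` and `check_bulk_response` are made-up names:

```ruby
# Hypothetical sketch: on a non-2xx bulk response, surface the actual
# Elasticsearch reply in the error instead of just the status code.
# `Response` and `check_bulk_response` are illustrative names, not the
# real plugin API.
Response = Struct.new(:code, :body)

def check_bulk_response(response)
  return if (200..299).cover?(response.code)
  # Including the body makes a 503 (red cluster) or 404 (missing/closed
  # index) immediately visible in the Logstash log.
  raise "Non-OK response code from Elasticsearch: #{response.code}, " \
        "body: #{response.body}"
end
```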

Jilles




-- 
You received this message because you are subscribed to the Google Groups 
elasticsearch group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to elasticsearch+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/elasticsearch/5fd9e3e2-c38b-4678-995a-80787375267f%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: Logstash stop communicating with Elasticsearch

2014-08-23 Thread 凌波清风
Hello, 
I have run into the same problem you describe; in my case the error occurs 
every morning. I don't know how to solve it and am hoping for some help.

Thanks.




Logstash stop communicating with Elasticsearch

2014-07-18 Thread Alexandre Fricker
 

Everything was working fine until 4 a.m. this morning, when Logstash stopped 
sending new logs to Elasticsearch. When I stop and then restart the Logstash 
process, it reprocesses a bulk of new log lines, and as soon as it starts 
sending them to Elasticsearch it writes this message again and again:

{:timestamp=>"2014-07-18T09:46:29.593000+0200",
 :message=>"Failed to flush outgoing items",
 :outgoing_count=>86,
 :exception=>#<RuntimeError: Non-OK response code from Elasticsearch: 404>,
 :backtrace=>[
   "/soft/sth/lib/logstash/outputs/elasticsearch/protocol.rb:127:in `bulk_ftw'",
   "/soft/sth/lib/logstash/outputs/elasticsearch/protocol.rb:80:in `bulk'",
   "/soft/sth/lib/logstash/outputs/elasticsearch.rb:321:in `flush'",
   "/soft/sth/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'",
   "org/jruby/RubyHash.java:1339:in `each'",
   "/soft/sth/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'",
   "/soft/sth/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'",
   "/soft/sth/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:112:in `buffer_initialize'",
   "org/jruby/RubyKernel.java:1521:in `loop'",
   "/soft/sth/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:110:in `buffer_initialize'"],
 :level=>:warn}

But when I check the Elasticsearch status in ElasticHQ, everything is green 
and OK.

Since the day before, nothing had changed except that I added a new type of 
data, and only at about 15 log lines per minute.
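Incidentally, the reason the same warning repeats is the retry behaviour of the buffering layer: items that fail to flush stay in the buffer and the flush is retried, so a persistent 404 warns on every attempt. Roughly (an assumed simplification, not the actual stud gem code):

```ruby
# Assumed simplification of the buffer's retry behaviour (not the actual
# stud gem code): items that fail to flush stay buffered and are retried,
# so a persistent error produces the same warning over and over.
def flush_with_retry(items, max_attempts)
  warnings = 0
  max_attempts.times do
    begin
      yield items        # attempt the bulk request
      return warnings    # success: buffer is drained, retrying stops
    rescue RuntimeError
      warnings += 1      # failure: items stay buffered; warn and retry
    end
  end
  warnings
end
```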



Re: Logstash stop communicating with Elasticsearch

2014-07-18 Thread Alexandre Fricker
After quite a bit of searching and trying, I finally found the root cause: 
because of a log timestamping mistake, Logstash was trying to send an event 
to the closed index for 2014-07-07. I reopened that index and the situation 
unblocked immediately. I just have 20 hours of backlogged logs to process; 
fortunately Redis held up.
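The failure mode is easy to see in miniature: Logstash derives the daily index name from the event's timestamp (the default `logstash-%{+YYYY.MM.dd}` pattern), so a mis-parsed timestamp routes the event to an old daily index, and, as found above, writing to it fails once that index has been closed. A minimal sketch:

```ruby
require "time"

# How the daily index name is derived from an event timestamp (the default
# logstash-YYYY.MM.dd pattern). A mis-parsed timestamp routes the event to
# an old daily index; if that index has since been closed, the bulk write
# fails, as described above.
def index_for(timestamp, prefix = "logstash-")
  prefix + Time.parse(timestamp).utc.strftime("%Y.%m.%d")
end

index_for("2014-07-18T09:46:29+02:00")  # => "logstash-2014.07.18"
index_for("2014-07-07T02:00:00+02:00")  # => "logstash-2014.07.07" (closed)
```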


