How can I fix the following error?
org.elasticsearch.transport.ReceiveTimeoutTransportException: [][inet[172.22.4.9:9300]][discovery/zen/unicast] request_id [9936241] timed out after [3750ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:356)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
[2014-03-14 12:29:30,771][WARN ][discovery.zen.ping.unicast] [Alibar] failed to send ping to [[#zen_unicast_1#][inet[172.22.4.9:9300]]]
[2014-03-14 12:27:21,015][DEBUG][action.search.type       ] [Guardsman] [logstash-2013.12.31][4], node[0y56HHo4Rv6kNdDFpOaauQ], [R], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@482b43cb] lastShard [true]
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4@595963a1
    at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:289)
    at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$3.onFailure(TransportSearchTypeAction.java:224)
    at org.elasticsearch.search.action.SearchServiceTransportAction$4.handleException(SearchServiceTransportAction.java:222)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleException(MessageChannelHandler.java:181)
    at org.elasticsearch.transport.netty.MessageChannelHandler.handlerResponseError(MessageChannelHandler.java:171)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:123)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
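The EsRejectedExecutionException above means the search thread pool's queue (capacity 1000) overflowed, so that node started rejecting searches instead of queueing them; the 3750ms unicast ping timeout looks like it is derived from the default 3s zen ping timeout. On the 0.90.x line both can be tuned in elasticsearch.yml. A sketch with illustrative values only, not recommendations (a bigger queue absorbs bursts but also hides an overloaded or stuck node):

```
# elasticsearch.yml -- illustrative values, not defaults or recommendations
# Search thread pool (0.90.x default queue_size is 1000, matching the log above)
threadpool.search.type: fixed
threadpool.search.size: 32          # worker threads; assumption, size to your cores
threadpool.search.queue_size: 2000

# Give slow/busy nodes more time to answer discovery pings (default is 3s)
discovery.zen.ping.timeout: 10s
```

Raising limits only buys headroom; if one node regularly wedges, the queue will fill again, so it is worth finding out why that node stalls.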
Re: My ES stuck once a week with no reason?
On usual days I am seeing this log all the time, on only one box:

[2014-03-13 16:36:31,205][DEBUG][action.admin.cluster.stats] [Lloigoroth] failed to execute on node [Mo2-u0RSQT6qqbMjW1CWag]
org.elasticsearch.transport.RemoteTransportException: [Ketch, Dan][inet[/172.22.4.23:9300]][cluster/stats/n]
Caused by: org.elasticsearch.transport.ActionNotFoundTransportException: No handler for action [cluster/stats/n]
    at org.elasticsearch.transport.netty.MessageChannelHandler.handleRequest(MessageChannelHandler.java:205)
    at org.elasticsearch.transport.netty.MessageChannelHandler.messageReceived(MessageChannelHandler.java:108)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)
    at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
    at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
    at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)

On Thursday, March 13, 2014 1:35:25 PM UTC-7, Khasan Bold wrote:
> I have a 4 box ES installed, the version that I am using is 0.90.10 but it fails once in a week. What I am getting is 50X error in kibana. When I check the log one of the nodes are stuck. It is fine after restart. The memories are fine for them. What else can I check ?

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group. To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/8a19ffd0-dd80-4334-9f57-7e3641688b52%40googlegroups.com. For more options, visit https://groups.google.com/d/optout.
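"No handler for action [cluster/stats/n]" usually means one node received a request for an action it does not know about, which typically happens when the nodes in the cluster are not all on the same release (the cluster-stats action was only added partway through the 0.90.x series). A quick way to spot a mixed cluster is to compare the "version" field each node reports via the nodes-info API. A minimal sketch; the sample response and node names below are made up, and in practice you would fetch the JSON from a live node (e.g. curl against a host of your own):

```python
import json

# Illustrative _nodes response; the node names and versions are invented.
# In reality, fetch this from any node, e.g.:  curl -s 'http://<host>:9200/_nodes'
sample = '''
{"cluster_name": "es",
 "nodes": {
   "abc": {"name": "Alibar",     "version": "0.90.10"},
   "def": {"name": "Ketch, Dan", "version": "0.90.3"}}}
'''

nodes = json.loads(sample)["nodes"]
versions = {n["name"]: n["version"] for n in nodes.values()}
print(versions)

# More than one distinct version means some actions exist on only some nodes,
# which is exactly what produces ActionNotFoundTransportException.
if len(set(versions.values())) > 1:
    print("version mismatch:", sorted(set(versions.values())))
```

If the versions differ, upgrading the stragglers to match should make this particular DEBUG noise go away.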
My ES stuck once a week with no reason?
I have a 4-box ES cluster running version 0.90.10, but it fails about once a week. When it does, I get 50X errors in Kibana, and when I check the logs, one of the nodes is stuck. It is fine again after a restart, and memory usage on the nodes looks fine. What else can I check?