Re: elasticsearch high cpu usage every hour

2015-03-31 Thread Aaron Mefford
From what I can see in your graphs, I noticed a few things: you have a spike 
in search requests at that time, a spike in HTTP traffic, and a cache 
eviction right at the beginning of it.

Are you certain you don't have an external user with a cron job that runs 
at the top of the hour?  Perhaps a large scan-and-scroll query that dumps 
a lot of data?

Take a look at your network graphs to see if you have a correlated spike in 
traffic to your Elasticsearch cluster.  With a cluster that size I wouldn't 
expect you to have no users at all; you probably have quite a few.  It 
would not be unreasonable to expect that one such user is doing something 
beyond what you had intended and is putting stress on your system.
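
If you want to catch it in the act, one thing you could try (my sketch, not 
something from the original post; it assumes the stock REST APIs of ES 1.x 
and a node answering on localhost:9200) is to sample the cluster right at 
the top of the hour:

$ curl -s 'localhost:9200/_nodes/hot_threads?threads=5'
    # what the busiest CPUs are actually executing (queries, merges, or GC)

$ curl -s 'localhost:9200/_cat/thread_pool?v'
    # active/queue/rejected counts for the search, index, and bulk pools

$ curl -s 'localhost:9200/_stats/search?pretty'
    # cumulative query totals and open contexts; a big scan-and-scroll
    # dump shows up as open search contexts here

Diffing the query totals from just before and just after the hour mark 
tells you how many searches landed in that window.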


On Monday, March 30, 2015 at 8:37:54 PM UTC-6, vincent Park wrote:
>
> We have 8 clustered nodes, and each node has 1 replica. 
> The total document size is about 4 GB across 1,984,173 docs. 
>
> I am suffering from very high CPU usage (80%~90%) every hour. 
> It lasts for about 5 minutes. 
>
> There is no other process except ES on each server. 
> There is no other cron job, even at that time. 
>
> I thought something was wrong with the ES process, 
> maybe external attacks or a GC problem.. I don't know. 
>
> It happens every hour. 
> I don't know what's going on in Elasticsearch at that time!! 
> Somebody help me and tell me what is happening in there, please.. 
>
>
> $ ./elasticsearch -v 
> Version: 1.4.2, Build: 927caff/2014-12-16T14:11:12Z, JVM: 1.7.0_75 
> 
> $ java -version 
> java version "1.7.0_75" 
> Java(TM) SE Runtime Environment (build 1.7.0_75-b13) 
> Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode) 
>
> I also installed these plugins: HQ, bigdesk, head, kopf, sense. 
>
> Here are the bigdesk graphs at CPU peak time: 
> <http://elasticsearch-users.115913.n3.nabble.com/file/n4072788/es_cpu_high.png>


elasticsearch high cpu usage every hour

2015-03-30 Thread vincent park
We have 8 clustered nodes, and each node has 1 replica.
The total document size is about 4 GB across 1,984,173 docs.

I am suffering from very high CPU usage (80%~90%) every hour.
It lasts for about 5 minutes.

There is no other process except ES on each server.
There is no other cron job, even at that time.

I thought something was wrong with the ES process,
maybe external attacks or a GC problem.. I don't know.

It happens every hour.
I don't know what's going on in Elasticsearch at that time!!
Somebody help me and tell me what is happening in there, please..


$ ./elasticsearch -v 
Version: 1.4.2, Build: 927caff/2014-12-16T14:11:12Z, JVM: 1.7.0_75 

$ java -version 
java version "1.7.0_75" 
Java(TM) SE Runtime Environment (build 1.7.0_75-b13) 
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode) 

I also installed these plugins: HQ, bigdesk, head, kopf, sense.

Here are the bigdesk graphs at CPU peak time:
<http://elasticsearch-users.115913.n3.nabble.com/file/n4072788/es_cpu_high.png>
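
Since the spike lands at the same minute every hour, you could also collect 
evidence automatically. A minimal cron sketch (mine, not from the thread; it 
assumes curl on the box, a node on localhost:9200, and an arbitrary log path):

# sample hot threads once a minute for the first five minutes of every hour
0-5 * * * * curl -s 'localhost:9200/_nodes/hot_threads' >> /var/tmp/es_hot_threads_$(date +\%Y\%m\%d\%H\%M).log

Comparing those samples against one taken mid-hour should show whether the 
five-minute peak is query load, segment merging, or GC.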



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/elasticsearch-high-cpu-usage-every-hourly-tp4072788.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.



Re: elasticsearch high cpu usage

2014-07-03 Thread Michael Hart
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
> at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
> at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
> at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
> at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
> at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
> at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
> at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
> at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
> at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
> at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
> at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
> at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
> at java.lang.Thread.run(Unknown Source)


elasticsearch high cpu usage

2014-07-03 Thread vincent park
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/elasticsearch-high-cpu-usage-tp4059186.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.



elasticsearch high cpu usage

2014-07-03 Thread vincent park
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:74)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:318)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.elasticsearch.common.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
at java.lang.Thread.run(Unknown Source)



--
View this message in context: 
http://elasticsearch-users.115913.n3.nabble.com/elasticsearch-high-cpu-usage-tp4059189.html
Sent from the ElasticSearch Users mailing list archive at Nabble.com.



elasticsearch high cpu usage

2014-07-02 Thread vincent Park
Hi, 
I have 5 clustered nodes, and each node has 1 replica. 
The total document size is 216 MB across 853,000 docs. 
I am suffering from very high CPU usage 
every hour, and every early morning from about 05:00 to 09:00. 
You can see my cacti graph. 

There is only Elasticsearch on this server. 

I thought something was wrong with the ES process, 
but there are only a few server requests at CPU peak time, 
and there is no cron job at all. 

It happens every hour, and every early morning from about 05:00 to 09:00. 
I don't know what's going on in Elasticsearch at that time!! 
Somebody help me and tell me what is happening in there, please.. 

$ ./elasticsearch -v 
Version: 1.1.1, Build: f1585f0/2014-04-16T14:27:12Z, JVM: 1.7.0_55 

$ java -version 
java version "1.7.0_55" 
Java(TM) SE Runtime Environment (build 1.7.0_55-b13) 
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode) 

I also installed these plugins on Elasticsearch: 
HQ, bigdesk, head, kopf, sense 

ES log at CPU peak time: 



[2014-07-03 08:01:00,045][DEBUG][action.search.type] [node1] [search][4], node[GJjzCrLvQQ-ZRRoqL13MrQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@451f9e7c] lastShard [true]
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 300) on org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4@68ab486b
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
at java.util.concurrent.ThreadPoolExecutor.reject(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.execute(Unknown Source)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:293)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.onFirstPhaseResult(TransportSearchTypeAction.java:300)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:190)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:59)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:49)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:108)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:43)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:63)
at org.elasticsearch.client.node.NodeClient.execute(NodeClient.java:92)
at org.elasticsearch.client.support.AbstractClient.search(AbstractClient.java:212)
at org.elasticsearch.rest.action.search.RestSearchAction.handleRequest(RestSearchAction.java:98)
at org.elasticsearch.rest.RestController.executeHandler(RestController.java:159)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:142)
at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:121)
at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:83)
at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:291)
at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:43)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.elasticsearch.common.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.elasticsearch.common.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.elasticsearch.common.ne
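
The log itself narrows this down: at 08:01 the search thread pool's queue 
(capacity 300 here) overflowed, so Elasticsearch began rejecting searches, 
which is exactly what a burst of queries at the top of the hour looks like. 
A way to confirm it, and to buy some headroom (a sketch using the 1.x 
setting names; localhost:9200 is my assumption):

$ curl -s 'localhost:9200/_cat/thread_pool?v'
    # during the peak, watch search.queue fill up and search.rejected climb

# elasticsearch.yml (1.x syntax); a larger queue only absorbs the burst,
# it does not remove whatever is firing the queries every hour
threadpool.search.queue_size: 1000

Finding and pacing the hourly client is still the real fix; the queue size 
only changes how the overload fails.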
