Re: stuck thread problem?
We have seen a similar issue in our cluster: CPU usage and search time slowly climbed on the master node over a period of about a day, until we restarted it. Is there an easy way to confirm that it's indeed the same issue mentioned here? Below is the hot threads output for this node (version 0.90.12):

85.8% (857.7ms out of 1s) cpu usage by thread 'elasticsearch[cluster1][search][T#3]'
  8/10 snapshots sharing following 30 elements
    java.lang.ThreadLocal$ThreadLocalMap.set(ThreadLocal.java:429)
    java.lang.ThreadLocal$ThreadLocalMap.access$100(ThreadLocal.java:261)
    java.lang.ThreadLocal.set(ThreadLocal.java:183)
    org.elasticsearch.common.mvel2.optimizers.OptimizerFactory.clearThreadAccessorOptimizer(OptimizerFactory.java:114)
    org.elasticsearch.common.mvel2.MVELRuntime.execute(MVELRuntime.java:169)
    org.elasticsearch.common.mvel2.compiler.CompiledExpression.getDirectValue(CompiledExpression.java:123)
    org.elasticsearch.common.mvel2.compiler.CompiledExpression.getValue(CompiledExpression.java:119)
    org.elasticsearch.script.mvel.MvelScriptEngineService$MvelSearchScript.run(MvelScriptEngineService.java:191)
    org.elasticsearch.script.mvel.MvelScriptEngineService$MvelSearchScript.runAsDouble(MvelScriptEngineService.java:206)
    org.elasticsearch.common.lucene.search.function.ScriptScoreFunction.score(ScriptScoreFunction.java:54)
    org.elasticsearch.common.lucene.search.function.FunctionScoreQuery$CustomBoostFactorScorer.score(FunctionScoreQuery.java:175)
    org.apache.lucene.search.TopScoreDocCollector$OutOfOrderTopScoreDocCollector.collect(TopScoreDocCollector.java:140)
    org.apache.lucene.search.TimeLimitingCollector.collect(TimeLimitingCollector.java:153)
    org.apache.lucene.search.Scorer.score(Scorer.java:65)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
    org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
    org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:117)
    org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:244)
    org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
    org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
    org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
    org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:203)
    org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$2.run(TransportSearchTypeAction.java:186)
    java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    java.lang.Thread.run(Thread.java:744)
  2/10 snapshots sharing following 20 elements
    org.apache.lucene.search.FilteredDocIdSet$1.get(FilteredDocIdSet.java:65)
    org.apache.lucene.search.FilteredQuery$QueryFirstScorer.nextDoc(FilteredQuery.java:178)
    org.elasticsearch.common.lucene.search.function.FunctionScoreQuery$CustomBoostFactorScorer.nextDoc(FunctionScoreQuery.java:169)
    org.apache.lucene.search.Scorer.score(Scorer.java:64)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:621)
    org.elasticsearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:162)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:491)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:448)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:281)
    org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:269)
    org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:117)
    org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:244)
    org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:202)
    org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:80)
    org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:216)
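To check whether a dump matches this pattern without eyeballing every frame, one option is a small script over the text of /_nodes/hot_threads: the telltale combination is a busy [search] thread whose snapshots sit in java.lang.ThreadLocal together with MVEL's OptimizerFactory.clearThreadAccessorOptimizer. This is a minimal, hypothetical helper (the function name and threshold are illustrative, not an Elasticsearch API), shown against an abbreviated version of the dump above:

```python
# Hypothetical checker for the MVEL ThreadLocal pattern discussed in this
# thread. Feed it the plain text returned by GET /_nodes/hot_threads.
import re

# Frames that identify the suspected issue: ThreadLocal churn inside
# MVEL's per-thread accessor-optimizer cleanup.
MVEL_MARKERS = (
    "java.lang.ThreadLocal",
    "org.elasticsearch.common.mvel2.optimizers.OptimizerFactory.clearThreadAccessorOptimizer",
)

def looks_like_mvel_threadlocal_issue(hot_threads_text, cpu_threshold=50.0):
    """True if some search thread above cpu_threshold % CPU spends its
    snapshots inside MVEL's ThreadLocal optimizer cleanup."""
    # Each thread section starts like:
    #   85.8% (857.7ms out of 1s) cpu usage by thread 'elasticsearch[...][search][T#3]'
    for section in re.split(r"\n(?=\s*\d+\.\d+%)", hot_threads_text):
        m = re.match(r"\s*(\d+\.\d+)% .*cpu usage by thread '([^']+)'", section)
        if not m:
            continue
        cpu, name = float(m.group(1)), m.group(2)
        if cpu >= cpu_threshold and "[search]" in name and all(
            marker in section for marker in MVEL_MARKERS
        ):
            return True
    return False

# Abbreviated sample from the dump in this message:
sample = """\
85.8% (857.7ms out of 1s) cpu usage by thread 'elasticsearch[cluster1][search][T#3]'
  8/10 snapshots sharing following 30 elements
    java.lang.ThreadLocal$ThreadLocalMap.set(ThreadLocal.java:429)
    org.elasticsearch.common.mvel2.optimizers.OptimizerFactory.clearThreadAccessorOptimizer(OptimizerFactory.java:114)
"""
print(looks_like_mvel_threadlocal_issue(sample))  # True for this dump
```

A hot, stuck thread would keep matching across repeated hot_threads calls, whereas an ordinary expensive query would eventually drop off.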
ES default - async or sync
What is the default replication policy in Elasticsearch? Does it push changes to replicas asynchronously or synchronously? Or does it use a different mode for different operations?

asynchronous replication
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-index_.html#index-replication
"By default, the index operation only returns after all shards within the replication group have indexed the document (sync replication)."

optimistic concurrency control
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/optimistic-concurrency-control.html#optimistic-concurrency-control
"Elasticsearch is distributed. When documents are created, updated, or deleted, the new version of the document has to be replicated to other nodes in the cluster. Elasticsearch is also asynchronous and concurrent."

The book Elasticsearch: The Definitive Guide states under "Creating, indexing and deleting" that the default value for replication is synchronous, yet when discussing the update API it states that changes are forwarded to the replica shards asynchronously:

https://github.com/elasticsearch/elasticsearch-definitive-guide/blob/master/040_Distributed_CRUD/15_Create_index_delete.asciidoc
"replication *** The default value for replication is sync. This causes the primary shard to wait for successful responses from the replica shards before returning."

https://github.com/elasticsearch/elasticsearch-definitive-guide/blob/master/040_Distributed_CRUD/25_Partial_updates.asciidoc
"Document based replication: When a primary shard forwards changes to its replica shards, it doesn't forward the update request. Instead it forwards the new version of the full document. ***Remember that these changes are forwarded to the replica shards asynchronously and there is no guarantee that they will arrive in the same order that they were sent."
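The optimistic concurrency control quoted above boils down to a version check on every write: each document carries a version number, and a write that specifies a stale expected version is rejected (Elasticsearch answers 409 Conflict). Here is a minimal, self-contained sketch of that mechanism; the class and names are illustrative, not Elasticsearch internals:

```python
# Sketch of version-based optimistic concurrency control, as described in
# the Definitive Guide chapter linked above. Illustrative only.

class VersionConflictError(Exception):
    """Stands in for Elasticsearch's 409 Conflict response."""

class TinyDocStore:
    def __init__(self):
        self._docs = {}  # doc_id -> (version, body)

    def index(self, doc_id, body, expected_version=None):
        """Write body; if expected_version is given, it must match the
        current version or the write is rejected."""
        current = self._docs.get(doc_id)
        current_version = current[0] if current else 0
        if expected_version is not None and expected_version != current_version:
            raise VersionConflictError(
                f"expected {expected_version}, current {current_version}")
        new_version = current_version + 1
        self._docs[doc_id] = (new_version, body)
        return new_version

store = TinyDocStore()
v1 = store.index("1", {"views": 1})                        # creates version 1
v2 = store.index("1", {"views": 2}, expected_version=v1)   # bumps to version 2
try:
    # A writer still holding v1 is now stale, so its write is rejected.
    store.index("1", {"views": 99}, expected_version=v1)
except VersionConflictError as e:
    print("conflict:", e)
```

This check happens per shard copy; it is orthogonal to whether the primary then forwards the winning document version to replicas synchronously or asynchronously, which is exactly the distinction the two quoted passages are drawing.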