My guess is that search actions got stuck and are blocking the subsequent ones, which results in the search queue filling up. The cause may be printed in the server logs.
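For what it's worth, something along these lines (an untested sketch; the default localhost:9200 endpoint is an assumption) would show whether the search queue really is backing up and rejecting requests:

    import json
    import urllib.request

    # Node stats expose the search thread pool: "queue" is the number of
    # waiting search tasks, "rejected" counts requests dropped once the
    # queue (capacity 1000 by default) is full.
    with urllib.request.urlopen("http://localhost:9200/_nodes/stats/thread_pool") as resp:
        stats = json.loads(resp.read().decode("utf-8"))

    for node_id, node in stats["nodes"].items():
        search = node["thread_pool"]["search"]
        print(node.get("name", node_id),
              "queue:", search["queue"],
              "active:", search["active"],
              "rejected:", search["rejected"])

If "rejected" keeps climbing while "queue" sits at its limit, the node simply cannot keep up with the incoming searches.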
Setting replicas to 0 with just one node fixes the 15-of-30 total shards count, but that is an unrelated story (a rough sketch of that settings call is at the end of this message).

Jörg

On Fri, Feb 20, 2015 at 10:04 PM, Mark Walkom <markwal...@gmail.com> wrote:

> It means your cluster is probably overloaded.
> Your missing shards are probably replicas, which will never be assigned
> with a one-node cluster; the head plugin should show these.
>
> What is that massive spike right before the end of the graph? Are you also
> monitoring things like load and other OS-level stats?
>
> On 21 February 2015 at 03:32, Christopher Bourez <christopher.bou...@gmail.com> wrote:
>
>> I'm having the following problem:
>>
>> "SearchPhaseExecutionException[Failed to execute phase [query], all
>> shards failed; shardFailures {[rlrweRJAQJqaoKfFB8il5A][stt_prod][3]:
>> EsRejectedExecutionException[rejected execution (queue capacity 1000) on
>> org.elasticsearch.action.search.type.TransportSearchTypeAction
>>
>> It sounds very strange; when I restarted the server, it worked fine again.
>>
>> What could have happened?
>>
>> Here is my configuration:
>> - the ES version is 1.0.1
>> - I have 3 indexes, of respective sizes 2.5G, 1.7G and 250M; each one has 5 shards
>> - the cluster is only one instance (solo)
>> - the state of the cluster says 15 successful shards, 0 failed shards and
>>   30 total shards (where are the 15 missing shards?)
>> - in my settings, mlockall is set to true
>> - I enabled script.disable_dynamic: false and installed the _head and
>>   action-updatebyquery plugins
>> - the ES heap size is correctly set to 50% by the recipe, which I can confirm
>>   with the top command:
>>   5320 elastic+ 20 0 9.918g 4.788g 72980 S 7.6 65.3 29:49.42 java
>> - I'm using only 30% of the disk capacity
>>
>> My traffic is no more than 125 requests per minute:
>>
>> <https://lh5.googleusercontent.com/-DIPwVUIx868/VOdheEDlxvI/AAAAAAAADEk/dTL9V0-FPW0/s1600/requests.png>
>>
>> So if I understand correctly, each request can live 30s; how come I have a
>> queue of 1000?!
>> Can ES keep the requests in the queue while the shards have failed?
>> Why don't the shards come back?
>>
>> Thanks for your help (I don't usually use ES; I'm more used to Solr or CloudSearch).
>>
>> (I also posted it here:
>> https://github.com/elasticsearch/elasticsearch/issues/9792)
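P.S. The replica change mentioned above is a one-line settings update. A rough sketch (my own illustration; the endpoint and the catch-all _all index pattern are assumptions, adjust to your index names):

    import json
    import urllib.request

    # Drop replicas to 0 so the unassignable replica shards no longer
    # count against the 30-shard total on a single-node cluster.
    body = json.dumps({"index": {"number_of_replicas": 0}}).encode("utf-8")
    req = urllib.request.Request("http://localhost:9200/_all/_settings",
                                 data=body, method="PUT")
    req.add_header("Content-Type", "application/json")

    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read().decode("utf-8")))  # expect {"acknowledged": true}

The change takes effect immediately, and the cluster state should then report 15 of 15 shards.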