Re: geo_point mapped field is returned as string in java api

Never mind, I solved it by doing something like this:

    GeoPoint latLng = GeoPoint.parseFromLatLon((String) sourceMap.get("lat_lng"));

At index time I now pass the value as a "lat,lng" string. Earlier I was passing a GeoPoint object, but that caused a major problem and messed up my mapping: when I read the document, changed some other field, and re-indexed it, it would raise this exception:

    ElasticsearchParseException("field must be either '" + LATITUDE + "', '" + LONGITUDE + "' or '" + GEOHASH + "'")

So I figured out that indexing by passing the string (even though the mapping is geo_point) works. Thanks.

On Thu, Dec 4, 2014 at 6:32 AM, Nicholas Knize nicholas.kn...@elasticsearch.com wrote:

    Can you post a code example from your use case for how you're inserting, retrieving, and reading the documents?

    - Nick

    On Tuesday, December 2, 2014 2:16:30 PM UTC-6, T Vinod Gupta wrote:

        Has anyone seen this problem? My mapping says that the field is of type geo_point, but when I read documents using the Java API and get the source map, the type of the field is String and I can't cast it to a GeoPoint.

            "lat_lng": { "type": "geo_point", "lat_lon": true }

        Thanks

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group. To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscr...@googlegroups.com. To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/CAHau4yvDyp5uO_bhj%3DFwNtNB_fPu9sHpbKfb-OMr4hBygL8bfQ%40mail.gmail.com.
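For readers without the Elasticsearch client on the classpath, the "lat,lng" round-trip can be sketched in plain Java. `parseLatLon` below is a hypothetical stand-in for what `GeoPoint.parseFromLatLon` does with a simple comma-separated string; it does not cover the geohash or object forms a real geo_point field accepts:

```java
// Minimal stand-in for parsing a "lat,lng" string, as it comes back from
// _source when the document was indexed with a comma-separated geo_point.
public class LatLonParse {
    // Returns {latitude, longitude}; latitude comes first in the string.
    static double[] parseLatLon(String latLon) {
        String[] parts = latLon.split(",");
        if (parts.length != 2) {
            throw new IllegalArgumentException("expected \"lat,lon\", got: " + latLon);
        }
        return new double[] {
            Double.parseDouble(parts[0].trim()),
            Double.parseDouble(parts[1].trim())
        };
    }

    public static void main(String[] args) {
        double[] p = parseLatLon("40.7128, -74.0060");
        System.out.println(p[0] + "," + p[1]);
    }
}
```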
geo_point mapped field is returned as string in java api

Has anyone seen this problem? My mapping says that the field is of type geo_point, but when I read documents using the Java API and get the source map, the type of the field is String and I can't cast it to a GeoPoint.

    "lat_lng": { "type": "geo_point", "lat_lon": true }

Thanks
Re: IndicesOptions ignoreUnavailable not working

I got no success and no responses from anybody. I'm still clueless on this one. Is this a bug? Thanks.

On Tue, Nov 11, 2014 at 4:23 PM, Girish Sastry g.sas...@gmail.com wrote:

    I'm also running into this issue. Is this expected behavior for `ignoreUnavailable`?

    On Wednesday, July 9, 2014 12:54:32 AM UTC-7, T Vinod Gupta wrote:

        Hi, I'm on ES 1.2.1 and the usage below is not working for me. Even if I pass the option to ignore unavailable or closed indices, I get an error response (ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];]). Is my usage wrong?

            IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, true, false, false);
            MultiSearchRequestBuilder multiSearchRequestBuilder =
                    client.prepareMultiSearch().setIndicesOptions(indicesOptions);
            SearchRequestBuilder searchRequestBuilder =
                    client.prepareSearch(...).setIndicesOptions(indicesOptions);
            multiSearchRequestBuilder = multiSearchRequestBuilder.add(searchRequestBuilder);
            MultiSearchResponse multiSearchResponse =
                    multiSearchRequestBuilder.execute().actionGet();

        This is the multi-search response in the debugger:

            multiSearchResponse = (org.elasticsearch.action.search.MultiSearchResponse) {
                responses: [ { error: ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];] } ]
            }

        Thanks
IndicesOptions ignoreUnavailable not working

Hi, I'm on ES 1.2.1 and the usage below is not working for me. Even if I pass the option to ignore unavailable or closed indices, I get an error response (ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];]). Is my usage wrong?

    IndicesOptions indicesOptions = IndicesOptions.fromOptions(true, true, false, false);
    MultiSearchRequestBuilder multiSearchRequestBuilder =
            client.prepareMultiSearch().setIndicesOptions(indicesOptions);
    SearchRequestBuilder searchRequestBuilder =
            client.prepareSearch(...).setIndicesOptions(indicesOptions);
    multiSearchRequestBuilder = multiSearchRequestBuilder.add(searchRequestBuilder);
    MultiSearchResponse multiSearchResponse =
            multiSearchRequestBuilder.execute().actionGet();

This is the multi-search response in the debugger:

    multiSearchResponse = (org.elasticsearch.action.search.MultiSearchResponse) {
        responses: [ { error: ClusterBlockException[blocked by: [FORBIDDEN/4/index closed];] } ]
    }

Thanks
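For reference, the four booleans in `fromOptions` are positional and easy to transpose; an annotated sketch (the argument names below follow the ES 1.x source, and this needs the ES client jar, so it is shown untested):

```java
// Argument order for IndicesOptions.fromOptions in ES 1.x:
IndicesOptions indicesOptions = IndicesOptions.fromOptions(
        true,   // ignoreUnavailable: skip missing or closed concrete indices
        true,   // allowNoIndices: don't fail when a wildcard matches nothing
        false,  // expandWildcardsOpen: expand wildcards to open indices
        false   // expandWildcardsClosed: expand wildcards to closed indices
);
```

Note that `ignoreUnavailable` applies to concrete index names; whether every transport action honors it in 1.2.x is exactly what this thread is questioning.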
Re: getting average aggregation in java

Thanks!

On Tue, May 13, 2014 at 1:32 PM, David Pilato da...@pilato.fr wrote:

    Have a look here: https://github.com/dadoonet/legacy-search/blob/06-compute/src/main/java/fr/pilato/demo/legacysearch/dao/ElasticsearchDao.java#L37

    Should help.

    -- David ;-) Twitter: @dadoonet / @elasticsearchfr / @scrutmydocs

    On 13 May 2014 at 22:21, T Vinod Gupta tvi...@readypulse.com wrote:

        Has anyone gotten an average aggregation working in Java? How do you use the API to create the aggregation request? It is quite confusing. I want to do the following in Java:

            "aggregations": { "avg_grade": { "avg": { "field": "score" } } }

        Thanks
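With the ES 1.x Java client, the JSON above maps onto `AggregationBuilders.avg`; a sketch assuming an existing `Client` named `client` and an index name of your own (shown untested, since it needs the client jar and a running cluster):

```java
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.aggregations.AggregationBuilders;
import org.elasticsearch.search.aggregations.metrics.avg.Avg;

// Build the request: match everything, compute the average of "score".
SearchResponse response = client.prepareSearch("myindex")   // index name is an assumption
        .setQuery(QueryBuilders.matchAllQuery())
        .setSize(0)                                         // only the aggregation is wanted
        .addAggregation(AggregationBuilders.avg("avg_grade").field("score"))
        .execute().actionGet();

// Read the result back by the name given above.
Avg avgGrade = response.getAggregations().get("avg_grade");
double value = avgGrade.getValue();
```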
Time taken from issue closure on GitHub to ES release?

Does someone have visibility into the ES release process? I am desperately waiting for a release that will fix the issue below; it says the bug was fixed/closed 5 days ago.

https://github.com/elasticsearch/elasticsearch/issues/4887 (MultiSearch hangs forever + EsRejectedExecutionException)

Thanks
Re: OutOfMemoryError on marvel node brought down the production cluster

Thanks Boaz for the reply. I was using the latest Marvel (1.1), by the way. Looks like you need Marvel for Marvel! Actually, my Marvel cluster got so messed up that no matter what I did, it would show shard failures in the dashboard and nothing was functional. I actually had a 2-node cluster for Marvel monitoring, and after a restart they never got out of red state. So I just gave up on my experimentation with Marvel and abandoned it fully. I'll probably go back to BigDesk. Any other alternatives that are good? Thanks.

PS - My feedback to the Marvel team would be to provide Marvel as a service; that would be huge! I noticed that the size of my data dir on the Marvel node was 37G just from a few days of monitoring. That's heavy.

On Sat, Apr 19, 2014 at 1:05 AM, Boaz Leskes b.les...@gmail.com wrote:

    Hi,

    Regarding monitoring-node sizing: you have to go through pretty much the same procedure as with your main cluster. See how much data it generates per day and monitor the memory usage of the node while using Marvel on a single day's index. That would be the basis for your calculation. Based on that, and the number of days of data you want to retain, you can decide how many nodes you need and how much memory each should get. BTW - make sure you use the latest version of Marvel (1.1); it has a much smaller data signature.

    Regarding the error on your main production cluster: I'm a bit puzzled by the log output, as the events are pretty far apart. It starts with a timeout of the Marvel agent; 6 hours later it failed to connect (in between, it seems everything is fine). Almost 13 hours later the node had an OOM (after which you restarted it, right? It has a different name). Then 40 minutes later the log shows that another node (10.183.42.216) is under pressure and rejecting searches. I'm not sure the first part is related to the second part. Can you share your Marvel chart of JVM memory for the Darkoth node? It seems your main cluster is also under memory pressure.

    Cheers,
    Boaz

    On Thursday, April 17, 2014 10:08:04 PM UTC+2, T Vinod Gupta wrote:

        Hi, in my setup the Marvel node is separate from the production cluster; the production nodes send data to the Marvel node. The Marvel node had an OOM exception, which brings me to the question: how much heap does it need? I ran with the default config. In my prod cluster I have a load balancer which is a no-data node; it runs with just 2GB heap. Due to the Marvel failure, this node was getting timeouts and for some strange reason went down. What are the best practices here? How can I avoid this in the future?

        Marvel node:

            [2014-04-17 09:13:33,715][WARN ][index.engine.internal] [Gorilla-Man] [.marvel-2014.04.17][0] failed engine
            java.lang.OutOfMemoryError: Java heap space
            [2014-04-17 09:13:46,890][ERROR][index.engine.internal] [Gorilla-Man] [.marvel-2014.04.17][0] failed to acquire searcher, source search_factory
            org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
                at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:98)
            ...

        ES LB node:

            [2014-04-17 00:01:00,567][ERROR][marvel.agent.exporter] [Darkoth] create failure (index:[.marvel-2014.04.16] type: [node_stats]): UnavailableShardsException[[.marvel-2014.04.16][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@5d9be928]
            [2014-04-17 06:41:46,975][ERROR][marvel.agent.exporter] [Darkoth] error connecting to [ip-10-68-145-124.ec2.internal:9200]
            java.net.SocketTimeoutException: connect timed out
            [2014-04-17 18:53:09,969][DEBUG][action.admin.cluster.node.info] [Darkoth] failed to execute on node [L1f57myxQLK1SSRHRFcvFQ]
            java.lang.OutOfMemoryError: Java heap space
            [2014-04-17 19:35:05,805][DEBUG][action.search.type] [Witchfire] [twitter_072013][0], node[5GNeFfbPTGi-1EccVvR7Nw], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@2f94d571] lastShard [true]
            org.elasticsearch.transport.RemoteTransportException: [Mauvais][inet[/10.183.42.216:9300]][search/phase/query]
            Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler@4c75d754
                at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
OutOfMemoryError on marvel node brought down the production cluster

Hi, in my setup the Marvel node is separate from the production cluster; the production nodes send data to the Marvel node. The Marvel node had an OOM exception, which brings me to the question: how much heap does it need? I ran with the default config. In my prod cluster I have a load balancer which is a no-data node; it runs with just 2GB heap. Due to the Marvel failure, this node was getting timeouts and for some strange reason went down. What are the best practices here? How can I avoid this in the future?

Marvel node:

    [2014-04-17 09:13:33,715][WARN ][index.engine.internal] [Gorilla-Man] [.marvel-2014.04.17][0] failed engine
    java.lang.OutOfMemoryError: Java heap space
    [2014-04-17 09:13:46,890][ERROR][index.engine.internal] [Gorilla-Man] [.marvel-2014.04.17][0] failed to acquire searcher, source search_factory
    org.apache.lucene.store.AlreadyClosedException: this ReferenceManager is closed
        at org.apache.lucene.search.ReferenceManager.acquire(ReferenceManager.java:98)
    ...

ES LB node:

    [2014-04-17 00:01:00,567][ERROR][marvel.agent.exporter] [Darkoth] create failure (index:[.marvel-2014.04.16] type: [node_stats]): UnavailableShardsException[[.marvel-2014.04.16][0] [2] shardIt, [0] active : Timeout waiting for [1m], request: org.elasticsearch.action.bulk.BulkShardRequest@5d9be928]
    [2014-04-17 06:41:46,975][ERROR][marvel.agent.exporter] [Darkoth] error connecting to [ip-10-68-145-124.ec2.internal:9200]
    java.net.SocketTimeoutException: connect timed out
    [2014-04-17 18:53:09,969][DEBUG][action.admin.cluster.node.info] [Darkoth] failed to execute on node [L1f57myxQLK1SSRHRFcvFQ]
    java.lang.OutOfMemoryError: Java heap space
    [2014-04-17 19:35:05,805][DEBUG][action.search.type] [Witchfire] [twitter_072013][0], node[5GNeFfbPTGi-1EccVvR7Nw], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.search.SearchRequest@2f94d571] lastShard [true]
    org.elasticsearch.transport.RemoteTransportException: [Mauvais][inet[/10.183.42.216:9300]][search/phase/query]
    Caused by: org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler@4c75d754
        at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:62)
configuring multi node ES setup in aws to withstand AZ going down

Hi, how does ES ensure failover when an entire AZ goes down in AWS? Or is there something I need to do to make it work? Let's say I have a 4-node cluster with number of replicas set to 1, with 2 nodes in one AZ and the remaining 2 in another AZ. What if the 1st AZ goes down? How do we ensure full availability of all shards in such a case? Technically it is possible that a bunch of shards have both their primary and replica copies in the 1st AZ. How does ES recover then? Any best practices here?

Also, I didn't mention that I have a 5th node which is a no-data ELB in AZ1, let's say. What is the recommendation here - to also maintain a fallback ELB in AZ2? How can I configure my Java and Ruby clients with the 2 ELBs? Thanks.
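For the primary-and-replica-in-the-same-AZ concern, Elasticsearch's shard allocation awareness is the relevant knob: tag each node with its zone and tell the allocator about the attribute, and it will avoid placing a shard's primary and replica in the same zone. A sketch of elasticsearch.yml for the 2-AZ layout above (the attribute name `zone` and the AZ values are illustrative):

```yaml
# On each node, declare which AZ it lives in (value differs per node):
node.zone: us-east-1a

# Cluster-wide: spread primary and replica copies across different zone values.
cluster.routing.allocation.awareness.attributes: zone

# Optional "forced awareness": with the values listed, ES will not cram all
# copies into the surviving zone when one AZ is down, leaving headroom for
# the lost zone to come back.
cluster.routing.allocation.awareness.force.zone.values: us-east-1a,us-east-1b
```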
typical ranges in marvel metrics

Hi, I just installed Marvel on my production cluster to try it out and I have a bunch of questions. Can someone help?

1) For the most important graphs - JVM memory, load, query rate, etc. - what should be the ideal range or high-water mark? I mean, if I were to determine whether I need to add more nodes, what should my criteria be?
2) What does load (1m) mean? I couldn't find a definition.
3) Where is the search latency metric? That is most important to me. Also, metrics on slow queries.
4) I have a bunch of background processes that are doing queries and then updating documents, but I care more about user search latency. Is there a way to create a distinction by source IP or something (all user queries will come from web server nodes)?

Thanks
ES alerting mechanism for failure scenarios, high latency situations

Is there a plugin or API support for monitoring key ES metrics and alerting dev ops when some node in a cluster fails, or when there is a spike in latency for whatever reason? What are the best practices here, and what do people usually do? Thanks
Re: engine failure, message [OutOfMemoryError[unable to create new native thread]]

Thanks for your response, Jörg - somehow I missed replying earlier. For some strange reason, the max-threads setting was reset when I did a reboot, so I had to set it back to a high number.

On Tue, Feb 11, 2014 at 12:10 AM, joergpra...@gmail.com wrote:

    Your user ran out of thread/process space. This is reported as an OOM in Java. You can check the nproc entry in /etc/security/limits.conf for maximum settings and compare this with the process table. The OS settings regarding threads are usually OK and should not be modified. Check whether you have modified the ES default settings for the thread pools, and revert those changes to the defaults. If this does not help, you should upgrade from 0.90.6 to 0.90.11.

    Jörg

    On Tue, Feb 11, 2014 at 6:45 AM, T Vinod Gupta tvi...@readypulse.com wrote:

        Hi, I had a stable ES cluster on AWS EC2 instances until a week ago, and now I don't know what's going on - my cluster keeps getting into a bad state every few hours. The error says OOM, but I know that is not the reason; the instance has enough heap space left. I'm running ES 0.90.6 and giving half the RAM (8GB) to the ES process. I see these messages (the same kind of message) in the logs on all the machines in the cluster:

            [2014-02-11 03:17:39,936][WARN ][cluster.action.shard] [Star-Dancer] [facebook_022014][1] sending failed shard for [facebook_022014][1], node[zO9Pc1GNSuiVMA_Kn2b3UQ], [R], s[STARTED], indexUUID [qN3CUSfVS-m2KlgQQtOqxg], reason [engine failure, message [OutOfMemoryError[unable to create new native thread]]]

        Any ideas on how to debug this or figure out what's causing it would be really helpful.
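Jörg's nproc suggestion can be checked from a shell on the affected host; every Java thread counts against this per-user limit, so "unable to create new native thread" usually means the number is too low for the ES process. A sketch (the `elasticsearch` user name and the limit value are assumptions about your setup):

```shell
# Show the per-user process/thread limit for the current shell session.
ulimit -u

# To make a higher limit survive reboots, add entries like these to
# /etc/security/limits.conf and log in again:
#
#   elasticsearch  soft  nproc  4096
#   elasticsearch  hard  nproc  4096
```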
long GC pauses but only on 1 host in the cluster

I'm seeing this consistently happen on only 1 host in my cluster; the other hosts don't have this problem. What could be the reason, and what's the remedy? I'm running ES on an EC2 m1.xlarge host - 16GB RAM on the machine, of which I allocate 8GB to ES. E.g.:

    [2014-02-25 09:14:38,726][WARN ][monitor.jvm] [Lunatica] [gc][ParNew][1188745][942327] duration [48.3s], collections [1]/[1.1m], total [48.3s]/[1d], memory [7.9gb]->[6.9gb]/[7.9gb], all_pools {[Code Cache] [14.5mb]->[14.5mb]/[48mb]}{[Par Eden Space] [15.7mb]->[14.7mb]/[66.5mb]}{[Par Survivor Space] [8.3mb]->[0b]/[8.3mb]}{[CMS Old Gen] [7.8gb]->[6.9gb]/[7.9gb]}{[CMS Perm Gen] [46.8mb]->[46.8mb]/[168mb]}

Thanks
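The log line shows CMS Old Gen at 7.8gb of a 7.9gb ceiling before the collection, i.e. the heap on that host is essentially full, so the 48s pause looks like sustained memory pressure rather than a one-off. Before tuning, it can help to capture full GC logs on the affected host for comparison with a healthy one; a sketch of the standard HotSpot flags (the log path and the use of ES_JAVA_OPTS are assumptions about your setup):

```shell
# HotSpot GC-logging flags, appended to the JVM options the ES process
# starts with (e.g. via ES_JAVA_OPTS); the log path is an assumption.
export ES_JAVA_OPTS="$ES_JAVA_OPTS \
  -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCDateStamps \
  -Xloggc:/var/log/elasticsearch/gc.log"
```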
engine failure, message [OutOfMemoryError[unable to create new native thread]]

Hi, I had a stable ES cluster on AWS EC2 instances until a week ago, and now I don't know what's going on - my cluster keeps getting into a bad state every few hours. The error says OOM, but I know that is not the reason; the instance has enough heap space left. I'm running ES 0.90.6 and giving half the RAM (8GB) to the ES process. I see these messages (the same kind of message) in the logs on all the machines in the cluster:

    [2014-02-11 03:17:39,936][WARN ][cluster.action.shard] [Star-Dancer] [facebook_022014][1] sending failed shard for [facebook_022014][1], node[zO9Pc1GNSuiVMA_Kn2b3UQ], [R], s[STARTED], indexUUID [qN3CUSfVS-m2KlgQQtOqxg], reason [engine failure, message [OutOfMemoryError[unable to create new native thread]]]

Any ideas on how to debug this or figure out what's causing it would be really helpful.

Thanks