Re: OutOfMemory error when flushing a shard
Ok. If I upgrade, how should I keep all the old data?

2015-03-02 16:36 GMT+08:00 Mark Walkom:

> You should really upgrade!
>
> If you can, try deleting/closing some indices to reduce the load. Also,
> given it's < 1.x, you can try disabling bloom filters.
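Two notes on Mark's advice. On keeping the old data: the 0.90 -> 1.x path was a full-cluster-restart upgrade that reuses the existing data directory, so the old indices stay on disk (back them up first all the same). On bloom filters: below is a sketch of the request body for a per-index settings update (PUT http://<node>:9200/<index>/_settings). The setting name `index.codec.bloom.load` is an assumption; the bloom-filter knobs changed across early releases, so check it against the docs for your exact 0.90.x version before applying.

```python
import json

# Build the settings-update body that disables loading of bloom filters.
# "index.codec.bloom.load" is an assumed setting name for this version --
# verify against the 0.90.x reference before use.
body = json.dumps({"index.codec.bloom.load": False})
print(body)  # -> {"index.codec.bloom.load": false}
```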
Re: OutOfMemory error when flushing a shard
About the time-out error, I found the log below. I don't know what it means:

[2015-02-27 21:59:41,575][DEBUG][action.search.type] [Selene] [487030] Failed to execute fetch phase
org.elasticsearch.transport.RemoteTransportException: [Red Wolf][inet[/10.1.33.77:9300]][search/phase/fetch/id]
Caused by: org.elasticsearch.search.SearchContextMissingException: No search context found for id [487030]
    at org.elasticsearch.search.SearchService.findContext(SearchService.java:460)
    at org.elasticsearch.search.SearchService.executeFetchPhase(SearchService.java:433)
    at org.elasticsearch.search.action.SearchServiceTransportAction$SearchFetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:728)
    at org.elasticsearch.search.action.SearchServiceTransportAction$SearchFetchByIdTransportHandler.messageReceived(SearchServiceTransportAction.java:717)
    at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:270)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

The same SearchContextMissingException is then logged for ids [475699] (node [S'byll], 10.1.33.94), [487026] (node [Red Wolf], 10.1.33.77), and [460892].
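For context on the error above: a search context is created on each shard during the query phase and must still exist when the fetch phase arrives. If the node holding it has died (e.g. after the OOM earlier in this thread) or the context's keep-alive has expired, the fetch fails with SearchContextMissingException. A related sketch (the host and index name are assumptions): a scan/scroll search with an explicit 5-minute keep-alive, so the context survives between fetches.

```python
import json
from urllib.parse import urlencode

ES_HOST = "http://localhost:9200"  # assumed node address
INDEX = "16494"                    # a daily log index, as in this thread

# scroll=5m keeps the search context alive for 5 minutes per round trip;
# if the client waits longer, the next fetch hits SearchContextMissing.
params = urlencode({"search_type": "scan", "scroll": "5m"})
url = "%s/%s/_search?%s" % (ES_HOST, INDEX, params)
body = json.dumps({"query": {"match_all": {}}, "size": 100})
print(url)
print(body)
```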
OutOfMemory error when flushing a shard
The version is 0.90.13.

An out-of-memory error occurred when flushing a shard. How can I resolve this error? Any suggestions for preventing it?

The cluster situation:
5 data nodes, 1 master node, and 1 search node
Dozens of indices, each holding more than 100 GB of data

Another problem: when someone tries to query the data, the connection times out. What could cause the time-out? I think concurrency can probably be ruled out; maybe it's due to the huge data volume?

Please help. Below is the OOM error:

[2015-03-01 07:23:24,023][WARN ][index.translog] [Outlaw] [16494][4] failed to flush shard on translog threshold
org.elasticsearch.index.engine.FlushFailedEngineException: [16494][4] Flush failed
    at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:907)
    at org.elasticsearch.index.shard.service.InternalIndexShard.flush(InternalIndexShard.java:563)
    at org.elasticsearch.index.translog.TranslogService$TranslogBasedFlush$1.run(TranslogService.java:194)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalStateException: this writer hit an OutOfMemoryError; cannot commit
    at org.apache.lucene.index.IndexWriter.startCommit(IndexWriter.java:4354)
    at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:2891)
    at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:2984)
    at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:2954)
    at org.elasticsearch.index.engine.robin.RobinEngine.flush(RobinEngine.java:893)
    ... 5 more

[2015-03-01 07:23:27,078][WARN ][cluster.action.shard] [Outlaw] [16494][4] sending failed shard for [16494][4], node[z-YGubBGRe2afo5G8MBPkQ], [P], s[STARTED], indexUUID [9MjfwirySmWIbqT8clWDwQ], reason [engine failure, message [OutOfMemoryError[Java heap space]]]

[2015-03-01 07:23:24,030][DEBUG][action.bulk] [Outlaw] [16494][4] failed to execute bulk item (index) index {[16494][cs-us-east-1-logging-swc-rel][c22f6222-c146-4608-9dcc-c8846191c21a], source[{"version":"0.2","role":"es-data","from":"cs-us-east-1-logging-swc-rel","host":"ip-10-1-33-94-us-east-1-compute-internal","type":"log","time":1425092107803,"level":"system","text":" 27 disks \n2 partitions \n 47584725 total reads\n 80364 merged reads\n 3159251537 read sectors\n586297450 milli reading\n634621170 writes\n 17059531 merged writes\n 49307983928 written sectors\n 2768108439 milli writing\n0 inprogress IO\n 426354 milli spent IO\n","state":"info","service":"snapshot","process":"VMstat","uid":"c22f6222-c146-4608-9dcc-c8846191c21a"}]}
org.elasticsearch.index.engine.IndexFailedEngineException: [16494][4] Index failed for [cs-us-east-1-logging-swc-rel#c22f6222-c146-4608-9dcc-c8846191c21a]
    at org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:501)
    at org.elasticsearch.index.shard.service.InternalIndexShard.index(InternalIndexShard.java:386)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardIndexOperation(TransportShardBulkAction.java:398)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:156)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

[2015-03-01 07:23:30,699][DEBUG][action.bulk] [Outlaw] [16494][4], node[z-YGubBGRe2afo5G8MBPkQ], [P], s[STARTED]: Failed to execute [org.elasticsearch.action.bulk.BulkShardRequest@7fca9100]
java.lang.NullPointerException
    at org.elasticsearch.action.bulk.TransportShardBulkAction.applyVersion(TransportShardBulkAction.java:640)
    at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:178)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
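The "failed to flush shard on translog threshold" line means the heap was already exhausted by the time the periodic flush ran; the NullPointerException afterwards is fallout from the same OOM. A hedged mitigation sketch (not a substitute for more heap or fewer shards per node): lower the translog thresholds so each flush commits less buffered data, via PUT http://<node>:9200/<index>/_settings. The setting names follow the 0.90-era translog module; the values are illustrative only.

```python
import json

# Settings-update body that makes translog-driven flushes smaller and
# more frequent. Values are examples, not recommendations.
body = json.dumps({
    "index": {
        "translog": {
            "flush_threshold_ops": 2000,      # flush after this many operations
            "flush_threshold_size": "100mb",  # ...or this much translog data
        }
    }
})
print(body)
```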
Re: Data is not saved equally on each data node
Thanks, I got that.

2014-12-27 4:54 GMT+08:00 Mark Walkom:

> You really need to upgrade, 0.90.x is no longer supported!
Re: Data is not saved equally on each data node
I'm sure. I have to call the reroute API to balance; the 5th node just indexes new log data.

2014-12-26 14:39 GMT+08:00 David Pilato:

> So you have 16 shards (8 primary and 8 replicas). On a 5-node cluster,
> this should rebalance automatically.
>
> Are you sure your 5th node actually joined the cluster?
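The manual reroute mentioned above can be sketched as a "move" command posted to http://<node>:9200/_cluster/reroute. The index name, shard number, and node names below are placeholders; take the real values from your own cluster state.

```python
import json

# Request body for _cluster/reroute: explicitly move one shard off the
# full node onto the newly added node. All identifiers are hypothetical.
body = json.dumps({
    "commands": [{
        "move": {
            "index": "16791",      # one daily index (placeholder)
            "shard": 0,
            "from_node": "node4",  # the nearly full node
            "to_node": "node5",    # the newly added, empty node
        }
    }]
})
print(body)
```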
Re: Data is not saved equally on each data node
Thanks. I'm using 0.90.13; can auto-balancing be enabled manually?

2014-12-26 14:17 GMT+08:00 Michael deMan (ES):

> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html
>
> I can't remember when auto-balancing got enabled by default, I think
> maybe 1.3.4.
>
> You can find out via the API:
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html
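On the auto-balancing question: the current allocation settings can be read with GET http://<node>:9200/_cluster/settings, and rebalancing knobs can be changed at runtime with a transient settings update. The setting name below follows the 0.90-era cluster module docs, and the value 4 is only an example; verify both against your version before applying.

```python
import json

# Transient cluster-settings body allowing more simultaneous shard
# relocations during rebalancing. Name and value are assumptions to
# check against the docs for your release.
body = json.dumps({
    "transient": {
        "cluster.routing.allocation.cluster_concurrent_rebalance": 4
    }
})
print(body)
```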
Re: Data is not saved equally on each data node
Which version? And what is the API URL exactly?

2014-12-26 12:44 GMT+08:00 Michael deMan (ES):

> Also, higher shard counts will help with the new indexes but not the
> existing ones.
> You can use the API to force ES to move shards off your 'full' disk over
> to the new one.
> Auto-balancing for data size should be on by default if you are running a
> newer version of ES.
>
> On Dec 25, 2014, at 8:13 PM, Michael deMan (ES) wrote:
>
> > Try increasing the number of shards - maybe to 20 or 40.
Re: Data is not saved equally on each data node
The index count depends on how many days have passed; the index name is the day epoch, because we use Elasticsearch for log storage.
Shard number is 8, replicas 1.

2014-12-25 15:49 GMT+08:00 David Pilato:

> How many indices/shards/replicas do you have?
>
> --
> David ;-)
> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
Re: Data is not saved equally on each data node
I also started a new data node (node 5), and new data comes into the cluster continuously, but there is no data at all on node 5. I don't know why; please help.

On Thursday, December 25, 2014 at 10:33:29 AM UTC+8, xiaoliang tian wrote:
>
> Hi, I have 4 data nodes, 1 master node and 1 search node.
>
> At first, the data was saved equally across the 4 data nodes:
> node1 1.6TB
> node2 1.6TB
> node3 1.6TB
> node4 1.6TB
>
> Since each disk is 2TB and almost full, I deleted some indices to free
> more storage. After deleting, the data nodes look like this:
>
> node1 1TB
> node2 1TB
> node3 1TB
> node4 1TB
>
> After a few days, I found the data is no longer saved equally on each
> node:
>
> node1 1.1TB
> node2 1.1TB
> node3 1.1TB
> node4 1.6TB
>
> node4 is almost full. I don't know why. Is there any way to rebalance
> the data across the data nodes?
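One quick way to confirm that node 5 really holds nothing is to count shard copies per node. The helper below is a sketch that parses `_cat/shards`-style output (the `_cat` APIs exist from Elasticsearch 1.0 onward; on older clusters you would have to derive the same counts from `_cluster/state`). The node names and addresses in the usage comment are hypothetical:

```shell
# shards_by_node: count shard copies per node from `_cat/shards`-style
# input ("index shard prirep state docs store ip node"). A node that
# holds no shards simply never appears in the output.
shards_by_node() {
  awk 'NF >= 8 { count[$8]++ } END { for (n in count) print n, count[n] }' | sort
}

# Usage against a live cluster (hypothetical endpoint):
#   curl -s 'http://localhost:9200/_cat/shards' | shards_by_node
```

If the new node never shows up, check whether shard allocation or rebalancing has been disabled in the cluster settings, or whether a disk threshold is keeping shards off it.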
Data is not saved equally in each datanode
Hi, I have 4 data nodes, 1 master node and 1 search node.

At first, the data was saved equally across the 4 data nodes:
node1 1.6TB
node2 1.6TB
node3 1.6TB
node4 1.6TB

Since each disk is 2TB and almost full, I deleted some indices to free more storage. After deleting, the data nodes look like this:

node1 1TB
node2 1TB
node3 1TB
node4 1TB

After a few days, I found the data is no longer saved equally on each node:

node1 1.1TB
node2 1.1TB
node3 1.1TB
node4 1.6TB

node4 is almost full. I don't know why. Is there any way to rebalance the data across the data nodes?
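A quick way to spot an outlier from per-node sizes like those listed above is to flag any node that sits well above the average. This is a sketch for illustration only; the 20% threshold is an arbitrary choice, not an Elasticsearch setting:

```shell
# check_balance: read "node size_in_tb" pairs on stdin and print any
# node whose size exceeds the cluster average by more than 20%.
# The 20% margin is an arbitrary threshold chosen for this sketch.
check_balance() {
  awk '
    { node[NR] = $1; size[NR] = $2; total += $2 }
    END {
      avg = total / NR
      for (n = 1; n <= NR; n++)
        if (size[n] > avg * 1.2) print node[n], size[n]
    }
  '
}
```

Fed the numbers from the message above (1.1, 1.1, 1.1, 1.6), it flags node4 as the overloaded node. On 1.x-era clusters, also look at the disk-based allocation settings (`cluster.routing.allocation.disk.*`), since a nearly full disk can stop the balancer from moving shards onto or off a node.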
Re: how to resolve elasticsearch status red
This procedure is really great and helps a lot, thanks!

2013/12/17 Jon Attree
> Had a similar issue. I managed to recover all my unassigned shards and get
> the cluster back to a green status using the following procedure. It could
> probably be streamlined, but it works.
>
> 1. Dump the list of unassigned shards to a text file with the following
> command:
>
> curl -XGET 'http://localhost:9200/_cluster/state?pretty=true' > /tmp/unassign.txt
>
> 2. Open /tmp/unassign.txt in vi.
>
> 3. Search the file for UNASSIGNED.
>
> 4. Run the following command on each unassigned shard:
>
> curl -XPOST -s 'http://localhost:9200/_cluster/reroute?pretty=true' -d '{
>   "commands" : [ {
>     "allocate" : {
>       "index" : "*INDEX YOU WISH TO REASSIGN*",
>       "shard" : 0,
>       "node" : "*NODE YOU WISH TO ASSIGN IT TO*",
>       "allow_primary" : 1
>     }
>   } ]
> }'
>
> If the shard number is other than 0, change it to reflect that as well.
>
> Good luck,
>
> Jon
>
> On Monday, December 16, 2013 2:48:34 AM UTC-8, xiaoliang tian wrote:
>>
>> I have 1 master, 1 data and 1 search node. They are running on AWS.
>>
>> During indexing, the cluster status went from green to red, and some
>> shards became unassigned. I terminated the only data node by mistake,
>> started a new VM and had it join the cluster. It turns out the status is
>> still red and all shards are unassigned.
>>
>> 1. Is there any way to recover the status to green?
>> 2. Is there any chance I can recover the data node's data? I terminated
>> the only data node.
>> 3. If I run into this situation again ("during indexing, cluster status
>> went from green to red, and some shards became unassigned"), how can I
>> recover the status to green?
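Steps 1-4 above can be automated. The sketch below extracts UNASSIGNED shards from pretty-printed `_cluster/state` output with awk (it relies on the field order Elasticsearch emits — state before shard before index — so a JSON-aware tool such as jq would be more robust). The reroute loop in the trailing comment is the same command Jon shows; `TARGET_NODE` is a placeholder you must fill in yourself:

```shell
# list_unassigned: print "index shard" pairs for every UNASSIGNED shard
# found in pretty-printed _cluster/state JSON on stdin. A grep/awk
# sketch, not a robust JSON parser.
list_unassigned() {
  awk '
    $1 == "\"state\"" { v = $3; gsub(/[",]/, "", v); un = (v == "UNASSIGNED") }
    $1 == "\"shard\"" { s = $3; gsub(/,/, "", s) }
    $1 == "\"index\"" { i = $3; gsub(/[",]/, "", i); if (un) { print i, s; un = 0 } }
  '
}

# Usage against a live cluster (hypothetical endpoint; fill in TARGET_NODE):
#   curl -s 'http://localhost:9200/_cluster/state?pretty=true' | list_unassigned |
#   while read index shard; do
#     curl -XPOST -s 'http://localhost:9200/_cluster/reroute' -d '{
#       "commands" : [ { "allocate" : {
#         "index" : "'"$index"'", "shard" : '"$shard"',
#         "node" : "TARGET_NODE", "allow_primary" : 1 } } ] }'
#   done
```

Note that `allow_primary : 1` can create an empty primary if the shard's data is gone, so use it only when you accept that data loss.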