Re: Elasticsearch Issue with custom json input data using logstash

2015-02-18 Thread jlam
I'm using logstash.

On the client, I set up the logstash input with the json codec and output to a 
redis server.  There is a logstash instance that pops from the redis list into ES.
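
For reference, a rough sketch of the two configs (the paths, hosts and redis 
key below are placeholders, not our exact settings):

# shipper on each client: read the app log as json, push to redis
input {
  file {
    path  => "/var/log/myapp/app.json.log"
    codec => "json"
  }
}
output {
  redis {
    host      => "10.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}

# indexer on the ES server: pop from the redis list, push to Elasticsearch
input {
  redis {
    host      => "127.0.0.1"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    host => "127.0.0.1"
  }
}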

Thanks,
Jared

On Wednesday, February 18, 2015 at 5:14:15 PM UTC-8, Mark Walkom wrote:
>
> How are you feeding the json logs into ES?
>
> On 19 February 2015 at 10:56, > wrote:
>
>> The CPU usage also jumps considerably.
>>
>> Thanks,
>> Jared
>>
>>
>> On Wednesday, February 18, 2015 at 3:50:11 PM UTC-8, jl...@bills.com 
>> wrote:
>>>
>>> Hello Everyone,
>>>
>>> I'm hoping I might get some help with Elasticsearch.  I'm having 
>>> performance issues with Elasticsearch.
>>>
>>> With our current setup:
>>> We have Elasticsearch (1.4.3), redis, and logstash installed on the same 
>>> server with 30GB of memory.  The ES_HEAP_SIZE is set to 15GB.  Each server 
>>> has logstash installed and pushes its logs to redis.  The logstash on the 
>>> server will pick up logs from redis and push them to Elasticsearch.
>>>
>>> We are logging apache logs on all the web servers without any 
>>> performance issues.  Kibana works fine and performance is pretty fast.
>>>
>>> Here is the issue:
>>> We want to do custom application logging.  The logs are in json format.  
>>> When Elasticsearch gets 
>>> "org.elasticsearch.index.mapper.MapperParsingException: 
>>> failed to parse" exceptions, performance really degrades and becomes 
>>> unusable.  Redis will consume more and more memory.  Elasticsearch will 
>>> reach a point where it is constantly doing GC.  Restarting Elasticsearch 
>>> doesn't help.
>>>
>>> The dataset is not that big compared to others.  The daily size of the 
>>> dataset is probably 2GB to 3GB of logs.
>>>
>>> It seems that if Elasticsearch has problems executing bulk index items, 
>>> performance degrades considerably.
>>>
>>> I'm wondering if there are any recommendations on Elasticsearch and 
>>> logstash configuration.
>>>
>>> Do I need to alter the logstash mapping?
>>>
>>> Thanks,
>>> Jared
>>>
>>>



Re: Elasticsearch Issue with custom json input data using logstash

2015-02-18 Thread Mark Walkom
How are you feeding the json logs into ES?

On 19 February 2015 at 10:56,  wrote:

> The CPU usage also jumps considerably.
>
> Thanks,
> Jared
>
>
> On Wednesday, February 18, 2015 at 3:50:11 PM UTC-8, jl...@bills.com
> wrote:
>>
>> Hello Everyone,
>>
>> I'm hoping I might get some help with Elasticsearch.  I'm having
>> performance issues with Elasticsearch.
>>
>> With our current setup:
>> We have Elasticsearch (1.4.3), redis, and logstash installed on the same
>> server with 30GB of memory.  The ES_HEAP_SIZE is set to 15GB.  Each server
>> has logstash installed and pushes its logs to redis.  The logstash on the
>> server will pick up logs from redis and push them to Elasticsearch.
>>
>> We are logging apache logs on all the web servers without any performance
>> issues.  Kibana works fine and performance is pretty fast.
>>
>> Here is the issue:
>> We want to do custom application logging.  The logs are in json format.
>> When Elasticsearch gets 
>> "org.elasticsearch.index.mapper.MapperParsingException:
>> failed to parse" exceptions, performance really degrades and becomes
>> unusable.  Redis will consume more and more memory.  Elasticsearch will
>> reach a point where it is constantly doing GC.  Restarting Elasticsearch
>> doesn't help.
>>
>> The dataset is not that big compared to others.  The daily size of the
>> dataset is probably 2GB to 3GB of logs.
>>
>> It seems that if Elasticsearch has problems executing bulk index items,
>> performance degrades considerably.
>>
>> I'm wondering if there are any recommendations on Elasticsearch and
>> logstash configuration.
>>
>> Do I need to alter the logstash mapping?
>>
>> Thanks,
>> Jared
>>
>>



Re: Elasticsearch Issue with custom json input data using logstash

2015-02-18 Thread jlam
The CPU usage also jumps considerably.

Thanks,
Jared

On Wednesday, February 18, 2015 at 3:50:11 PM UTC-8, jl...@bills.com wrote:
>
> Hello Everyone,
>
> I'm hoping I might get some help with Elasticsearch.  I'm having 
> performance issues with Elasticsearch.
>
> With our current setup:
> We have Elasticsearch (1.4.3), redis, and logstash installed on the same 
> server with 30GB of memory.  The ES_HEAP_SIZE is set to 15GB.  Each server 
> has logstash installed and pushes its logs to redis.  The logstash on the 
> server will pick up logs from redis and push them to Elasticsearch.
>
> We are logging apache logs on all the web servers without any performance 
> issues.  Kibana works fine and performance is pretty fast.
>
> Here is the issue:
> We want to do custom application logging.  The logs are in json format. 
> When Elasticsearch gets 
> "org.elasticsearch.index.mapper.MapperParsingException: failed to parse" 
> exceptions, performance really degrades and becomes unusable.  Redis will 
> consume more and more memory.  Elasticsearch will reach a point where it 
> is constantly doing GC.  Restarting Elasticsearch doesn't help.
>
> The dataset is not that big compared to others.  The daily size of the 
> dataset is probably 2GB to 3GB of logs.
>
> It seems that if Elasticsearch has problems executing bulk index items, 
> performance degrades considerably.
>
> I'm wondering if there are any recommendations on Elasticsearch and 
> logstash configuration.
>
> Do I need to alter the logstash mapping?
>
> Thanks,
> Jared
>
>
>
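
For context, a common way this "failed to parse" exception arises is a field 
changing type between events, e.g. a number in one log line and an object in 
the next.  A made-up example (the index, type and field names are placeholders, 
not our real ones):

curl -XPOST "localhost:9200/logstash-2015.02.18/applog" -d '{"status": 200}'
# a later event where the same field is an object is rejected with a
# MapperParsingException: failed to parse [status]
curl -XPOST "localhost:9200/logstash-2015.02.18/applog" -d '{"status": {"code": 200}}'
# check what the index has mapped each field as
curl "localhost:9200/logstash-2015.02.18/_mapping?pretty"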



Re: Elasticsearch issue.

2014-02-11 Thread Binh Ly
Forgot to mention, Marvel only works with ES 0.90.9 and later. Just FYI.
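
If it helps, the version a node is actually running is in the root endpoint 
response (look at the version.number field):

curl "localhost:9200?pretty"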



Re: Elasticsearch issue.

2014-02-11 Thread Binh Ly
Chris, you're right, I'm doing it on a newer version. For your case, try:

curl "localhost:9200/_cluster/state?pretty"

You'll get a lot more info but just look under the routing_table and 
routing_nodes sections for the details I mentioned before.
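
If the full state is too noisy, something like this pulls out just the routing 
table for one index (assumes jq is installed; replace indexdata with your index 
name):

curl -s "localhost:9200/_cluster/state" | jq '.routing_table.indices.indexdata'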



Re: Elasticsearch issue.

2014-02-11 Thread Chris
Hi Binh,

That command did not seem to work. I am running version 0.90.6; is that 
supported in this version?

$ curl http://server:9200/_cluster/state/routing_table?pretty
{
  "error" : "IndexMissingException[[_cluster] missing]",
  "status" : 404
}

Thanks,



Re: Elasticsearch issue.

2014-02-11 Thread Chris
I am using bigdesk and marvel, which I just installed today. I am running 
version 0.90.6 of elasticsearch and I am not getting data back from marvel. I 
want to upgrade to the most recent version; however, I want to resolve this 
issue first. 

Do you know how to assign primary shards?

Thanks,

Chris.


On Tuesday, February 11, 2014 2:15:43 PM UTC-8, Mark Walkom wrote:
>
> Not if your cluster is in a red state, that means you have unassigned 
> primary shards.
>
> What are you using to monitor things? If you're only using the API then 
> look at plugins like elastichq, kopf, bigdesk or marvel. They will give you 
> better insight into what is happening.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 12 February 2014 09:11, Chris > wrote:
>
>> Thanks for the feedback Mark / Binh,
>>
>> I am not sure if it is a single node that is causing the problem. 
>> Querying _cluster/health/indexdata?level=shards gives me this response 
>> below. Is deleting the data from the bad node, consistent when the shards 
>> are in the state as below? 
>>
>> {
>>"cluster_name": "Cluster",
>>"status": "red",
>>"timed_out": false,
>>"number_of_nodes": 6,
>>"number_of_data_nodes": 6,
>>"active_primary_shards": 2,
>>"active_shards": 3,
>>"relocating_shards": 0,
>>"initializing_shards": 4,
>>"unassigned_shards": 3,
>>"indices": {
>>   "indexdata": {
>>  "status": "red",
>>  "number_of_shards": 5,
>>  "number_of_replicas": 1,
>>  "active_primary_shards": 2,
>>  "active_shards": 3,
>>  "relocating_shards": 0,
>>  "initializing_shards": 4,
>>  "unassigned_shards": 3,
>>  "shards": {
>> "0": {
>>"status": "yellow",
>>"primary_active": true,
>>"active_shards": 1,
>>"relocating_shards": 0,
>>"initializing_shards": 1,
>>"unassigned_shards": 0
>> },
>> "1": {
>>"status": "red",
>>"primary_active": false,
>>"active_shards": 0,
>>"relocating_shards": 0,
>>"initializing_shards": 1,
>>"unassigned_shards": 1
>> },
>> "2": {
>>"status": "green",
>>"primary_active": true,
>>"active_shards": 2,
>>"relocating_shards": 0,
>>"initializing_shards": 0,
>>"unassigned_shards": 0
>> },
>> "3": {
>>"status": "red",
>>"primary_active": false,
>>"active_shards": 0,
>>"relocating_shards": 0,
>>"initializing_shards": 1,
>>"unassigned_shards": 1
>> },
>> "4": {
>>"status": "red",
>>"primary_active": false,
>>"active_shards": 0,
>>"relocating_shards": 0,
>>"initializing_shards": 1,
>>"unassigned_shards": 1
>> }
>>  }
>>   }
>>}
>> }
>>



Re: Elasticsearch issue.

2014-02-11 Thread Binh Ly
Chris,

You'll probably need to find out which node contains whichever shards you 
think are bad. If you do something like this, you can get a detailed 
breakdown of which indexes have which shards on which nodes, and their 
corresponding shard states:

curl "localhost:9200/_cluster/state/routing_table?pretty"



Re: Elasticsearch issue.

2014-02-11 Thread Mark Walkom
Not if your cluster is in a red state; that means you have unassigned
primary shards.
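
If an unassigned primary never recovers on its own, one option is to
force-allocate it with the cluster reroute API. Note this creates an empty
primary, so whatever was in that shard is lost, and you should double-check
the reroute docs for your version first (the index, shard number and node
name below are placeholders):

curl -XPOST "localhost:9200/_cluster/reroute" -d '{
  "commands": [
    { "allocate": { "index": "indexdata", "shard": 1, "node": "node-1", "allow_primary": true } }
  ]
}'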

What are you using to monitor things? If you're only using the API then
look at plugins like elastichq, kopf, bigdesk or marvel. They will give you
better insight into what is happening.

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 12 February 2014 09:11, Chris  wrote:

> Thanks for the feedback Mark / Binh,
>
> I am not sure if it is a single node that is causing the problem. Querying
> _cluster/health/indexdata?level=shards gives me this response below. Is
> deleting the data from the bad node, consistent when the shards are in the
> state as below?
>
> {
>"cluster_name": "Cluster",
>"status": "red",
>"timed_out": false,
>"number_of_nodes": 6,
>"number_of_data_nodes": 6,
>"active_primary_shards": 2,
>"active_shards": 3,
>"relocating_shards": 0,
>"initializing_shards": 4,
>"unassigned_shards": 3,
>"indices": {
>   "indexdata": {
>  "status": "red",
>  "number_of_shards": 5,
>  "number_of_replicas": 1,
>  "active_primary_shards": 2,
>  "active_shards": 3,
>  "relocating_shards": 0,
>  "initializing_shards": 4,
>  "unassigned_shards": 3,
>  "shards": {
> "0": {
>"status": "yellow",
>"primary_active": true,
>"active_shards": 1,
>"relocating_shards": 0,
>"initializing_shards": 1,
>"unassigned_shards": 0
> },
> "1": {
>"status": "red",
>"primary_active": false,
>"active_shards": 0,
>"relocating_shards": 0,
>"initializing_shards": 1,
>"unassigned_shards": 1
> },
> "2": {
>"status": "green",
>"primary_active": true,
>"active_shards": 2,
>"relocating_shards": 0,
>"initializing_shards": 0,
>"unassigned_shards": 0
> },
> "3": {
>"status": "red",
>"primary_active": false,
>"active_shards": 0,
>"relocating_shards": 0,
>"initializing_shards": 1,
>"unassigned_shards": 1
> },
> "4": {
>"status": "red",
>"primary_active": false,
>"active_shards": 0,
>"relocating_shards": 0,
>"initializing_shards": 1,
>"unassigned_shards": 1
> }
>  }
>   }
>}
> }
>



Re: Elasticsearch issue.

2014-02-11 Thread Chris
Thanks for the feedback Mark / Binh,

I am not sure if it is a single node that is causing the problem. Querying 
_cluster/health/indexdata?level=shards gives me the response below. Is 
deleting the data from the bad node still the right approach when the shards 
are in the state shown below? 

{
   "cluster_name": "Cluster",
   "status": "red",
   "timed_out": false,
   "number_of_nodes": 6,
   "number_of_data_nodes": 6,
   "active_primary_shards": 2,
   "active_shards": 3,
   "relocating_shards": 0,
   "initializing_shards": 4,
   "unassigned_shards": 3,
   "indices": {
      "indexdata": {
         "status": "red",
         "number_of_shards": 5,
         "number_of_replicas": 1,
         "active_primary_shards": 2,
         "active_shards": 3,
         "relocating_shards": 0,
         "initializing_shards": 4,
         "unassigned_shards": 3,
         "shards": {
            "0": {
               "status": "yellow",
               "primary_active": true,
               "active_shards": 1,
               "relocating_shards": 0,
               "initializing_shards": 1,
               "unassigned_shards": 0
            },
            "1": {
               "status": "red",
               "primary_active": false,
               "active_shards": 0,
               "relocating_shards": 0,
               "initializing_shards": 1,
               "unassigned_shards": 1
            },
            "2": {
               "status": "green",
               "primary_active": true,
               "active_shards": 2,
               "relocating_shards": 0,
               "initializing_shards": 0,
               "unassigned_shards": 0
            },
            "3": {
               "status": "red",
               "primary_active": false,
               "active_shards": 0,
               "relocating_shards": 0,
               "initializing_shards": 1,
               "unassigned_shards": 1
            },
            "4": {
               "status": "red",
               "primary_active": false,
               "active_shards": 0,
               "relocating_shards": 0,
               "initializing_shards": 1,
               "unassigned_shards": 1
            }
         }
      }
   }
}



Re: Elasticsearch issue.

2014-02-11 Thread Mark Walkom
The bad node is the one that ran out of space.
If you have installed ES on linux using a package (deb/rpm) then the data
is usually under /var/lib/elasticsearch. Just manually delete it and then
rejoin the node.
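
Roughly (the path and service name are the defaults from the deb/rpm packages;
check path.data in elasticsearch.yml before deleting anything):

# on the bad node only
sudo service elasticsearch stop
sudo rm -rf /var/lib/elasticsearch/*
sudo service elasticsearch start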

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com


On 12 February 2014 08:09, Chris  wrote:

>
>
> On Tuesday, February 11, 2014 12:44:02 PM UTC-8, Binh Ly wrote:
>
>> If all your other nodes contain enough replicas of all your indexes (i.e.
>> you have lost no data), then you can safely take down the bad node, wipe
>> out whatever data is in the data directory (assuming it is local to the
>> node) and then join it back to the cluster. If the bad node actually
>> contained some primary shards with no replicas, then you're probably out of
>> luck and just need to delete the specific index that contains those
>> replicas (i.e. the index(es) that has unassigned shards that were on that
>> bad node) and rebuild your index.
>>
>
> Binh,
>
> Thank so much for your input. How does one determine which node is bad,
> and what is the process to delete the specific index / rebuild?
>



Re: Elasticsearch issue.

2014-02-11 Thread Chris


On Tuesday, February 11, 2014 12:44:02 PM UTC-8, Binh Ly wrote:
>
> If all your other nodes contain enough replicas of all your indexes (i.e. 
> you have lost no data), then you can safely take down the bad node, wipe 
> out whatever data is in the data directory (assuming it is local to the 
> node) and then join it back to the cluster. If the bad node actually 
> contained some primary shards with no replicas, then you're probably out of 
> luck and just need to delete the specific index that contains those 
> replicas (i.e. the index(es) that has unassigned shards that were on that 
> bad node) and rebuild your index.
>

Binh, 

Thanks so much for your input. How does one determine which node is bad, and 
what is the process to delete the specific index / rebuild? 



Re: Elasticsearch issue.

2014-02-11 Thread Chris


On Tuesday, February 11, 2014 12:44:02 PM UTC-8, Binh Ly wrote:
>
> If all your other nodes contain enough replicas of all your indexes (i.e. 
> you have lost no data), then you can safely take down the bad node, wipe 
> out whatever data is in the data directory (assuming it is local to the 
> node) and then join it back to the cluster. If the bad node actually 
> contained some primary shards with no replicas, then you're probably out of 
> luck and just need to delete the specific index that contains those 
> replicas (i.e. the index(es) that has unassigned shards that were on that 
> bad node) and rebuild your index.
>

Binh, 

Thanks so much for your input. How would I determine which node is bad, and 
what is the process to delete the specific index / rebuild?



Re: Elasticsearch issue.

2014-02-11 Thread Binh Ly
If all your other nodes contain enough replicas of all your indexes (i.e. 
you have lost no data), then you can safely take down the bad node, wipe 
out whatever data is in the data directory (assuming it is local to the 
node) and then join it back to the cluster. If the bad node actually 
contained some primary shards with no replicas, then you're probably out of 
luck and just need to delete the specific index(es) that contained those 
shards (i.e. the index(es) that have unassigned shards that were on that 
bad node) and rebuild your index.
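
A delete-and-rebuild looks roughly like this (the index name is a placeholder; 
you would then reindex that data from your original source):

curl -XDELETE "localhost:9200/indexdata"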
