Elasticsearch restart deleted all indices

2014-05-30 Thread Abhishek Tiwari
1. When I restarted a node yesterday, this happened (logs follow).
2. I lost 96 days' worth of my logs. A df before and after the restart showed 
a drop of ~260G of indices (see the commands sketched after the log below).
3. After this, the node started syncing logs from the very beginning from 
the rest of the cluster.
4. Specs: Elasticsearch 1.1.1 on Amazon AWS. I use the cloud-aws plugin for 
node discovery.

What could have caused this?

[2014-05-29 13:35:37,824][INFO ][node ] [Pyro] 
version[1.1.1], pid[4005], build[f1585f0/2014-04-16T14:27:12Z]
[2014-05-29 13:35:37,824][INFO ][node ] [Pyro] 
initializing ...
[2014-05-29 13:35:37,851][INFO ][plugins  ] [Pyro] loaded 
[river-jdbc, cloud-aws], sites [whatson, head, bigdesk, paramedic]
[2014-05-29 13:35:41,166][INFO ][node ] [Pyro] 
initialized
[2014-05-29 13:35:41,167][INFO ][node ] [Pyro] starting 
...
[2014-05-29 13:35:41,306][INFO ][transport] [Pyro] 
bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address 
{inet[/10.0.0.135:9300]}
[2014-05-29 13:35:45,802][INFO ][cluster.service  ] [Pyro] 
new_master 
[Pyro][sWs8aTYHSx-j-ywO45vQ1Q][ip-10-0-0-135][inet[/10.0.0.135:9300]], 
reason: zen-disco-join (elected_as_master)
[2014-05-29 13:35:45,826][INFO ][discovery] [Pyro] 
jarvis/sWs8aTYHSx-j-ywO45vQ1Q
[2014-05-29 13:35:46,019][INFO ][http ] [Pyro] 
bound_address {inet[/0:0:0:0:0:0:0:0:9200]}, publish_address 
{inet[/10.0.0.135:9200]}
[2014-05-29 13:35:47,328][INFO ][gateway  ] [Pyro] *recovered 
[96] indices into cluster_state*
[2014-05-29 13:35:47,335][INFO ][node ] [Pyro] started
[2014-05-29 13:36:03,102][INFO ][node ] [Pyro] stopping 
...
[2014-05-29 13:36:03,401][WARN ][cluster.service  ] [Pyro] failed 
to apply updated cluster state:
version [109], source [shard-started ([logstash-2014.02.27][4], 
node[sWs8aTYHSx-j-ywO45vQ1Q], [P], s[INITIALIZING]), reason [after recovery 
from gateway]]
nodes: 
   [Pyro][sWs8aTYHSx-j-ywO45vQ1Q][ip-10-0-0-135][inet[/10.0.0.135:9300]], 
local, master
routing_table:
-- index [logstash-2014.03.22]
shard_id [logstash-2014.03.22][4]
[logstash-2014.03.22][4], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.22][4], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.22][0]
[logstash-2014.03.22][0], node[null], [P], s[UNASSIGNED]
[logstash-2014.03.22][0], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.22][3]
[logstash-2014.03.22][3], node[null], [P], s[UNASSIGNED]
[logstash-2014.03.22][3], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.22][1]
[logstash-2014.03.22][1], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.22][1], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.22][2]
[logstash-2014.03.22][2], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.22][2], node[null], [R], s[UNASSIGNED]

-- index [logstash-2014.03.21]
shard_id [logstash-2014.03.21][2]
[logstash-2014.03.21][2], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.21][2], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.21][0]
[logstash-2014.03.21][0], node[null], [P], s[UNASSIGNED]
[logstash-2014.03.21][0], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.21][3]
[logstash-2014.03.21][3], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.21][3], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.21][1]
[logstash-2014.03.21][1], node[null], [P], s[UNASSIGNED]
[logstash-2014.03.21][1], node[null], [R], s[UNASSIGNED]
shard_id [logstash-2014.03.21][4]
[logstash-2014.03.21][4], node[sWs8aTYHSx-j-ywO45vQ1Q], [P], 
s[STARTED]
[logstash-2014.03.21][4], node[null], [R], s[UNASSIGNED]
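
For anyone digging into point 2 above, a minimal sketch of the before/after 
checks (illustrative commands; they assume the default package paths and that 
the node answers on localhost:9200):

curl 'localhost:9200/_cat/indices?v'           # per-index doc counts and store sizes
curl 'localhost:9200/_nodes/settings?pretty'   # confirms which path.data is actually in use
df -h /var/lib/elasticsearch                   # on-disk usage of the data directory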



Re: Lot of GC in elasticsearch node.

2014-05-12 Thread Abhishek Tiwari

>
> How much RAM you need depends on how long you want to keep your data 
> around for. So, given you have ~200GB now on 4GB of RAM, you can probably 
> extrapolate that out based on your needs.


Isn't my problem more with the 9G *daily* index than with the total of 
200G (20 days x 9G) of indices?
Correct me if I am wrong, but doesn't Kibana ask Elasticsearch for just 
one day/week of indices (based on the query)?
Will Elasticsearch really care if I have 500 days of day-wise segregated 
indices out there, when I am querying *just the past 7 days*?

Is this a total-footprint problem or a daily-throughput problem?
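
(By queries on "just the past 7 days" I mean requests that name only the daily 
indices in that window, something along these lines; the index names and the 
query body are illustrative, not literally what Kibana sends:)

curl -XGET 'localhost:9200/logstash-2014.05.11,logstash-2014.05.12/_search?pretty' -d '{
  "query": { "match_all": {} },
  "size": 1
}'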



On Monday, 12 May 2014 15:30:36 UTC+5:30, Mark Walkom wrote:
>
> It's standard practice to use 50% of system memory for the heap.
>
> How much RAM you need depends on how long you want to keep your data 
> around for. So, given you have ~200GB now on 4GB of RAM, you can probably 
> extrapolate that out based on your needs.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>
>
> On 12 May 2014 19:33, Abhishek Tiwari wrote:
>
>> add more memory
>>
>>
>> I am doing ~15 million docs per day, which total ~9G. The average doc size 
>> is ~2KB.
>>
>> 1. How much memory would you suggest for my use case?
>> 2. Also, is it prudent for me to have half of the OS memory dedicated to 
>> elasticsearch?
>>
>>
>> On Monday, 12 May 2014 14:03:19 UTC+5:30, Mark Walkom wrote:
>>>
>>> You need to reduce your data size, add more memory or add another node.
>>>
>>> Basically, you've reached the limits of that node.
>>>
>>> Regards,
>>> Mark Walkom
>>>
>>> Infrastructure Engineer
>>> Campaign Monitor
>>> email: ma...@campaignmonitor.com
>>> web: www.campaignmonitor.com
>>>  
>>>
>>> On 12 May 2014 16:38, Abhishek Tiwari  wrote:
>>>
>>>> My elasticsearch node is an AWS EC2 c3.xlarge (7.5G mem). 
>>>> Elasticsearch starts as:
>>>>
>>>> 498  31810 99.6 64.6 163846656 4976944 ?   Sl   06:03  26:10 
>>>> /usr/bin/java *-Xms4g -Xmx4g -Xss256k* -Djava.awt.headless=true 
>>>> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:
>>>> CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
>>>> -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch -Des.pidfile=/var/run/
>>>> elasticsearch/elasticsearch.pid -Des.path.home=/usr/share/elasticsearch 
>>>> -cp :/usr/share/elasticsearch/lib/elasticsearch-1.1.1.jar:/usr/
>>>> share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/* 
>>>> -Des.default.path.home=/usr/share/elasticsearch 
>>>> -Des.default.path.logs=/var/log/elasticsearch 
>>>> -Des.default.path.data=/var/lib/elasticsearch 
>>>> -Des.default.path.work=/tmp/elasticsearch 
>>>> -Des.default.path.conf=/etc/elasticsearch 
>>>> org.elasticsearch.bootstrap.Elasticsearch
>>>>
>>>>
>>>> The node stopped responding (the ip:9200 status page), and so did 
>>>> Kibana. It started working fine after a restart.
>>>> I have logstash-format docs, wherein the index rotates daily. 
>>>> Stats:
>>>>   Daily: ~11G of docs, ~15 million.
>>>>   Total: 195G of docs, ~300 million.
>>>>
>>>> The logs from the time it stopped responding are:
>>>>
>>>> [2014-05-12 03:39:08,789][INFO ][cluster.metadata ] [Hannibal 
>>>> King] [logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
>>>> [2014-05-12 03:40:52,293][INFO ][monitor.jvm  ] [Hannibal 
>>>> King] [gc][old][240428][35773] duration [6.3s], collections [1]/[6.5s], 
>>>> total [6.3s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>>>> [150.3mb]->[1.7mb]/[266.2mb]}{[survivor] 
>>>> [33.2mb]->[0b]/[33.2mb]}{[old] [3.6gb]->[3.6gb]/[3.6gb]}
>>>> [2014-05-12 03:44:11,739][INFO ][cluster.metadata ] [Hannibal 
>>>> King] [logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
>>>> [2014-05-12 03:45:32,191][INFO ][monitor.jvm  ] [Hannibal 
>>>> King] [gc][old][240703][35812] duration [5.2s], collections [1]/[5.8s], 
>>>> total [5.2s]/[4.7h], memory [3.7gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>>>> [197.4mb]->[9.3mb]/[266.2mb]}{[survivor] 
>>>> [33.2mb]->[0b]/[33.2mb]}{[old] [3.5gb]->[3.6gb]/[3.6gb]}
>>>> [2014-05-12 0

Re: Lot of GC in elasticsearch node.

2014-05-12 Thread Abhishek Tiwari

>
> add more memory


I am doing ~15 million docs per day, which total ~9G. The average doc size is 
~2KB.

1. How much memory would you suggest for my use case?
2. Also, is it prudent for me to have half of the OS memory dedicated to 
elasticsearch?
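
(For context on question 2: the -Xms4g -Xmx4g in the process listing is most 
likely set via ES_HEAP_SIZE; a sketch, assuming the stock RPM/DEB packages are 
in use on this box, which is my assumption:)

# /etc/sysconfig/elasticsearch (RPM) or /etc/default/elasticsearch (DEB)
ES_HEAP_SIZE=4g    # "half of OS memory" on a 7.5G c3.xlarge would be roughly 3750m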


On Monday, 12 May 2014 14:03:19 UTC+5:30, Mark Walkom wrote:
>
> You need to reduce your data size, add more memory or add another node.
>
> Basically, you've reached the limits of that node.
>
> Regards,
> Mark Walkom
>
> Infrastructure Engineer
> Campaign Monitor
> email: ma...@campaignmonitor.com 
> web: www.campaignmonitor.com
>  
>
> On 12 May 2014 16:38, Abhishek Tiwari wrote:
>
>> My elasticsearch node is an AWS EC2 c3.xlarge (7.5G mem). 
>> Elasticsearch starts as:
>>
>> 498  31810 99.6 64.6 163846656 4976944 ?   Sl   06:03  26:10 
>> /usr/bin/java *-Xms4g -Xmx4g -Xss256k* -Djava.awt.headless=true 
>> -XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
>> -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
>> -XX:+HeapDumpOnOutOfMemoryError -Delasticsearch 
>> -Des.pidfile=/var/run/elasticsearch/elasticsearch.pid 
>> -Des.path.home=/usr/share/elasticsearch -cp 
>> :/usr/share/elasticsearch/lib/elasticsearch-1.1.1.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/*
>>  
>> -Des.default.path.home=/usr/share/elasticsearch 
>> -Des.default.path.logs=/var/log/elasticsearch 
>> -Des.default.path.data=/var/lib/elasticsearch 
>> -Des.default.path.work=/tmp/elasticsearch 
>> -Des.default.path.conf=/etc/elasticsearch 
>> org.elasticsearch.bootstrap.Elasticsearch
>>
>>
>> The node stopped responding (the ip:9200 status page), and so did Kibana. 
>> It started working fine after a restart.
>> I have logstash-format docs, wherein the index rotates daily. 
>> Stats:
>>   Daily: ~11G of docs, ~15 million.
>>   Total: 195G of docs, ~300 million.
>>
>> The logs from the time it stopped responding are:
>>
>> [2014-05-12 03:39:08,789][INFO ][cluster.metadata ] [Hannibal 
>> King] [logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
>> [2014-05-12 03:40:52,293][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][240428][35773] duration [6.3s], collections [1]/[6.5s], 
>> total [6.3s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>> [150.3mb]->[1.7mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.6gb]->[3.6gb]/[3.6gb]}
>> [2014-05-12 03:44:11,739][INFO ][cluster.metadata ] [Hannibal 
>> King] [logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
>> [2014-05-12 03:45:32,191][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][240703][35812] duration [5.2s], collections [1]/[5.8s], 
>> total [5.2s]/[4.7h], memory [3.7gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>> [197.4mb]->[9.3mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.5gb]->[3.6gb]/[3.6gb]}
>> [2014-05-12 04:06:01,224][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][241926][35985] duration [6s], collections [1]/[6.2s], total 
>> [6s]/[4.7h], memory [3.7gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>> [134.7mb]->[9.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.6gb]->[3.5gb]/[3.6gb]}
>> [2014-05-12 04:08:14,473][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][242049][36004] duration [5.8s], collections [1]/[5.9s], 
>> total [5.8s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>> [165.1mb]->[2.7mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.6gb]->[3.6gb]/[3.6gb]}
>> [2014-05-12 04:09:07,473][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][242096][36011] duration [6.2s], collections [1]/[6.7s], 
>> total [6.2s]/[4.7h], memory [3.9gb]->[3.6gb]/[3.9gb], all_pools {[young] 
>> [265.9mb]->[2.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.6gb]->[3.5gb]/[3.6gb]}
>> [2014-05-12 04:10:08,387][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][242152][36020] duration [5.4s], collections [1]/[5.6s], 
>> total [5.4s]/[4.7h], memory [3.8gb]->[3.5gb]/[3.9gb], all_pools {[young] 
>> [176.5mb]->[5.8mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
>> [3.6gb]->[3.5gb]/[3.6gb]}
>> [2014-05-12 04:13:12,774][INFO ][monitor.jvm  ] [Hannibal 
>> King] [gc][old][242326][36046] duration [5.6s], collections [1]/[5.8s], 
>> total [5.6s]/[4.7h], memory [3.8gb]->[3.5gb]/[3.9gb], all_pools {[young] 
>> [167.4mb]->[12.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0

Lot of GC in elasticsearch node.

2014-05-11 Thread Abhishek Tiwari
My elasticsearch node is an AWS EC2 c3.xlarge (7.5G mem). 
Elasticsearch starts as:

498  31810 99.6 64.6 163846656 4976944 ?   Sl   06:03  26:10 
/usr/bin/java *-Xms4g -Xmx4g -Xss256k* -Djava.awt.headless=true 
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC 
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly 
-XX:+HeapDumpOnOutOfMemoryError -Delasticsearch 
-Des.pidfile=/var/run/elasticsearch/elasticsearch.pid 
-Des.path.home=/usr/share/elasticsearch -cp 
:/usr/share/elasticsearch/lib/elasticsearch-1.1.1.jar:/usr/share/elasticsearch/lib/*:/usr/share/elasticsearch/lib/sigar/*
 
-Des.default.path.home=/usr/share/elasticsearch 
-Des.default.path.logs=/var/log/elasticsearch 
-Des.default.path.data=/var/lib/elasticsearch 
-Des.default.path.work=/tmp/elasticsearch 
-Des.default.path.conf=/etc/elasticsearch 
org.elasticsearch.bootstrap.Elasticsearch


The node stopped responding (the ip:9200 status page), and so did Kibana. 
It started working fine after a restart.
I have logstash-format docs, wherein the index rotates daily. 
Stats:
  Daily: ~11G of docs, ~15 million.
  Total: 195G of docs, ~300 million.

The logs from the time it stopped responding are:

[2014-05-12 03:39:08,789][INFO ][cluster.metadata ] [Hannibal King] 
[logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
[2014-05-12 03:40:52,293][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][240428][35773] duration [6.3s], collections [1]/[6.5s], total 
[6.3s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[150.3mb]->[1.7mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 03:44:11,739][INFO ][cluster.metadata ] [Hannibal King] 
[logstash-2014.05.12] update_mapping [medusa_ex] (dynamic)
[2014-05-12 03:45:32,191][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][240703][35812] duration [5.2s], collections [1]/[5.8s], total 
[5.2s]/[4.7h], memory [3.7gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[197.4mb]->[9.3mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.5gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:06:01,224][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][241926][35985] duration [6s], collections [1]/[6.2s], total 
[6s]/[4.7h], memory [3.7gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[134.7mb]->[9.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.5gb]/[3.6gb]}
[2014-05-12 04:08:14,473][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242049][36004] duration [5.8s], collections [1]/[5.9s], total 
[5.8s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[165.1mb]->[2.7mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:09:07,473][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242096][36011] duration [6.2s], collections [1]/[6.7s], total 
[6.2s]/[4.7h], memory [3.9gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[265.9mb]->[2.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.5gb]/[3.6gb]}
[2014-05-12 04:10:08,387][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242152][36020] duration [5.4s], collections [1]/[5.6s], total 
[5.4s]/[4.7h], memory [3.8gb]->[3.5gb]/[3.9gb], all_pools {[young] 
[176.5mb]->[5.8mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.5gb]/[3.6gb]}
[2014-05-12 04:13:12,774][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242326][36046] duration [5.6s], collections [1]/[5.8s], total 
[5.6s]/[4.7h], memory [3.8gb]->[3.5gb]/[3.9gb], all_pools {[young] 
[167.4mb]->[12.9mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.5gb]/[3.6gb]}
[2014-05-12 04:14:22,729][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242386][36057] duration [6.3s], collections [1]/[6.5s], total 
[6.3s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[224.2mb]->[3.5mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:15:12,192][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242431][36064] duration [5.2s], collections [1]/[5.4s], total 
[5.2s]/[4.7h], memory [3.8gb]->[3.6gb]/[3.9gb], all_pools {[young] 
[234mb]->[2.4mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:15:32,344][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242445][36067] duration [6.3s], collections [1]/[7.1s], total 
[6.3s]/[4.7h], memory [3.6gb]->[3.7gb]/[3.9gb], all_pools {[young] 
[1.2mb]->[34.7mb]/[266.2mb]}{[survivor] [33.2mb]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:15:39,627][INFO ][monitor.jvm  ] [Hannibal King] 
[gc][old][242446][36068] duration [6.7s], collections [1]/[7.2s], total 
[6.7s]/[4.7h], memory [3.7gb]->[3.7gb]/[3.9gb], all_pools {[young] 
[34.7mb]->[45.7mb]/[266.2mb]}{[survivor] [0b]->[0b]/[33.2mb]}{[old] 
[3.6gb]->[3.6gb]/[3.6gb]}
[2014-05-12 04:15:51,547][INFO ][monitor.jvm  ] [Hanniba
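
(A hedged sketch for watching heap pressure between those [gc][old] lines, 
assuming the node still answers on localhost:9200:)

curl 'localhost:9200/_nodes/stats/jvm?pretty'   # heap used/max and GC collection counts per node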

Changing elasticsearch index's shard-count on the next index-rotation

2014-04-23 Thread Abhishek Tiwari
 

I have an ELK (Elasticsearch-Kibana) stack wherein the Elasticsearch node 
has the default shard count of 5. Logs are pushed to it in logstash format 
(logstash-.MM.DD), which, correct me if I am wrong, are indexed 
date-wise.

Since I cannot change the shard count of an existing index without 
reindexing, I want to increase the number of shards to 8 when *the next 
index is created*. I figured that the ES API allows on-the-fly persistent 
changes.

How do I go about doing this?
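
(One approach that seems to fit, an assumption on my part rather than anything 
confirmed in this thread, is an index template, since templates only affect 
indices created after they are put in place:)

curl -XPUT 'localhost:9200/_template/logstash_shards' -d '{
  "template": "logstash-*",
  "order": 1,
  "settings": { "index.number_of_shards": 8 }
}'
# order 1 so it overrides a lower-order logstash template, if one already exists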



Re: Can't get second server to join cluster

2014-04-07 Thread Abhishek Tiwari


> If I start my VM first and then try to have my laptop join the cluster it 
> DOES NOT work.
>
Dave, did you eventually figure out the cause for this?
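
(For reference, a quick membership check from either box, which also covers the 
"how do I tell if they are in the same cluster" question quoted below; it 
assumes ES answers on localhost:9200:)

curl 'localhost:9200/_cluster/health?pretty'   # number_of_nodes should read 2 once both have joined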



On Sunday, 3 March 2013 22:14:42 UTC+5:30, Dave O wrote:
>
> My host 192.168.1.11 is a virtual server running thru virtualbox on a 
> windows laptop.  (The virtual server is centos 6)
>
> My host 192.168.1.112 is a laptop with Centos 5.3 installed on it.
>
> I just found out that if I start Elasticsearch on my LAPTOP server first 
> and then start Elasticsearch on my virtual server second, it WORKS.  I see 
> messages in the log about joining the cluster.
>
> HOWEVER,
>
> If I start my VM first and then try to have my laptop join the cluster it 
> DOES NOT work.  
>
> The iptables files on both servers are exactly identical. 
>
> Could the VM be causing an issue?  Is it okay with 2 different CENTOS 
> versions? 
>
> This is what my iptables file looks like on both servers. Any thoughts?   
> thanks again!
>
> # Firewall configuration written by system-config-securitylevel
> # Manual customization of this file is not recommended.
> *filter
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> :RH-Firewall-1-INPUT - [0:0]
> -A INPUT -j RH-Firewall-1-INPUT
> -A FORWARD -j RH-Firewall-1-INPUT
> -A RH-Firewall-1-INPUT -i lo -j ACCEPT
> -A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT
> -A RH-Firewall-1-INPUT -p 50 -j ACCEPT
> -A RH-Firewall-1-INPUT -p 51 -j ACCEPT
> -A RH-Firewall-1-INPUT -p udp --dport 5353 -d 224.0.0.251 -j ACCEPT
> -A RH-Firewall-1-INPUT -p udp -m udp --dport 631 -j ACCEPT
> -A RH-Firewall-1-INPUT -p tcp -m tcp --dport 631 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j 
> ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 
> 9200:9400 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m udp -p udp --dport 
> 9200:9400 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dport 
> 54328 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dport 
> 54328 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m multiport -p tcp --dport 
> 9300 -j ACCEPT
> -A RH-Firewall-1-INPUT -m state --state NEW -m multiport -p udp --dport 
> 9300 -j ACCEPT
> -A OUTPUT -m state --state NEW -m multiport -p tcp --dport 54328 -j ACCEPT
> -A OUTPUT -m state --state NEW -m multiport -p udp --dport 54328 -j ACCEPT
> -A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited
> COMMIT
>
> On Sunday, March 3, 2013 3:54:06 AM UTC-5, David Pilato wrote:
>>
>> Can you reach each other with curl?
>> curl localhost:9200
>>
>> In term of firewall, open TCP 9300 ports.
>>
>> --
>> David ;-)
>> Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs
>>
>> On 3 March 2013 at 06:12, Dave O wrote:
>>
>> I've started a second server on my same home network and I cannot seem to 
>> get it to join the cluster.
>>
>> I have modified both config files to set the clustername to the same 
>> thing.
>>
>> I have tried multicast and unicast.  
>>
>> I was thinking it was a firewall issue so I added
>>
>> -A INPUT -m state --state NEW -m multiport -p tcp --dport 54328 -j ACCEPT
>> -A INPUT -m state --state NEW -m multiport -p udp --dport 54328 -j ACCEPT
>>
>> Am I missing something here?  How do I tell if they are in the same 
>> cluster?  I have looked in the start logs and see nothing. I also created 
>> a new index with 10 shards and the shards do not get put onto the 2nd 
>> server.  I also submitted a query for cluster status and it only shows 1.
>>
>> Anything else I can try?  
>>
>> my Ip addresses are 
>>
>> first - 192.168.1.112
>> second 192.168.1.11
>>
>> (Both on my internal network).   
>>
>> I can ping, ssh between the 2 boxes no problems. 
>>
>>
>>
>>
>>
