Hey guys,
Is there any way to prevent our Elasticsearch cluster from becoming
unbalanced during a rolling restart?
We are correctly following the advice
here:
http://www.elastic.co/guide/en/elasticsearch/guide/master/_rolling_restarts.html
However, the issue is that when we bring a node
https://github.com/elasticsearch/elasticsearch-definitive-guide/pull/285
On Friday, December 19, 2014 1:01:53 PM UTC-8, Nikolas Everett wrote:
> I believe so.
I believe so.
On Fri, Dec 19, 2014 at 3:39 PM, wrote:
>
>
>
> On Friday, December 19, 2014 12:31:33 PM UTC-8, Nikolas Everett wrote:
>>
>> You have to reenable allocation after the node comes back and wait for
>> the shards to initialize there.
>>
>
> So this means the tutorial is wrong (current
On Friday, December 19, 2014 12:31:33 PM UTC-8, Nikolas Everett wrote:
>
> You have to reenable allocation after the node comes back and wait for the
> shards to initialize there.
>
So this means the tutorial is wrong (current version):
2. Disable allocation
3. stop node
4. ...
5. start node
6
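If the numbered steps above are the tutorial's sequence, the piece under discussion is reenabling allocation after each node comes back and waiting for recovery. A rough per-node sketch of the full loop; all four callables are hypothetical stand-ins (not Elasticsearch APIs), injected so the ordering is the only thing the sketch asserts:

```python
def rolling_restart(nodes, set_allocation, stop_node, start_node, wait_for_green):
    """Per-node rolling-restart loop with the reenable step included.
    The callables are injected stand-ins: set_allocation would PUT
    cluster.routing.allocation.enable to the given value, while the
    others wrap service control and _cluster/health polling."""
    for node in nodes:
        set_allocation("none")  # 2. disable allocation
        stop_node(node)         # 3. stop node
        start_node(node)        # 5. start node (and let it rejoin)
        set_allocation("all")   # reenable allocation before moving on
        wait_for_green()        # wait for shards to initialize/recover
```

The key design point from the thread: the reenable-and-wait happens inside the loop, once per node, not once at the end.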
You have to reenable allocation after the node comes back and wait for the
shards to initialize there.
On Fri, Dec 19, 2014 at 3:23 PM, wrote:
>
> I'm maintaining a small cluster of 9 nodes, and was trying to perform a
> rolling restart as outlined here:
> http://www.elasticse
I'm maintaining a small cluster of 9 nodes, and was trying to perform a
rolling restart as outlined
here:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_rolling_restarts.html#_rolling_restarts
The problem is that after I disable reallocation and restart a single node
I'm not sure if putting the cluster in readonly mode will help. I can't do
that with my system so I can't test it.
I'd be _much_ happier if it only took a minute or two to perform a restart
on each node rather than the hours it can take.
Nik
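However long recovery itself takes, a restart script at least doesn't need a tight client-side polling loop to detect green: `_cluster/health` accepts a `wait_for_status` parameter, so the server blocks until the status is reached or the timeout expires. A minimal URL-building sketch; the host and timeout values are assumptions:

```python
from urllib.parse import urlencode

def health_wait_url(host="http://localhost:9200", status="green", timeout="60s"):
    """Build a _cluster/health URL that blocks server-side until the cluster
    reaches the given status (or the timeout expires). Host and timeout
    defaults are assumptions for illustration."""
    query = urlencode({"wait_for_status": status, "timeout": timeout})
    return f"{host}/_cluster/health?{query}"

print(health_wait_url())
```

A GET on that URL returns once the cluster is green, which is friendlier than hammering the health endpoint from a shell loop.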
On Mon, Nov 10, 2014 at 2:02 PM, lagarutte via elasti
OK, thanks for your explanation.
It's a major concern: ELS is scalable, and when a node goes down we have a
rebalancing process which can take a lot of time.
I find it strange that this point has not been addressed long ago.
I think with a big cluster (>100 nodes) the cluster is permanently
You've followed the right procedure. The problem is that Elasticsearch
doesn't always restore the shards back on the node that they came from. If
the restarted shard and the current master shard have diverged at all it'll
have to sync files _somewhere_ to make sure that the restarted shard gets
al
Reallocation to all nodes is the expected behavior.
Jörg
On Mon, Nov 10, 2014 at 3:55 PM, lagarutte via elasticsearch <
elasticsearch@googlegroups.com> wrote:
> Hi,
> i have one ELS 1.1.2 cluster with 7 nodes.
> 800GB data.
>
> When i shutdown a node for various reasons, ELS automatically rebala
Hi,
I have one ELS 1.1.2 cluster with 7 nodes.
800GB data.
When I shut down a node for various reasons, ELS automatically rebalances the
missing shards onto the other nodes.
To prevent this, I tried the following (specified in the official doc):
"transient" : {
"cluster.routing.allocation.enable" : "n
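The settings body above is cut off mid-value. A presumed completion, as a sketch: the setting name is from the quoted snippet, while the values `"none"` (to disable) and `"all"` (to revert) are an assumption based on the documented enum for `cluster.routing.allocation.enable`:

```python
import json

# Presumed completion of the truncated transient settings body above.
# "none" disables shard allocation; "all" (the default) reenables it.
disable_allocation = {
    "transient": {
        "cluster.routing.allocation.enable": "none"
    }
}
enable_allocation = {
    "transient": {
        "cluster.routing.allocation.enable": "all"
    }
}
print(json.dumps(disable_allocation))
```

Either body would be PUT to `/_cluster/settings`; using `transient` (rather than `persistent`) means the setting does not survive a full cluster restart.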
elasticsearch service restart reports itself
> complete. This pushes the cluster into a red state due to multiple data
> nodes being restarted at once, and can cause performance problems.
>
> Solution:
>
> Perform a rolling restart of each elasticsearch node and wait for the
I've come up with what I think is a safe way to rolling restart an
Elasticsearch cluster using Ansible handlers.
Why is this needed?
Even if you use a serial setting to limit the number of nodes processed
at one time, Ansible will restart elasticsearch nodes and continue
processing as
That is exactly what I'm doing. For some reason the cluster reports as
green even though an entire node is down. The cluster doesn't seem to
notice the node is gone and change to yellow until many seconds later. By
then my rolling restart script has already gotten to the second node and
killed it beca
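One hedged way to guard against that race is to stop trusting `status` alone and check the reported node count first: only accept a green reading once the restarted node has actually rejoined. A sketch, where `get_health` is an injected callable (an assumption, not an Elasticsearch API) returning the parsed JSON of `GET /_cluster/health`:

```python
import time

def wait_for_nodes_then_green(get_health, expected_nodes, timeout=300, poll=1.0):
    """Poll cluster health until the node count is back to expected_nodes
    AND the status is green. Checking number_of_nodes first avoids trusting
    the stale "green" the cluster can report for a few seconds after a node
    dies, before the master notices it is gone."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        health = get_health()
        if health["number_of_nodes"] == expected_nodes and health["status"] == "green":
            return True
        time.sleep(poll)
    return False

# Example with a stubbed health function: the first reading is the stale
# state (node actually down, status still green), the second is the rejoin.
readings = iter([
    {"number_of_nodes": 8, "status": "green"},
    {"number_of_nodes": 9, "status": "green"},
])
print(wait_for_nodes_then_green(lambda: next(readings), 9, timeout=5, poll=0))
```

With this check, the script cannot move on to the second node while the first is still missing, even during the window where health still says green.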
_for_node_to_rejoin()
> cluster_enable_allocation()
> wait_for_cluster_status_green()
> fi
> done
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/modules-cluster.html
>
> /petter
>
>
> On Tue, Apr 1, 2014 at 6:19 PM, Mik
Deeks wrote:
What is the proper way of performing a rolling restart of a cluster? I
currently have my stop script check for the cluster health to be green
before stopping itself. Unfortunately this doesn't appear to be working.
My setup:
ES 1.0.0
3 node cluster w/ 1 replica.
When I perform the ro