Today the ES cluster still works as expected.
Still don't know the reason why it failed in the first place or what I did
to fix it.
Maybe a slow cluster restart helped: stopping all nodes and then starting
only one node so it can become master, instead of restarting all at once and
letting them elect a master among themselves.
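That slow-restart procedure could be sketched roughly like this (the host names besides log01 and the init commands are assumptions, not details from this thread):

```shell
# Rough sketch of a slow cluster restart.
# Host names log02/log03 and the service manager are assumptions.
NODES="log01 log02 log03"

# 1. Stop Elasticsearch on every node.
for host in $NODES; do
    ssh "$host" 'sudo service elasticsearch stop'
done

# 2. Start a single node first so it becomes master on its own.
ssh log01 'sudo service elasticsearch start'

# 3. Wait until that node reports itself as elected master (ES 1.x _cat API).
until curl -s "log01:9200/_cat/master" | grep -q log01; do
    sleep 5
done

# 4. Only then start the remaining nodes.
for host in log02 log03; do
    ssh "$host" 'sudo service elasticsearch start'
done
```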
Yesterday I set the replica count to 0 with
curl -XPUT "$(hostname -f):9200/_settings" -d '{"index": {"number_of_replicas": 0}}'
and today the ES cluster assigned the new shards as it should.
I have now set the replica count back to 1 and will see tomorrow whether
that was the problem.
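For the record, restoring the replica count is the same settings call with a value of 1; the cluster health endpoint is a convenient way to watch the reallocation afterwards:

```shell
# Restore one replica per shard on all indices.
curl -XPUT "$(hostname -f):9200/_settings" -d '{"index": {"number_of_replicas": 1}}'

# Watch shard allocation: "yellow" means replicas are still being
# assigned, "green" means every shard is allocated.
curl -s "$(hostname -f):9200/_cluster/health?pretty"
```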
On Tuesday, Apr
Hi Mark,
I forgot to mention it again in this mail, but in the gist I pasted the full
logs from when the ES cluster created the new indices up until I tried to
restart the currently active master.
# head es_cluster.log
[2014-04-14 02:00:01,504][INFO ][cluster.metadata ] [es@log01]
[logstash-2014.0
Check your ES logs; there may be something there.
Regards,
Mark Walkom
Infrastructure Engineer
Campaign Monitor
email: ma...@campaignmonitor.com
web: www.campaignmonitor.com
On 15 April 2014 22:20, Andreas Paul wrote:
> Hello there,
>
> on Monday morning our ES cluster switched to red
Hello there,
on Monday morning our ES cluster switched to red, because it didn't assign
the newly created indices to any ES node; see attached picture.
I tried manually allocating these unassigned shards to a node, but it only
returned the following error:
# curl -XPOST $(hostname -f):
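The command got cut off above; for reference, a manual allocation via the ES 1.x cluster reroute API generally takes this shape (the index name, shard number, and node name here are placeholders, not the exact values from this cluster):

```shell
# Placeholders: substitute the actual unassigned index, shard and node.
curl -XPOST "$(hostname -f):9200/_cluster/reroute" -d '{
  "commands": [
    {
      "allocate": {
        "index": "logstash-2014.04.14",
        "shard": 0,
        "node": "es@log01",
        "allow_primary": true
      }
    }
  ]
}'
```

Note that "allow_primary": true forces allocation even when no node holds a copy of the shard, which can lose data, so it is normally a last resort.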