I have had experience with this, but _not_ without data loss. The reality
is that some data loss has already occurred. I am not aware of any ES
solution that will allow you to retrieve what data remains, without further
data loss, and restore the index to green status. I have seen reference to
so
I'd like to thank everyone for their help & suggestions.
So the problem was an ES version mismatch. I had a cluster of two ES
servers; one of them was running ES 1.4 and the other 1.3. This is what
prevented the index replicas from being assigned to the second node. I've
since updated the ES ver
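For anyone who hits the same symptom: before digging further, it's worth
confirming that every node reports the same version. A minimal sketch with
the official Python client (the localhost address is just an assumption,
point it at your own cluster):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    # _nodes info includes each node's ES version.
    nodes = es.nodes.info()["nodes"]
    versions = {n["name"]: n["version"] for n in nodes.values()}
    print(versions)

    if len(set(versions.values())) > 1:
        print("Version mismatch - replicas may refuse to allocate.")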
Noticed this happening on a cluster this week which had reached 85% disk
usage, the low disk watermark at which ES stops allocating new shards to
the node.
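If disk is the suspect, _cat/allocation shows per-node disk usage next to
the shard counts; any node at or above 85% (the default
cluster.routing.allocation.disk.watermark.low) will stop receiving new
shards. A sketch along the same lines, again assuming a node on localhost:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    # One row per node, including disk.percent and the shard count.
    print(es.cat.allocation(v=True))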
On Thursday, April 2, 2015 at 3:29:18 PM UTC-6, Mark Walkom wrote:
>
> Take a look in your ES logs, it should have something of use.
>
> You can also try dropping the replicas to 0 for the indices that are
> having these problems, then re-add them one index at a time.
Take a look in your ES logs, it should have something of use.
You can also try dropping the replicas to 0 for the indices that are having
these problems, then re-add them one index at a time.
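In code, that drop-and-re-add suggestion looks roughly like this (a sketch
with the Python client; the index names are hypothetical placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    for index in ["logstash-2015.04.01", "logstash-2015.04.02"]:
        # Drop the stuck replica copies entirely...
        es.indices.put_settings(index=index,
                                body={"number_of_replicas": 0})
        # ...then re-add them and wait for this index to go green
        # before moving on to the next one.
        es.indices.put_settings(index=index,
                                body={"number_of_replicas": 1})
        es.cluster.health(index=index, wait_for_status="green",
                          timeout="10m")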
On 2 April 2015 at 23:42, Darius wrote:
> Hello,
>
> Quick description of the problem: I had a one-server Elasticsearch
> cluster with 0 replicas for all indexes, then added a second ES server
> to the cluster and set all existing indexes to have 1 replica (instead
> of 0).
Hello,
Quick description of the problem: I had a one-server Elasticsearch cluster
with 0 replicas for all indexes, then added a second ES server to the
cluster and set all existing indexes to have 1 replica (instead of 0).
After doing this, in the _cat/shards page I could see tha
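For reference, unassigned copies like these are easy to spot by filtering
_cat/shards; a sketch, once more assuming the Python client and a local
node:

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["http://localhost:9200"])

    # _cat/shards prints one line per shard copy:
    # index, shard, prirep (p/r), state, docs, store, ip, node
    for line in es.cat.shards().splitlines():
        if "UNASSIGNED" in line:
            print(line)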