I don't imagine there is anything you are missing. I don't know where to go
from here. Sorry!
On Jul 24, 2014 3:20 PM, "Daniel Schonfeld" wrote:
Hi Nikolas,
We tried the following:
{
  "persistent": {
    "cluster": {
      "routing": {
        "allocation": {
          "balance": {
            "index": "0.05",
            "shard": "0.05",
            "primary": "0.9",
            "threshold": "1.0"
          }
        }
      }
    }
  }
}
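For reference, a minimal sketch of how settings like these are applied via the cluster update-settings API (host and port are assumptions; the dotted-key form is equivalent to the nested JSON above):

```shell
# Push the balance heuristic weights as persistent cluster settings
# (persistent settings survive a full cluster restart).
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "cluster.routing.allocation.balance.index": "0.05",
    "cluster.routing.allocation.balance.shard": "0.05",
    "cluster.routing.allocation.balance.primary": "0.9",
    "cluster.routing.allocation.balance.threshold": "1.0"
  }
}'
```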
Got it. To be honest, I was pretty sure of that, up until this AM, when
that same OS Load spike happened again. But this time, the shards were
allocated more evenly. So I'm not sure that's even the problem any more. I
just posted a new post with more information about the load spike issue.
On Wed, Jul 23, 2014 at 9:21 AM, wrote:
Thanks for that, Nik. I'm okay with evenly spreading all the indices,
rather than just the one I'm having issues with. I'll give your config a
try!
Def no special configurations on that one. We didn't even realize there was
such a thing as allocation configuration up until yesterday (after the
For the 0/6 node are you sure you don't have some configuration preventing
shards from allocating there?
We use this:
http://git.wikimedia.org/blob/operations%2Fpuppet.git/d2e2989bbafc7f7f730efacaa652a05bec3ef541/modules%2Felasticsearch%2Ftemplates%2Felasticsearch.yml.erb#L420
but it's designed
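One quick way to check for a configuration keeping shards off a node is to inspect the live cluster and index settings for allocation filters, and to eyeball the shard layout (host, port, and index name below are assumptions):

```shell
# Any cluster-wide allocation excludes/filters or disabled allocation
# would show up here:
curl -s 'http://localhost:9200/_cluster/settings?pretty'

# Per-index allocation filters live on the index itself:
curl -s 'http://localhost:9200/myindex/_settings?pretty'

# Shard-to-node layout at a glance:
curl -s 'http://localhost:9200/_cat/shards?v'
```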
Hey guys,
We've recently set up a 5-node ES cluster serving our 6-shard / 1-replica
index (we chose 6 back when we only had 3 nodes). We sometimes find a
highly uneven distribution of shards across the nodes. For example, when we
had 3 nodes, 4/6 of the index lived on one node and 2/6 lived on another.