Hey,

there should be log entries giving some reason why a shard
suddenly could not be assigned anymore. Can you check whether you can dig
up any information in the logs about why your setup worked for so long and then
suddenly didn't?

Also, when you create an index manually (curl -X PUT node:9200/mytest),
does this index get correctly created and assigned?
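For example (a sketch, assuming the node listens on localhost:9200; the
index name "mytest" is just a placeholder):

```shell
# Create a throwaway test index
curl -X PUT 'localhost:9200/mytest'

# Check whether its shards get assigned; a red status for this index
# would confirm the allocation problem
curl 'localhost:9200/_cluster/health/mytest?pretty'

# Remove the test index afterwards
curl -X DELETE 'localhost:9200/mytest'
```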


--Alex


On Thu, Dec 19, 2013 at 5:39 PM, Honza <[email protected]> wrote:

> Hello,
> thank you for the answer.
> I didn't do any specific configuration or anything nonstandard. But I had
> the problem I mentioned: one replica was set by default and I
> had only one node, which I resolved with a second node. Then I had some
> trouble with the "too many open files" limit, which I solved by raising the
> ulimit.
>
> Then it worked for a few weeks without problems, but then the shards started
> to become unassigned. I found this yesterday; because logstash creates new
> indices every day, I know that shards older than 5 days are OK, but newer
> ones are not (I should mention that
> index.routing.allocation.total_shards_per_node is always -1).
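> One way to double-check that setting (a sketch, assuming the node is
> reachable on localhost:9200; the index name below is just one of the daily
> logstash indices as an example):

```shell
# Show the per-index settings, including any index.routing.allocation.* values
curl 'localhost:9200/logstash-2013.12.10/_settings?pretty'

# Cluster-wide settings may carry allocation overrides as well
curl 'localhost:9200/_cluster/settings?pretty'
```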
>
> So I tried removing the replica and adding nodes, but it made
> things worse.
>
> But the point is that the shards themselves seem to be all right; they are
> just not assigned. So I only need to assign them to the one node, and I think
> it will be OK. Do you think that is possible, please?
>
> Thank you
>
> On Thursday, December 19, 2013 at 4:39:41 PM UTC+1, Alexander Reelsen wrote:
>>
>> Hey,
>>
>> did you do any allocation-specific configuration? Did you disable
>> allocation entirely? Is there anything in your cluster settings or in your
>> configuration? Did you change any configuration before firing up your
>> cluster, or do you remember setting any special option?
>>
>> Can you reproduce this when you set up a new system, so we could
>> reproduce this behaviour locally as well?
>>
>>
>> --Alex
>>
>>
>>
>> On Thu, Dec 19, 2013 at 3:17 PM, Honza <[email protected]> wrote:
>>
>>> Hi there,
>>> a few weeks ago I had a problem with some unassigned shards, where I had
>>> the same number of unassigned and assigned shards, and I solved that thanks
>>> to advice 
>>> (here<https://groups.google.com/forum/#!searchin/elasticsearch/unassigned$20shards/elasticsearch/Y2QQ-G0hICM/weIznt5PkKQJ>)
>>> by adding a new node.
>>> But now another problem appeared. I had unassigned shards even though 2
>>> nodes were running. So I decided to turn off the replica. That made half of
>>> the shards disappear (of course), but some unassigned ones still remained.
>>>
>>> So I tried to add a few new nodes and ended up with 10 nodes. Thanks to
>>> that, some unassigned shards disappeared, but most of them did not. And if
>>> I shut those nodes down and then start only one node again (which should be
>>> enough without replicas), this is the health status:
>>>
>>> {
>>>   "cluster_name" : "elasticsearch",
>>>   "status" : "red",
>>>   "timed_out" : false,
>>>   "number_of_nodes" : 2,
>>>   "number_of_data_nodes" : 1,
>>>   "active_primary_shards" : 22,
>>>   "active_shards" : 22,
>>>   "relocating_shards" : 0,
>>>   "initializing_shards" : 0,
>>>   "unassigned_shards" : 198
>>> }
>>>
>>> So 1/10 were assigned and 9/10 were not. It seems the shards are still
>>> "connected" to the old nodes and I have to reroute them to the only
>>> remaining node.
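>>> (To see where the cluster actually thinks each shard lives, the routing
>>> table in the cluster state lists every shard with its state and the node
>>> id it is bound to; a sketch, assuming localhost:9200:)

```shell
# Dump the cluster state; routing_table / routing_nodes show each shard
# as STARTED or UNASSIGNED together with the node it is allocated to
curl 'localhost:9200/_cluster/state?pretty'
```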
>>>
>>> But I couldn't do it.
>>> I used 
>>> this<http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html>
>>>  webpage
>>> and the following command:
>>>
>>> curl -XPOST 'localhost:9200/_cluster/reroute' -d '{
>>>     "commands" : [ {
>>>           "allocate" : {
>>>               "index" : "logstash-2013.12.10",
>>>               "shard" : 0,
>>>               "node" : "Q6hyVtoPTrSxm_xIGTg3CQ",
>>>               "allow_primary": 1
>>>           }
>>>         }
>>>     ]
>>> }'
>>>
>>> (first without allow_primary, but it threw the error "trying to allocate a
>>> primary shard [logstash-2013.12.10][0]], which is disabled", so I used the
>>> allow_primary flag), but it also throws an exception:
>>>
>>> org.elasticsearch.index.gateway.IndexShardGatewayRecoveryException:
>>> [logstash-2013.12.10][0] shard allocated for local recovery (post api),
>>> should exists, but doesn't
>>>
>>> So I really don't know what I can do, or whether I am taking the right
>>> steps. Can somebody give me some advice, please?
>>>
>>> Thank you
>>>
>>>  --
>>> You received this message because you are subscribed to the Google
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send
>>> an email to [email protected].
>>>
>>> To view this discussion on the web visit
>>> https://groups.google.com/d/msgid/elasticsearch/d0585313-ee35-435b-b530-ff8389a2577c%40googlegroups.com.
>>> For more options, visit https://groups.google.com/groups/opt_out.
>>>
>>
