If you kill Elasticsearch immediately after creating the index, you
interrupt the process of shard allocation. When you restart Elasticsearch,
it assumes that the shards were already allocated somewhere, so it doesn't
try to assign new, empty shards in their place: doing so could cause data
loss.
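
You can confirm this with the cluster health API: the missing shards show up
as unassigned. For example (localhost:9200 is assumed here; adjust host and
port for your setup):

    # localhost:9200 assumed; adjust host/port for your setup
    curl -XGET 'http://localhost:9200/_cluster/health?level=indices&pretty'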

The index itself isn't corrupt; it is just missing primary shards. You can
force the missing shards to be allocated using the cluster reroute API
(http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/cluster-reroute.html);
you will need to set allow_primary to true.
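
Something along these lines should do it (a sketch only: my_index, shard 0
and node1 below are placeholders for your own index name, shard numbers and
node name, and note that allocating an empty primary discards whatever data
that shard previously held):

    # my_index, shard 0 and node1 are placeholders; repeat the allocate
    # command for each unassigned primary shard
    curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
      "commands" : [ {
        "allocate" : {
          "index" : "my_index",
          "shard" : 0,
          "node" : "node1",
          "allow_primary" : true
        }
      } ]
    }'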

On 26 December 2013 05:36, Tarang Dawer <tarang.da...@gmail.com> wrote:

> I have reliably recreated this many times; it happens while creating an index
> on a single node (default 5 shards). I have set "action.auto_create_index:
> false", "discovery.zen.ping.multicast.enabled: false" & "node.master=true",
> and I am creating indices via the Java API. I kill (kill -9) the Elasticsearch
> process immediately after the index is created. When I restart Elasticsearch,
> out of the 5 primary shards, it shows 3/4 shards in a corrupt state, with a
> "503" status code.
>
>
> On Thu, Dec 26, 2013 at 3:58 AM, Alexander Reelsen <a...@spinscale.de> wrote:
>
>> Hey,
>>
>> can you reliably recreate this? And can you try to create a gist, preferably
>> using Elasticsearch 0.90.9? When you create an index, you usually
>> create 5 shards, so how can 3/4 shards be corrupt? Did you change anything
>> and not use the defaults (are you changing the defaults somewhere else
>> as well)? It would be great if you could provide a reproducible example
>> using a gist, as mentioned at http://www.elasticsearch.org/help
>>
>>
>> --Alex
>>
>>
>> On Wed, Dec 25, 2013 at 4:35 PM, Tarang Dawer <tarang.da...@gmail.com> wrote:
>>
>>> Hi all,
>>> I am facing an issue of corrupt index creation whenever the ES node is
>>> killed just after the index is created. When the node is restarted, the
>>> index shows 3/4 shards corrupt, with 503 status, which never recover, and
>>> as a result my indexing gets stuck. I am doing this on a single node, with
>>> ES version 0.90.1. Please help me out.
>>>
>>> Thanks
>>> Tarang Dawer
>>>
>>
>
