Same symptom for me: neither OOM nor full disk space, only an ES 
restart... 

On Monday, 17 March 2014 at 14:00:11 UTC+1, bizzorama wrote:
>
> No, when things are running everything is OK; indexes break during a 
> restart/powerdown.
> On 17-03-2014 13:11, "Clinton Gormley" <cl...@traveljury.com> 
> wrote:
>
>> Are you sure you didn't run out of disk space or file handles at some 
>> stage, or have an OOM exception?
>>
>>
>> On 16 March 2014 16:37, bizzorama <bizz...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> It turned out that it was not a problem with the ES version (we tested on 
>>> both 0.90.10 and 0.90.9) but just an ES bug... 
>>> after restarting the PC, or even just the service, indices got broken... we 
>>> found out that this was caused by missing mappings.
>>> We observed that broken indices had their mappings corrupted (only some 
>>> default fields remained).
>>> You can check this by calling: http://es_address:9200/indexName/_mapping
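>>>
>>> (As an aside, a minimal sketch of that check -- Python with the requests 
>>> library; the ES address and index name below are placeholders. A healthy 
>>> index lists your fields under "properties", a broken one shows only a few 
>>> defaults:)
>>>
>>>   import requests
>>>
>>>   ES = "http://localhost:9200"          # placeholder ES address
>>>   index = "logstash-2014.02.07"         # placeholder daily index name
>>>
>>>   # GET /<index>/_mapping returns the mapping for every type in the index
>>>   resp = requests.get("%s/%s/_mapping" % (ES, index))
>>>   resp.raise_for_status()
>>>   mappings = resp.json()
>>>
>>>   # Print the fields defined for each type; a broken index shows almost
>>>   # nothing here. (0.90 may wrap the response in the index name.)
>>>   for type_name, mapping in mappings.get(index, mappings).items():
>>>       print(type_name, sorted(mapping.get("properties", {}).keys()))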
>>>
>>> Our mappings were dynamic (not set manually; just inferred by ES as the 
>>> records came in).
>>>
>>> The solution was to add a static mapping file like the one described 
>>> here:
>>>
>>> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/mapping-conf-mappings.html
>>> (we added the default one).
>>>
>>> I just copied the mappings from a healthy index, made some changes, turned 
>>> them into a mapping file, and copied it to the ES server.
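>>>
>>> Roughly, what that amounted to (a sketch only -- Python with the requests 
>>> library; the healthy index name, the type name, and the config path are 
>>> placeholders, see the linked page for the exact location ES expects):
>>>
>>>   import json
>>>   import requests
>>>
>>>   ES = "http://localhost:9200"             # placeholder ES address
>>>   healthy_index = "logstash-2014.03.10"    # placeholder healthy index
>>>   type_name = "logs"                       # placeholder document type
>>>
>>>   # Pull the mapping of that type from the healthy index
>>>   resp = requests.get("%s/%s/%s/_mapping" % (ES, healthy_index, type_name))
>>>   resp.raise_for_status()
>>>   mapping = resp.json()
>>>
>>>   # Dump it to a file; that file (after our manual edits) then went under
>>>   # the ES config directory (e.g. config/mappings/_default/logs.json, per
>>>   # the page above) so indices start with a static mapping instead of
>>>   # relying purely on dynamic mapping.
>>>   with open("%s.json" % type_name, "w") as f:
>>>       json.dump(mapping, f, indent=2)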
>>>
>>> Now everything works just fine.
>>>
>>> Regards,
>>> Karol
>>>
>>>
>>> On Sunday, 16 March 2014 at 14:54:00 UTC+1, Mac Jouz wrote:
>>>
>>>>
>>>> Hi Bizzorama,
>>>>
>>>> I had a similar problem with the same configuration as the one you described.
>>>> ES had been running since the 11th of February and was fed every day at 
>>>> 6:00 AM by 2 LS instances.
>>>> Everything worked well (Kibana reports were correct and there was no data 
>>>> loss) until I restarted ES yesterday :-(
>>>> Among 30 indices (1 per day), 4 were unusable and the data in the Kibana 
>>>> reports for the related period was unavailable (same org.elasticsearch.search.
>>>> facet.FacetPhaseExecutionException: Facet[0]: (key) field [@timestamp] 
>>>> not found).
>>>>
>>>> Can you confirm that when you downgraded ES to 0.90.9 you got your data 
>>>> back (i.e. you were able to see your data in the Kibana reports)?
>>>>
>>>> I will try to downgrade the ES version as you suggested and will let you 
>>>> know more.
>>>>
>>>> Thanks for your answer 
>>>>
>>>>
>>>>
>>>> Sorry for the delay.
>>>>
>>>> It looks like you were right: after downgrading ES to 0.90.9 I couldn't 
>>>> reproduce the issue in the same way.
>>>>
>>>> Unfortunately, I found some other problems, and one looks like a 
>>>> blocker...
>>>>
>>>> After a full ES cluster powerdown, ES just started replying 'no mapping 
>>>> for ... <name of field>' to each request.
>>>>
>>>> On Thursday, 20 February 2014 at 16:42:20 UTC+1, Binh Ly wrote:
>>>>>
>>>>> Your error logs seem to indicate some kind of version mismatch. Is it 
>>>>> possible for you to test LS 1.3.2 against ES 0.90.9, take a sample of 
>>>>> raw logs from those 3 days, and run them through to see if those 3 days 
>>>>> work in Kibana? The reason I ask is that LS 1.3.2 (specifically the 
>>>>> elasticsearch output) was built using the binaries from ES 0.90.9.
>>>>>
>>>>> Thanks.
>>>>>
>>>>
>>>> On Tuesday, 11 February 2014 at 13:18:01 UTC+1, bizzorama wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I've noticed a very disturbing Elasticsearch behaviour...
>>>>> My environment is:
>>>>>
>>>>> 1 Logstash (1.3.2) (+ Redis to store some data) + 1 Elasticsearch 
>>>>> (0.90.10) + Kibana,
>>>>>
>>>>> which processes about 7 000 000 records per day.
>>>>> Everything worked fine in our test environment until we ran some 
>>>>> tests for a longer period (about 15 days).
>>>>>
>>>>> After that time, Kibana was unable to show any data.
>>>>> I did some investigation, and it looks like some of the indices (for 3 
>>>>> days, to be exact) are corrupted.
>>>>> Now every query from Kibana that uses those corrupted indices fails.
>>>>>
>>>>> Errors read from elasticsearch logs:
>>>>>
>>>>> - org.elasticsearch.search.facet.FacetPhaseExecutionException: 
>>>>> Facet[terms]: failed to find mapping for Name (... and a couple of other 
>>>>> columns)
>>>>> - org.elasticsearch.search.facet.FacetPhaseExecutionException: Facet 
>>>>> [0]: (key) field [@timestamp] not found
>>>>>
>>>>> ... generally, all queries end with those errors.
>>>>>
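>>>>> (For context, a rough example of the kind of facet request Kibana sends, 
>>>>> which fails like this once the @timestamp mapping is gone -- Python with 
>>>>> the requests library; the index name is a placeholder:)
>>>>>
>>>>>   import json
>>>>>   import requests
>>>>>
>>>>>   body = {
>>>>>       "size": 0,
>>>>>       "facets": {
>>>>>           # keyed on @timestamp, like the Kibana histogram panel
>>>>>           "0": {"date_histogram": {"field": "@timestamp", "interval": "1h"}}
>>>>>       }
>>>>>   }
>>>>>   resp = requests.post("http://localhost:9200/logstash-2014.02.07/_search",
>>>>>                        data=json.dumps(body))
>>>>>   print(resp.status_code, resp.json())
>>>>>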
>>>>> When elasticsearch is started we find something like this:
>>>>>
>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name] 
>>>>> Message not fully read (request) for [243445] and action 
>>>>> [cluster/nodeIndexCreated], resetting
>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name] 
>>>>> Message not fully read (request) for [249943] and action 
>>>>> [cluster/nodeIndexCreated], resetting
>>>>> [2014-02-07 15:02:08,147][WARN ][transport.netty          ] [Name] 
>>>>> Message not fully read (request) for [246740] and action 
>>>>> [cluster/nodeIndexCreated], resetting
>>>>>
>>>>> And a few observations:
>>>>>
>>>>> 1. When using the elasticsearch-head plugin and querying records 
>>>>> 'manually', I can see only the Elasticsearch meta columns (_index, _type, 
>>>>> _id, _score).
>>>>>     But when I 'randomly' pick some records and review their raw JSON, 
>>>>> they look OK.
>>>>>
>>>>> 2. When I tried to process the same data again, everything was OK.
>>>>>
>>>>> Is it possible that some corrupted data found its way into Elasticsearch 
>>>>> and now the whole index is broken?
>>>>> Can this be fixed? Reindexed, or something like that?
>>>>> This data is very important and can't be lost...
>>>>>
>>>>> Best Regards,
>>>>> Karol
>>>>>
>>>>>
>>
>

