anyone?

On Wednesday, 19 November 2014 13:32:37 UTC+1, Serg Fillipenko wrote:
>
> We have contact profiles (20+ fields, containing nested documents) 
> indexed, and their social profiles (10+ fields) indexed as child documents 
> of the contact profiles.
> We run complex bool match queries, delete-by-query, delete-children-by-query, 
> and faceting queries on contact profiles.
> index rate: 14.31 op/s
> remove-by-query rate: 13.41 op/s (this value is so high because we delete 
> all child docs before indexing the parent, and then index the children 
> again)
> search rate: 2.53 op/s
> remove by ids: 0.15 op/s
>
> We first hit this problem under ES 1.2, right after we started indexing 
> and deleting child documents (with no search requests yet). On ES 1.4 we 
> see the same issue.
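The delete-children-then-reindex cycle described above can be sketched with the ES 1.x delete-by-query API. This is only an illustration: the index name `contacts`, types `contact`/`social_profile`, and ID `42` are assumptions, not names from the thread.

```shell
# Assumed names for illustration: index "contacts", parent type "contact",
# child type "social_profile", parent ID 42.

# 1) Delete all child docs of one parent. In ES 1.x the _parent field is
#    indexed as "type#id", and DELETE /{index}/{type}/_query is still available
#    (it was removed from core in ES 2.0).
curl -XDELETE 'http://localhost:9200/contacts/social_profile/_query' -d '{
  "query": {
    "term": { "_parent": "contact#42" }
  }
}'

# 2) Reindex the parent document.
curl -XPUT 'http://localhost:9200/contacts/contact/42' -d '{ "name": "..." }'

# 3) Reindex each child, routed to its parent.
curl -XPOST 'http://localhost:9200/contacts/social_profile?parent=42' -d '{ "network": "..." }'
```

Note that in 1.x, parent/child joins keep an ID map in heap memory, so a heavy delete/reindex churn like this touches exactly the structures that compete for heap.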
>
>
>> What sort of data is it, what sort of queries are you running, and how 
>> often are they run?
>>
>> On 19 November 2014 17:52, tetlika <tet...@gmail.com> wrote:
>>
>>> hi,
>>>
>>> we have 6 servers and 14 shards in the cluster; the index size is 26GB, 
>>> and with 1 replica the total size is 52GB. ES v1.4.0, java version "1.7.0_65"
>>>
>>> we use servers with 14GB of RAM (m3.xlarge), and the heap is set to 7GB
>>>
>>> around a week ago we started facing the following issue:
>>>
>>> random servers in the cluster hit the heap size limit about once every 
>>> day or two (java.lang.OutOfMemoryError: Java heap space in the log), and 
>>> the cluster fails, turning red or yellow
>>>
>>> we tried adding more servers to the cluster, even 8, but then it's only 
>>> a matter of time until we hit the problem again; so it seems that no 
>>> matter how many servers are in the cluster, it will still hit the limit 
>>> eventually
>>>
>>> before we started facing the problem we were running smoothly with 3 
>>> servers
>>> we also set indices.fielddata.cache.size: 40%, but it didn't help
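Alongside the fielddata cache cap mentioned above, ES 1.4 has a fielddata circuit breaker that rejects a request before it loads more fielddata than the heap can hold. A hedged sketch, assuming a node on localhost; the 30% value is an example, not a recommendation from the thread:

```shell
# Static node setting (elasticsearch.yml, requires a restart):
#   indices.fielddata.cache.size: 40%
#
# The fielddata circuit breaker (indices.breaker.fielddata.limit, default
# 60% of heap in 1.4) is dynamic, so oversized fielddata loads can be made
# to fail fast with an exception instead of taking the node down with OOM:
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "persistent": {
    "indices.breaker.fielddata.limit": "30%"
  }
}'
```

The cache size setting only evicts after loading; the breaker is what stops a single request from blowing past the heap in the first place.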
>>>
>>> also, there are possible workarounds to decrease heap usage:
>>>
>>> 1) reboot one of the servers; heap then drops below 70% and the cluster 
>>> is OK for some time
>>>
>>> or
>>>
>>> 2) decrease the number of replicas to 0, then back to 1
>>>
>>> but I'd rather not rely on those workarounds
>>>
>>> how can it run out of heap when the whole index fits into RAM?
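On the question above: the index on disk is served through the OS page cache, which is separate from the JVM heap; what fills the heap is typically fielddata (and, with parent/child, the ID maps). The stats APIs can show which it is. A sketch, assuming a node reachable on localhost:

```shell
# Per-node fielddata memory usage, broken down by field:
curl 'http://localhost:9200/_nodes/stats/indices/fielddata?fields=*&pretty'

# Compact per-field view via the cat API (available in ES 1.x):
curl 'http://localhost:9200/_cat/fielddata?v'
```

If one field (or the `_parent` ID data) dominates the output, that, rather than the index size itself, is what is exhausting the 7GB heap.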
>>>
>>> thanks in advance for any help
>>>
>>> -- 
>>> You received this message because you are subscribed to the Google 
>>> Groups "elasticsearch" group.
>>> To unsubscribe from this group and stop receiving emails from it, send 
>>> an email to elasticsearc...@googlegroups.com.
>>> To view this discussion on the web visit 
>>> https://groups.google.com/d/msgid/elasticsearch/2ae23017-fde7-4b10-b31b-39076b079f10%40googlegroups.com.
>>> For more options, visit https://groups.google.com/d/optout.
>>>
>>
>>

