Re: elasticsearch deployment advice

2014-11-27 Thread Nikolas Everett
We have 128GB on some nodes and run 30GB heaps. Lucene memory-maps its files, so the extra memory is put to good use by the filesystem cache. The 32GB heap limit comes from the JVM compressing object pointers: it can't compress them once the heap passes roughly 32GB, so every object reference expands from 4 to 8 bytes and the same data takes more heap.
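A quick way to see that cutoff on your own JVM is to ask HotSpot for the flag directly. A minimal sketch (assumes a HotSpot JDK, since com.sun.management is not part of the Java SE spec; the class name is just for illustration):

    // Query the running JVM for its compressed-oops state.
    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class CompressedOopsCheck {
        public static void main(String[] args) throws Exception {
            HotSpotDiagnosticMXBean hotspot =
                    ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
            // Prints the UseCompressedOops option and its current value:
            // true under a 30g heap, false once -Xmx goes well past ~32g,
            // because HotSpot then disables the compression automatically.
            System.out.println(hotspot.getVMOption("UseCompressedOops"));
        }
    }

Run it once with -Xmx30g and once with -Xmx40g to watch the flag flip.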

Re: elasticsearch deployment advice

2014-11-27 Thread Denis J. Cirulis
20-30GB per index per day. I've read in the setup guide that a heap larger than 32GB is useless, which is why I'm asking.

Re: elasticsearch deployment advice

2014-11-27 Thread Mark Walkom
1 - Depends on your use case. 2 - Yes, see the fielddata circuit breaker: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules-fielddata.html#circuit-breaker
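For reference, the breaker limit can also be tightened at runtime through the cluster settings API. A minimal sketch, assuming the 1.4 setting name indices.breaker.fielddata.limit, a node on localhost:9200, and 40% as an example value (the shipped default is higher); adjust all three to taste:

    // Lower the fielddata circuit breaker limit via the cluster settings API.
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class SetFielddataBreaker {
        public static void main(String[] args) throws Exception {
            String body = "{\"persistent\":{\"indices.breaker.fielddata.limit\":\"40%\"}}";
            HttpURLConnection conn = (HttpURLConnection)
                    new URL("http://localhost:9200/_cluster/settings").openConnection();
            conn.setRequestMethod("PUT");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // 200 means the setting was accepted cluster-wide.
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

Queries whose fielddata would push past that limit are then aborted instead of taking the node down with an OutOfMemoryError.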

elasticsearch deployment advice

2014-11-27 Thread Denis J. Cirulis
Hello, I need to plan a new Elasticsearch deployment: a single node with 128GB RAM, for log indexing (around 50 million records a day). 1. What's the best heap size for Elasticsearch 1.4 (running Oracle Java 7u72)? 2. Is there some kind of query throttling technique to stop deep drill-downs?