This is the cluster setup I have: 5 x m3.2xlarge instances (240 GB general
purpose SSD-backed EBS volume for each instance). I have allocated 22 GB of
the 30 GB of RAM for Elasticsearch (with the mlockall option set). Initially
I had 5 x m3.xlarge instances but they were crashing because of OOM errors,
so I ended up doubling the instance size.
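For what it's worth, the heap setup described above would look roughly like this on a 1.x-era Elasticsearch node (the values are from the post; the exact config mechanism on your install may differ):

```shell
# Sketch of the heap settings described above: give the JVM 22 GB of the
# instance's 30 GB RAM via the standard ES_HEAP_SIZE environment variable.
export ES_HEAP_SIZE=22g

# And in elasticsearch.yml, to lock the heap into memory (the "mlockall
# option" mentioned above):
#   bootstrap.mlockall: true
```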
> [...] needs to keep all the values in memory in order
> to sort them, causing memory problems. In general, Lucene is not effective
> at deep pagination. Use scan/scroll:
>
>
> http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html
>
>
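The scan/scroll pattern from the linked page looks roughly like this; the index name "logs" and the localhost endpoint are assumptions for illustration, not from the post:

```shell
# Open a scroll context instead of paginating with from/size; the scan
# search type (Elasticsearch 1.x) skips scoring and sorting entirely.
curl -XGET 'http://localhost:9200/logs/_search?search_type=scan&scroll=1m' -d '{
  "size": 1000,
  "query": { "match_all": {} }
}'

# The response contains a _scroll_id; pass it back to fetch each next
# batch until no more hits are returned.
curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m' -d '<scroll_id>'
```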
> Large numbers of different timestamps, with
> millisecond resolution, are a real burden for searching on inverted
> indices. A good discretization strategy for indexing is to reduce the total
> amount of values in such a field to a few hundred or thousand. For
> timestamps, this means indexing times [...]
> optimize it.
>
> Cheers,
>
> Ivan
>
>
> On Fri, Aug 22, 2014 at 12:28 AM, Narendra Yadala <
> narendra.yad...@gmail.com> wrote:
>
>> I have a cluster of size 240 GB including replica and it has 5 nodes in
>> it. I allocated 5 GB RAM (total 5*5 GB) to each node and started the
>> cluster.
I have a cluster of size 240 GB including replica and it has 5 nodes in it.
I allocated 5 GB RAM (total 5*5 GB) to each node and started the cluster.
When I start continuously firing queries at the cluster, the GC starts
kicking in and eventually a node goes down because of an OutOfMemory
exception.
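The discretization Ivan suggests can be sketched as follows: round each millisecond-resolution epoch timestamp down to a coarser bucket before indexing, so the field holds far fewer distinct values. The one-minute bucket size here is an assumption for illustration:

```shell
# Round a millisecond epoch timestamp down to the start of its bucket
# (one minute = 60000 ms), collapsing many distinct values into one term.
bucket_ms=60000
discretize_millis() {
  echo $(( $1 - $1 % bucket_ms ))
}

# Two events 250 ms apart land in the same one-minute bucket:
discretize_millis 1408694881250   # prints 1408694880000
discretize_millis 1408694881500   # prints 1408694880000
```

Index the bucketed value in a separate field and query on that; the raw millisecond timestamp can still be stored alongside it for display.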