I am curious about this as well.  I have generally been using about a third
of available memory for the Java heap, so I keep 50GB of 150GB available for
the JVM.  Do you think this should be reduced?

On Wed, May 10, 2017 at 6:36 PM, Toke Eskildsen <t...@kb.dk> wrote:

> S G <sg.online.em...@gmail.com> wrote:
> > *Rough estimates for an initial size:*
> >
> > 50gb index is best served if all of it is in memory.
>
> Assuming you need low latency and/or high throughput, yes. I mention this
> because in many cases the requirements for the number of simultaneous users
> and response times are known (at least roughly) up front, and sometimes
> there is no need to over-provision for performance.
>
> > And JVMs perform the best if their max-heap is between 15-20gb
>
> We stay below 32GB if possible (above roughly that size the JVM loses
> compressed object pointers), but the gist is the same: avoid large heaps.
>
> > So a starting point for num-shards: 50gb/20gb ~ 3
>
> Sorry, I think you have misunderstood something here. The JVM heap is not
> used for caching the index data directly (although it holds derived data).
> What you need is free memory on your machine for OS disk-caching.
>
> The ideal JVM size is extremely dependent on how you index, query and
> adjust the filter-cache (secondarily the other caches, but the filter-cache
> tends to be the large one). A heap of 10GB might very well be fine for
> handling your whole 50GB index. If that is on a 64GB machine, the remaining
> 54GB of RAM (minus the other stuff that is running) ought to ensure a fully
> cached index.
>
> - Toke Eskildsen
>
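The budgeting Toke describes is simple arithmetic; a minimal sketch, where the specific figures (64GB machine, 10GB heap, ~3GB for the OS and other processes) are illustrative assumptions rather than measured values:

```python
# Rough memory-budget sketch for a single Solr node.
# All figures are illustrative assumptions, not measurements.

index_size_gb = 50    # on-disk index we would like fully OS-cached
machine_ram_gb = 64   # total RAM on the box
jvm_heap_gb = 10      # -Xmx for the Solr JVM (holds caches and derived
                      # data, not the raw index files)
other_usage_gb = 3    # OS, other daemons, JVM overhead beyond the heap

# RAM left over goes to the OS page cache, which is what actually
# caches the index files Lucene reads via the filesystem.
page_cache_gb = machine_ram_gb - jvm_heap_gb - other_usage_gb

print(f"Available for OS disk cache: {page_cache_gb} GB")
print(f"Index fully cacheable: {page_cache_gb >= index_size_gb}")
```

The point of the sketch is that growing the JVM heap shrinks the page cache by the same amount, so an oversized heap can make index access slower, not faster.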
