Big first question - what does the heap growth look like over time?
Eviction doesn't completely remove an entry (the key and entry overhead
stay on the heap), so you may still need to address growth via expiration
or periodic destroys to ensure there is sufficient memory for the queries.
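
If the use case allows it, entry expiration will actually destroy entries
rather than just overflowing their values to disk. A rough sketch with gfsh
(the region name, timeout, and region type are placeholders, and the exact
option names should be checked against your gfsh version):

  gfsh> create region --name=exampleRegion --type=PARTITION_OVERFLOW
          --enable-statistics=true --entry-idle-time-expiration=3600
          --entry-idle-time-expiration-action=DESTROY

(one command, wrapped here for readability; it destroys entries that have
been idle for an hour, freeing both heap and disk)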

I would be cautious about running with eviction all the time; it should be
treated more like a safety valve to address spikes in operations. The main
concern is that a big OQL query can pull evicted entries back in from disk
and cause an OutOfMemoryError if it has not been well tested at the
expected limits for normal operations. The second concern is that
performance can be significantly affected by having a large percentage of
data on disk, and a slow or overloaded disk subsystem can cause issues for
the members.

CompressedOops should be turned on; depending on the JVM it may be on by
default, and in most cases it provides better performance (this depends on
heap size).
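
For a 28GB heap you can confirm what the JVM is actually doing with
something like the following (the server name is just a placeholder):

  # Check the effective setting for this JVM and heap size.
  java -Xmx28g -XX:+PrintFlagsFinal -version | grep UseCompressedOops

  # Or force it on explicitly when starting the member.
  gfsh start server --name=server1 --J=-Xmx28g --J=-XX:+UseCompressedOops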

As noted, GC should start prior to eviction, so that the JVM tries to free
up heap before resorting to eviction if at all possible.
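
As a fragment of the server start line, that ordering looks roughly like
this, using the numbers from your mail (adjust from testing):

  # Old-gen CMS cycle starts at 40% of heap...
  --J=-XX:+UseCMSInitiatingOccupancyOnly
  --J=-XX:CMSInitiatingOccupancyFraction=40
  # ...well below the point where GemFire starts evicting.
  --eviction-heap-percentage=50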

Depending on your testing you may want to increase the
critical-heap-percentage to 95%. This should be treated as a hard stop, and
ideally you never want to reach it.
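
Pulling the thresholds together, a hypothetical start line would look
something like this (member name and heap size are placeholders; if your
gfsh version does not take these options, the same percentages can be set
on the resource manager in cache.xml):

  gfsh start server --name=server1 --J=-Xms28g --J=-Xmx28g \
       --eviction-heap-percentage=50 --critical-heap-percentage=95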

Which GC are you selecting, CMS or G1? If you are using CMS, then you will
need to tune the NewSize min and max for the best performance, especially
if you are running lots of queries. If time permits, try testing both, as
some applications do better with one versus the other.
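
For example, the extra JVM arguments for the two runs might look something
like this (sizes are illustrative only; derive yours from testing):

  # CMS: pin the young generation so query garbage is collected predictably.
  --J=-XX:+UseConcMarkSweepGC --J=-XX:+UseCMSInitiatingOccupancyOnly
  --J=-XX:CMSInitiatingOccupancyFraction=40
  --J=-XX:NewSize=2g --J=-XX:MaxNewSize=2g

  # G1 alternative for the comparison run.
  --J=-XX:+UseG1GC --J=-XX:MaxGCPauseMillis=200
  --J=-XX:InitiatingHeapOccupancyPercent=40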


*Vince Ford*
GemFire Sustenance Engineering
Beaverton, OR USA
503-533-3726 (office)
http://www.pivotal.io
Open Source Project Geode https://geode.incubator.apache.org/
<https://network.pivotal.io/products/project-geode>

On Fri, Jun 26, 2015 at 6:24 PM, Real Wes Williams <[email protected]>
wrote:

> I’d like to verify best practices for eviction threshold settings.
> There’s not much written on it. I’m following guidelines at:
>
> https://pubs.vmware.com/vfabric5/index.jsp?topic=/com.vmware.vfabric.gemfire.6.6/managing/heap_use/controlling_heap_use.html
> and hoping that they are still current.
>
> I have about 750GB of data, 1/2 historical on disk and 1/2 active in
> memory in a cluster of servers with 36GB RAM and 28GB heaps (20% overhead).
> The read/write ratio is about 60%/40%, and there are lots of OQL queries, which need
> memory space to run. A small percentage of the queries will hit disk. I'm
> thinking that I want to give Java 50% overhead.  Based on the above, here
> is what I am thinking:
>
> 20% overhead between RAM limit and heap  (36GB RAM with 28GB heap)  -
> why? Java needs memory for its own use outside the heap.
>
> -compressedOops     -why? Heap is < 32GB and this gives me more space.
> Space is more valuable than speed in this instance.
>
> --eviction-heap-percentage=50             - why? I want to start evicting
> around 14GB, which gives the JVM 50% headroom. I found that when I raised
> this to 70% I was getting OOM exceptions with several OQL queries. I'm
> thinking of lowering this even to 40. Tradeoffs?
>
> -CMSInitiatingOccupancyFraction=40   - why? I want the GC to be working
> when eviction starts. This is from point 3 in the above link under "Set
> the JVM's GC Tuning Parameters”.
>
> --critical-heap-percentage=90
>
>
> Would you consider the above a general best-practice approach?
>
> *Wes Williams | Sr. Data Engineer*
> 781.606.0325
> http://pivotal.io/big-data/pivotal-gemfire
>
