Removing the -XX:+UseCMSInitiatingOccupancyOnly flag extended the time it
took before the JVM started full GCs from about 2 hours to 7 hours in my
cluster, but now it's back to constant full GCs. I'm out of ideas.
Suggestions?
mike
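For context, the flag above is part of the usual CMS trio. A minimal sketch of how it is typically passed to the JVM (e.g. via JAVA_OPTS); the 75% occupancy fraction is an illustrative value, not what this cluster necessarily uses:

```
# CMS collector settings; the occupancy fraction value is illustrative
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly   # the flag being discussed above
```

With the occupancy-only flag set, CMS starts a concurrent cycle strictly at the configured fraction instead of using its own heuristics.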
On Monday, June 23, 2014 10:25:20 AM UTC-4, Michael Hart wrote:
You may want to try upgrading ES - the release notes for 1.2.0 indicate a
change wrt throttling indexing when merges fall behind, and the release
notes after 1.1.0 mention a potential memory-leak fix, among many other
improvements and fixes.
Best I can think of :|
Bruce
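If merges falling behind is the problem, the store-level merge throttle in the 1.x line is the knob to look at. A sketch for elasticsearch.yml; the 40mb rate is an assumed value to adjust for your disks, not a recommendation from this thread:

```
# elasticsearch.yml - store-level merge throttle (ES 1.x); rate is illustrative
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 40mb
```

Raising the rate (or setting the type to none on fast SSDs) lets merges keep up at the cost of more I/O contention with searches.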
Mike - The above sounds like it happened due to machines sending too many
indexing requests and merging being unable to keep pace. The usual suspects
would be insufficient CPU or disk bandwidth.
This doesn't sound related to the memory constraints posted in the original
issue of this thread. Do you see
Java 8 with G1GC, perhaps? It'll have more overhead, but it may be more
consistent wrt pauses.
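A sketch of the JVM options that suggestion implies; the pause-time target is an assumed value for illustration, not something tested on this cluster:

```
# Trying G1 on Java 8; the pause target is illustrative
-XX:+UseG1GC
-XX:MaxGCPauseMillis=200
```

G1 trades some throughput for shorter, more predictable pauses, which is the consistency being suggested here.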
On Wednesday, June 18, 2014 2:02:24 PM UTC-4, Eric Brandes wrote:
I'd just like to chime in with a me too. Is the answer just more
nodes? In my case this is happening every week or so.
On Monday, April 21, 2014 9:04:33 PM UTC-5, Brian Flad wrote:
My dataset is currently 100GB across a few daily indices (~5-6GB and 15
shards each). Data nodes are 12 CPU,
We have a 4 node (2 client only, 2 data/master nodes with 25G memory
allocated to ES and 12 cores each) ES cluster, storing an index with 16
shards, ~200GB and 1 replica.
Recently, while running scan/scroll requests to dump data and other faceting
requests, the nodes disconnected from each other and we