We have a SolrCloud (7.0.1) setup of 3 Linux VM instances, each with 4 CPUs and
90 GB RAM, with a ZooKeeper (3.4.11) ensemble running on the same machines. We
have 130 cores with an overall size of 45 GB. There is no sharding; almost all
VMs hold the same copy of the data. These nodes sit behind a load balancer.

Merge and commit configs are described in this earlier thread:
http://lucene.472066.n3.nabble.com/SOLR-Cloud-1500-threads-are-in-TIMED-WAITING-status-td4383636.html

JVM Heap Size: 15 GB

Optimize: once daily
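
(For reference, the daily optimize is a plain update-handler call, roughly like
the one below; host, port and core name are illustrative, and we run it against
each core.)

curl "http://localhost:8983/solr/<core>/update?optimize=true"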

GC Config: Default (the flags below; a solr.in.sh sketch follows the list)

-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-XX:+PrintTenuringDistribution
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseConcMarkSweepGC
-XX:+UseGCLogFileRotation
-XX:+UseParNewGC
-XX:-OmitStackTraceInFastThrow
-XX:CMSInitiatingOccupancyFraction=50
-XX:CMSMaxAbortablePrecleanTime=6000
-XX:ConcGCThreads=4
-XX:GCLogFileSize=20M
-XX:MaxTenuringThreshold=8
-XX:NewRatio=3
-XX:NumberOfGCLogFiles=9
-XX:OnOutOfMemoryError=/home/solr/bin/oom_solr.sh 8980 /data/solr/server/logs
-XX:ParallelGCThreads=4
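
These look like the GC defaults applied by the Solr 7 start script; any override
would go through the GC_TUNE variable in solr.in.sh. A minimal sketch, assuming
the stock install layout and simply restating the flags above:

# solr.in.sh (sketch; only the heap and GC tuning parts shown)
SOLR_HEAP="15g"
GC_TUNE="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=50 -XX:+UseCMSInitiatingOccupancyOnly \
  -XX:CMSMaxAbortablePrecleanTime=6000 -XX:MaxTenuringThreshold=8 \
  -XX:NewRatio=3 -XX:ConcGCThreads=4 -XX:ParallelGCThreads=4"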

ISSUE: After running for 8 to 9 hours, JVM heap usage keeps increasing. If we
run an optimize at that point, heap usage drops by 3 to 3.5 GB, but running an
optimize during the day is a problem. On the other hand, if the heap fills up,
an OOM exception is thrown and the cloud crashes.
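
(For anyone who wants to see the same pattern: old-gen growth like this can be
watched with jstat, sampling every 10 seconds; the PID below is illustrative,
and the "O" column is old-gen occupancy climbing between CMS cycles.)

jstat -gcutil <solr-pid> 10000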

I have read that G1GC may give better results, but Solr experts don't seem to
encourage using it.
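
(If we did try it, the switch would just be a GC_TUNE override along the lines
below; the values are untested guesses on our part, not something we run today.)

GC_TUNE="-XX:+UseG1GC -XX:+ParallelRefProcEnabled -XX:+PerfDisableSharedMem \
  -XX:G1HeapRegionSize=8m -XX:MaxGCPauseMillis=250 \
  -XX:InitiatingHeapOccupancyPercent=75"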

What else can we do to resolve this issue?

Thanks,
Doss.
