On 12/22/2016 9:29 AM, Prateek Jain J wrote:
> We are using solr 4.8.1 and getting OOM Error in one of the test
> environments. Given below are the details:
There are exactly two ways to deal with OOM:

1) Increase the Java heap.
2) Make the program use less memory.

Any other action (such as changing garbage collection parameters) will
NOT fix problems with running out of memory.

> 1. OS - Linux, 64 bit, 32 GB RAM
>
> 2. Solr - 4.8.1, 8 GB allocated as the Java heap. Installed as a
> service. Thread size (-Xss of 256K, -XX:+UseParallelOldGC).
>
> 3. Java - 1.7 update 95, 64-bit
>
> It is happening when one of the Solr instances is trying to come up.
> It has around 100GB of data, lying on some network storage like NFS.
> Now, the interesting part is that the Eclipse MAT plugin shows that
> FieldCache has taken more than 3.5GB of the 8GB heap. This environment
> is set up for stress testing Solr, so even while a new Solr instance
> is starting, there is load on it.

The size of the allocations that go into the FieldCache is determined
mostly by the number of documents in your index. A large amount of
memory can be allocated in the FieldCache if you sort and/or facet on
many fields and don't have docValues enabled on those fields. If you
have ten million documents in your index and don't use docValues, then
each field you sort on will add ten million entries to the FieldCache.
The same goes for facet fields when using the default facet.method
setting.

This page discusses ways you may be able to reduce Solr's heap
requirements:

https://wiki.apache.org/solr/SolrPerformanceProblems#Java_Heap

Although the GC tuning you use will not affect OOM, I would strongly
recommend *not* using the parallel collector. All the info below is
about GC tuning, not OOM problems:

https://wiki.apache.org/solr/ShawnHeisey

Solr 5.x and 6.x have GC tuning built in, and the settings are very
similar to the CMS settings you can find on my wiki page.

Thanks,
Shawn
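P.P.S. For anyone wondering what enabling docValues looks like: it is a
per-field attribute in schema.xml. The field name and type below are
hypothetical examples, and a full reindex is required after changing it:

```xml
<!-- hypothetical sort/facet field: docValues="true" moves sorting and
     faceting data out of the heap-resident FieldCache and into on-disk
     structures that the OS can memory-map -->
<field name="category" type="string" indexed="true" stored="true"
       docValues="true"/>
```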
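P.S. A rough back-of-the-envelope sketch of the FieldCache math above.
The 8-bytes-per-entry figure is an assumption that only fits numeric
sort fields; string sort/facet fields also hold ordinals and term
bytes, so treat the result as a floor, not an exact number:

```python
# Hedged lower-bound estimate of FieldCache memory for sort/facet fields
# without docValues. Assumes one 8-byte entry per document per field
# (roughly what an uninverted numeric field costs); string fields cost more.
def fieldcache_floor_bytes(num_docs, num_fields, bytes_per_entry=8):
    return num_docs * num_fields * bytes_per_entry

# Ten million docs with five sort/facet fields is already hundreds of
# megabytes of heap, before any string term data is counted.
print(fieldcache_floor_bytes(10_000_000, 5) / (1024 ** 2), "MiB (floor)")
```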