Re: Experience with Solr and JVM heap sizes over 2 GB
On Wed, Mar 31, 2010 at 11:34 AM, Burton-West, Tom wrote:
> Hello all,
>
> We have been running a configuration in production with 3 solr instances
> under one tomcat with 16GB allocated to the JVM (java -Xmx16384m
> -Xms16384m). I just noticed the warning in the LucidWorks Certified
> Distribution Reference Guide that warns against using more than 2GB (see
> below).
>
> Are other people using systems with over 2GB allocated to the JVM?

Plenty of people. People always want specific numbers for the general case
(how many documents, how large a heap, etc.)... and those specific numbers
are always wrong for a good percentage of the population and their specific
setups and needs :-)

In general, you don't want your heap larger than it needs to be - this
leaves more free RAM for the OS to cache important parts of the Lucene
index files.

> What steps can we take to determine if performance is being adversely
> affected by the large heap size?

If your query response latencies are acceptable, I wouldn't worry about it.
If they normally are, but sometimes aren't, then GC could be the issue. One
way to investigate further is to use the -verbose:gc and -XX:+PrintGC*
options:
http://java.sun.com/javase/technologies/hotspot/vmoptions.jsp

-Yonik
http://www.lucidimagination.com
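A minimal sketch of what enabling that logging might look like on a
Sun/Oracle HotSpot JVM of this era (the gc.log path is only an illustration,
and the exact PrintGC* flags you want may vary by JVM version):

    java -Xms16384m -Xmx16384m \
         -verbose:gc \
         -XX:+PrintGCDetails \
         -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/solr/gc.log \
         ... (rest of the tomcat/solr startup options)

Long "freeze the world" pauses then show up in gc.log as full GC entries
with correspondingly large pause times.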
Re: Experience with Solr and JVM heap sizes over 2 GB
I have used up to 27GB of heap with no issues, both SOLR and (just) Lucene.

-Glen Newton
http://zzzoot.blogspot.com/

On 31 March 2010 11:34, Burton-West, Tom wrote:
> Hello all,
>
> We have been running a configuration in production with 3 solr instances
> under one tomcat with 16GB allocated to the JVM (java -Xmx16384m
> -Xms16384m). I just noticed the warning in the LucidWorks Certified
> Distribution Reference Guide that warns against using more than 2GB (see
> below).
>
> Are other people using systems with over 2GB allocated to the JVM?
>
> What steps can we take to determine if performance is being adversely
> affected by the large heap size?
>
> “The larger the heap the longer it takes to do garbage collection. This can
> mean minor, random pauses or, in extreme cases, “freeze the world” pauses of
> a minute or more. As a practical matter, this can become a serious problem
> for heap sizes that exceed about two gigabytes, even if far more physical
> memory is available.”
> http://www.lucidimagination.com/search/document/CDRG_ch08_8.4.1?q=memory%20caching
>
> Tom Burton-West
>
> JVM stats:
> Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01), 16 processors
> Memory: free 2.3 GB, total 15.3 GB, max 15.3 GB, used 13.1 GB (85.3%)
Experience with Solr and JVM heap sizes over 2 GB
Hello all,

We have been running a configuration in production with 3 solr instances
under one tomcat with 16GB allocated to the JVM (java -Xmx16384m
-Xms16384m). I just noticed the warning in the LucidWorks Certified
Distribution Reference Guide that warns against using more than 2GB (see
below).

Are other people using systems with over 2GB allocated to the JVM?

What steps can we take to determine if performance is being adversely
affected by the large heap size?

“The larger the heap the longer it takes to do garbage collection. This can
mean minor, random pauses or, in extreme cases, “freeze the world” pauses of
a minute or more. As a practical matter, this can become a serious problem
for heap sizes that exceed about two gigabytes, even if far more physical
memory is available.”
http://www.lucidimagination.com/search/document/CDRG_ch08_8.4.1?q=memory%20caching

Tom Burton-West

JVM stats:
Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01), 16 processors
Memory: free 2.3 GB, total 15.3 GB, max 15.3 GB, used 13.1 GB (85.3%)
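For reference, in a setup like this (a single tomcat JVM hosting several
solr webapps), heap settings such as the ones above are typically passed to
tomcat through CATALINA_OPTS, for example in bin/setenv.sh. The snippet
below is only an illustrative sketch under that assumption, not the actual
configuration described in this thread:

    # $CATALINA_HOME/bin/setenv.sh (illustrative sketch)
    # Fixed 16GB heap, matching the java -Xms16384m -Xmx16384m above
    CATALINA_OPTS="$CATALINA_OPTS -Xms16384m -Xmx16384m"
    export CATALINA_OPTS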