On 9/11/2013 8:54 AM, Kuchekar wrote:
      We are using Solr 4.4 on Linux with 64-bit OpenJDK. We started
Solr with a 40GB heap, but we noticed that QTime is much higher than
for similar queries on Solr 3.5.
Both the 3.5 and 4.4 Solr configurations and schemas are constructed
similarly. Also, during triage we found physical memory utilization
at 95%.

A 40GB heap is *huge*. Unless you are dealing with millions of super-large documents or many tens of millions of smaller documents, there should be no need for a heap that large. Additionally, if you allocate most of your system memory to Java, you will have little or no RAM left for OS disk caching, which will cause major performance issues.
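
For reference, the heap is set with the JVM's -Xms/-Xmx options when
Solr is started, e.g. "java -Xms8g -Xmx8g -jar start.jar" with the
example Jetty setup (8g here is just an illustration, not a
recommendation for your index). If you want to sanity-check what a
given set of flags actually yields, here's a minimal standalone
sketch -- run it with the same options you give Solr:

    // HeapCheck.java - print the heap limits of the JVM it runs in,
    // so you can compare the -Xmx setting against real usage.
    public class HeapCheck {
        public static void main(String[] args) {
            Runtime rt = Runtime.getRuntime();
            long mb = 1024 * 1024;
            // maxMemory(): the -Xmx ceiling for this JVM
            System.out.println("max heap (MB):   " + rt.maxMemory() / mb);
            // totalMemory(): heap actually claimed from the OS so far
            System.out.println("total heap (MB): " + rt.totalMemory() / mb);
            // freeMemory(): free space within the claimed portion
            System.out.println("free heap (MB):  " + rt.freeMemory() / mb);
        }
    }

For a running Solr instance, the Dashboard page of the admin UI shows
the same JVM memory numbers without any extra code.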

For most indexes, memory usage should be lower after an upgrade, but there are exceptions.

I see that you had an earlier question about stored field compression, and that you talked about exporting data from your 3.5 install to index into 4.4, with every field stored, including copyField targets.

If you have a lot of stored data, memory usage for decompression can become a problem. It's usually much better to store minimal information: just enough to display a result grid or list, plus an ID, so that when someone clicks on an individual result, you can retrieve the entire record from another data source, like a database or a filesystem.
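
As an illustration of that pattern, here's a rough SolrJ sketch. The
core URL and the "id"/"title" field names are assumptions for the
example, not anything from your schema:

    import org.apache.solr.client.solrj.SolrQuery;
    import org.apache.solr.client.solrj.impl.HttpSolrServer;
    import org.apache.solr.client.solrj.response.QueryResponse;
    import org.apache.solr.common.SolrDocument;

    public class SlimResults {
        public static void main(String[] args) throws Exception {
            HttpSolrServer solr =
                new HttpSolrServer("http://localhost:8983/solr/collection1");

            SolrQuery q = new SolrQuery("user query here");
            // Request only the fields the result list needs. If the
            // index also *stores* only these fields, this is all the
            // stored data Solr ever has to decompress per hit.
            q.setFields("id", "title");
            q.setRows(20);

            QueryResponse rsp = solr.query(q);
            for (SolrDocument doc : rsp.getResults()) {
                System.out.println(doc.getFieldValue("id") + " : "
                    + doc.getFieldValue("title"));
                // When a user opens one result, look up the full
                // record by id in the real data source (database,
                // filesystem) instead of storing everything in Solr.
            }
        }
    }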

Here's a fairly exhaustive list of potential performance and memory problems with Solr:

http://wiki.apache.org/solr/SolrPerformanceProblems

OpenJDK may be problematic, especially if it's version 6. With Java 7, OpenJDK is actually the reference implementation, so if you are using OpenJDK 7, I would be less concerned. With either version, Oracle Java tends to produce better results.
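
If you're not sure exactly which JVM Solr is running on, "java
-version" from the same binary will tell you, or you can check from
code:

    // JvmInfo.java - print vendor/version details for this JVM.
    public class JvmInfo {
        public static void main(String[] args) {
            System.out.println(System.getProperty("java.vendor"));
            System.out.println(System.getProperty("java.version"));
            System.out.println(System.getProperty("java.vm.name"));
        }
    }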

Thanks,
Shawn
