Some more info: after one week the servers have the following status:

Master (indexing only)
+ looks good; OldGen usage is about 6g of 10g
+ has meanwhile loaded the index twice from scratch via DIH
+ has added new documents to the existing index via DIH
+ has optimized and replicated
+ no full GC within one week

Slave A (search only) Online
- looks bad; OldGen usage is 9.5g of 10g
+ was replicated
- several full GC

Slave B (search only) Backup
+ looks good; OldGen usage is about 4g of 10g
+ was replicated
+ no full GC within one week

Conclusion:
+ DIH, processing, indexing, replication are fine
- the search is the problem: it "eats up" OldGen heap which can't be
  cleaned up even by a full GC. Maybe a memory leak or whatever...
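To back up the "can't be cleaned up by full GC" observation with hard data, it helps to run the slaves with GC logging and an automatic heap dump on OOM. A minimal sketch of HotSpot startup flags (JDK 6 era; the log and dump paths are placeholders of my choosing) that could be added to the Solr start script:

```shell
# Sketch: HotSpot flags to log every GC and capture a heap dump on OOM.
# Paths /var/log/solr/gc.log and /var/dumps are example locations.
java -Xmx10g \
     -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -Xloggc:/var/log/solr/gc.log \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/var/dumps \
     -jar start.jar
```

If OldGen occupancy after each full GC keeps rising in gc.log, that points to a genuine leak (or ever-growing caches) rather than normal collection lag.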

Due to this, Solr 3.1 can _NOT_ be recommended as a high-availability,
high-search-load search engine because of unclear heap problems
caused by the search. The search is used "out of the box", so there
are no self-produced programming errors involved.

Are any tools available for Java to analyze this
(something like valgrind or Electric Fence for C++)?

Is it possible to analyze a heap dump produced with jvisualvm?
If so, with which tools?
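Heap dumps in the .hprof format can be opened in jvisualvm, jhat, or Eclipse MAT (MAT in particular can compute dominator trees and leak suspects). Besides jmap ("jmap -dump:live,format=b,file=heap.hprof <pid>"), a dump can also be triggered from inside the JVM via the HotSpot-specific diagnostic MXBean. A minimal sketch (class name HeapDumper is mine, not from Solr):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Sketch: write an .hprof heap dump of the running JVM.
// Works on HotSpot JVMs; the output file must not already exist.
public class HeapDumper {

    public static void dump(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // liveOnly = true triggers a full GC first and dumps only
        // reachable objects -- exactly what you want for leak hunting.
        bean.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        dump("solr-heap.hprof", true);
        System.out.println("wrote solr-heap.hprof");
    }
}
```

The resulting file can then be loaded into any of the tools above; comparing two dumps taken some replications apart should show which object classes are accumulating in OldGen.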


Bernd


On 30.05.2011 15:51, Bernd Fehling wrote:
Dear list,
after switching from FAST to Solr I am getting the first _real_ data.
This includes search times, memory consumption, performance of Solr, ...

What I recognized so far is that something eats up my OldGen and
I assume it might be replication.

Current Data:
one master - indexing only
two slaves - search only
over 28 million docs
single instance
single core
index size 140g
current heap size 16g

After startup I have about 4g of heap in use, about 3.5g of it OldGen.
After one week and some replications OldGen is filled close to 100 percent.
If I start an optimize under this condition I get an OOM of the heap.
So my assumption is that something is eating up my heap.

Any idea how to trace this down?

May be a memory leak somewhere?

Best regards
Bernd
