Hi Lance,

thanks for the hint. I also see a sawtooth; it comes from the Eden
space together with Survivor 0 and 1.
I should switch to a Java 7 release to get rid of this and see how
heap usage looks there. Maybe something else is fixed as well.
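
For anyone who wants to confirm which generation drives the sawtooth,
standard HotSpot GC logging shows it per collection. A minimal example,
assuming the stock Solr example setup with start.jar (the log path is
just an example):

    java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
         -Xloggc:/var/log/solr/gc.log -jar start.jar

Each log line then reports young-generation (Eden plus the survivor
spaces) and old-generation usage separately, so the source of the
sawtooth is easy to spot.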

Regards
Bernd


On 19.09.2012 05:29, Lance Norskog wrote:
> There is a known JVM garbage collection bug that causes this. It has to do 
> with reclaiming weak references, I think in WeakHashMap. Concurrent garbage 
> collection collides with this bug, and the result is that old field cache data 
> is retained after closing the index. The bug is more common with more 
> processors doing GC simultaneously.
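> 
> For reference, this is roughly how WeakHashMap is supposed to behave once
> a key becomes unreachable (a minimal sketch, not the actual Lucene field
> cache code; the class name and sizes are made up):
> 
>     import java.util.Map;
>     import java.util.WeakHashMap;
> 
>     public class WeakMapDemo {
>         public static void main(String[] args) throws InterruptedException {
>             Map<Object, byte[]> cache = new WeakHashMap<Object, byte[]>();
>             Object key = new Object();
>             cache.put(key, new byte[10 * 1024 * 1024]); // ~10MB value
>             key = null;         // drop the only strong reference to the key
>             System.gc();        // only a hint, but usually enough here
>             Thread.sleep(100);  // give reference processing a moment
>             // Normally prints 0; with the bug the entry survives
>             // and keeps the 10MB value reachable.
>             System.out.println("entries after GC: " + cache.size());
>         }
>     }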
> 
> The symptom is that when you run a monitor, the memory usage rises to a 
> peak, drops to a floor, and rises again in the classic sawtooth pattern. 
> When the GC bug happens, the old ceiling becomes the new floor, and the 
> sawtooth runs from that floor up to a new ceiling. The amplitude stays 
> the same: 2G to 5G, over and over, then suddenly 5G to 8G, over and over.
> 
> The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find the 
> bug number. 
> 
> ----- Original Message -----
> | From: "Yonik Seeley" <yo...@lucidworks.com>
> | To: solr-user@lucene.apache.org
> | Sent: Tuesday, September 18, 2012 7:38:41 AM
> | Subject: Re: SOLR memory usage jump in JVM
> | 
> | On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
> | <bernd.fehl...@uni-bielefeld.de> wrote:
> | > I triggered GC in different situations and tried back and forth.
> | > Yes, it reduces the used heap memory, but not by 5GB,
> | > even though the GC from jconsole (or jvisualvm) is a "Full GC".
> | 
> | Whatever "Full GC" means ;-)
> | In the past at least, I've found that I had to hit "Full GC" from
> | jconsole many times in a row until heap usage stabilized at its
> | lowest point.
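> | 
> | jconsole's "Perform GC" button just invokes the gc() operation on the
> | java.lang:type=Memory MBean, which in turn calls System.gc(). A quick
> | in-process sketch of hitting it several times, with a made-up class name:
> | 
> |     import java.lang.management.ManagementFactory;
> |     import java.lang.management.MemoryMXBean;
> | 
> |     public class RepeatedFullGc {
> |         public static void main(String[] args) {
> |             MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
> |             // Repeat the collection so floating garbage left over
> |             // from earlier cycles gets picked up too.
> |             for (int i = 0; i < 5; i++) {
> |                 mem.gc();
> |                 System.out.println("heap used after GC " + (i + 1)
> |                     + ": " + mem.getHeapMemoryUsage().getUsed());
> |             }
> |         }
> |     }
> | 
> | When the printed number stops shrinking, you have hit the real floor.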
> | 
> | You could check fieldCache and fieldValueCache to see how many
> | entries there are before and after the memory bump.
> | If that doesn't show anything different, I guess you may need to
> | resort to a heap dump before and after.
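> | 
> | (On the stock example setup the cache stats are at e.g.
> | http://localhost:8983/solr/admin/mbeans?stats=true; adjust host, port
> | and path for your install, and look at the fieldCache and
> | fieldValueCache entries there.)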
> | 
> | > But since you brought GC into this, there is another interesting
> | > thing.
> | > - I have one slave that has been running for a week and ends up
> | > around 18 to 20GB of heap memory.
> | > - the slave goes offline for replication (no user queries on this
> | > slave)
> | > - the slave gets replicated and starts a new searcher
> | > - the heap memory of the slave is still around 11 to 12GB
> | > - then I initiate a Full GC from jconsole, which brings it down to
> | > about 8GB
> | > - then I call optimize (on an already optimized index) and it drops
> | > to 6.5GB, like a freshly started system
> | >
> | >
> | > I have already looked through Uwe's blog, but he says "...As a rule
> | > of thumb: Don’t use more than 1/4 of your physical memory as heap
> | > space for Java running Lucene/Solr,..."
> | > On my server that would be 8GB for the JVM heap; I can't believe the
> | > system would run for longer than 10 minutes with an 8GB heap.
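> | > (That 1/4 rule would mean 32GB of physical RAM here, i.e. starting
> | > the JVM with something like -Xmx8g.)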
> | 
> | As you probably know, it depends hugely on the use cases and queries:
> | some configurations would be fine with a small amount of heap, while
> | configurations that facet and sort on tons of different fields would
> | not be.
> | 
> | 
> | -Yonik
> | http://lucidworks.com
> | 
> 
