Re: SOLR memory usage jump in JVM

2012-09-20 Thread Bernd Fehling
That is the problem with a JVM: it is a virtual machine.
Ask 10 experts about good JVM settings and you get 15 answers. Maybe that is the
trade-off for the flexibility of JVMs. There is always a right setting for any application
running on a JVM, you just have to find it.
How about a Solr wiki page about JVM settings for Solr?
The good, the bad and the ugly?
With a very short description of why to set each one (or not) and what it will affect?


By the way, while looking into upgrading to JDK 7, the release notes say in the
known-issues section, about the PorterStemmer bug:
...The recommended workaround is to specify -XX:-UseLoopPredicate on the
command line.
Is this still not fixed, or is it a won't-fix?
So this could be a candidate for an entry about JVM settings on the wiki page.
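
(For reference, the workaround is just an extra JVM option on the Solr start command. A minimal sketch, assuming the stock example/start.jar layout, with the heap size only as a placeholder:

   java -Xmx4g -XX:-UseLoopPredicate -jar start.jar

The same flag can be added wherever your servlet container's JVM options are set.)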

Regards
Bernd



Am 19.09.2012 18:14, schrieb Rozdev29:
 I have used this setting to reduce gc pauses with CMS - java 6 u23
 
 -XX:+ParallelRefProcEnabled
 
 With this setting, jvm does gc of weakrefs with multiple threads and pauses 
 are low.
 
 Please use this option only when you have multiple cores.
 
 For me, CMS gives better results
 
 Sent from my iPhone
 
 On Sep 19, 2012, at 8:50 AM, Walter Underwood wun...@wunderwood.org wrote:
 
 Ooh, that is a nasty one. Is this JDK 7 only or also in 6?

 It looks like the -XX:ConcGCThreads=1 option is a workaround, is that 
 right?

 We've had some 1.6 JVMs behave in the same way that bug describes, but I 
 haven't verified it is because of finalizer problems.

 wunder

 On Sep 19, 2012, at 5:43 AM, Erick Erickson wrote:

 Two in one morning

 The JVM bug I'm familiar with is here:
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034

 FWIW,
 Erick

 On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey s...@elyograg.org wrote:
 On 9/18/2012 9:29 PM, Lance Norskog wrote:

 There is a known JVM garbage collection bug that causes this. It has to do
 with reclaiming Weak references, I think in WeakHashMap. Concurrent 
 garbage
 collection collides with this bug and the result is that old field cache
 data is retained after closing the index. The bug is more common with more
 processors doing GC simultaneously.

 The symptom is that when you run a monitor, the memory usage rises to a
 peak, drops to a floor, rises again in the classic sawtooth pattern. When
 the GC bug happens, the ceiling becomes the floor, and the sawtooth goes
 from the new floor to a new ceiling. The two sizes are the same. So, 2G to
 5G, over and over, suddenly it is 5G to 8G, over and over.

 The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find
 the bug number.


 I think I ran into this when I was looking at memory usage on my SolrJ
 indexing program.  Under Java6, memory usage in jconsole (remotely via JMX)
 was fairly constant long-term (aside from the unavoidable sawtooth).  When 
 I
 ran it under Java 7u3, it would continually grow, slowly ... but if I
 measured it with jstat on the Linux commandline rather than remotely via
 jconsole under windows, memory usage was consistent over time, just like
 under java6 with the remote jconsole.  After looking at heap dumps and
 scratching my head a lot, I finally concluded that I did not have a memory
 leak, there was a problem with remote JMX monitoring in java7.  Glad to 
 hear
 I was not imagining it, and that it's fixed now.

 Thanks,
 Shawn


 --
 Walter Underwood
 wun...@wunderwood.org



 



Re: SOLR memory usage jump in JVM

2012-09-20 Thread Robert Muir
On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:

 By the way while looking for upgrading to JDK7, the release notes say under 
 section
 known issues about the PorterStemmer bug:
 ...The recommended workaround is to specify -XX:-UseLoopPredicate on the 
 command line.
 Is this still not fixed, or won't fix?

How in the world can we fix it?

Oracle released a broken java version: there's nothing we can do about
that. Go take it up with them.

-- 
lucidworks.com


Re: SOLR memory usage jump in JVM

2012-09-20 Thread Erick Erickson
Here's a wonderful writeup about GC and memory in Solr/Lucene:

http://searchhub.org/dev/2011/03/27/garbage-collection-bootcamp-1-0/

Best
Erick

On Thu, Sep 20, 2012 at 5:49 AM, Robert Muir rcm...@gmail.com wrote:
 On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:

 By the way while looking for upgrading to JDK7, the release notes say under 
 section
 known issues about the PorterStemmer bug:
 ...The recommended workaround is to specify -XX:-UseLoopPredicate on the 
 command line.
 Is this still not fixed, or won't fix?

 How in the world can we fix it?

 Oracle released a broken java version: there's nothing we can do about
 that. Go take it up with them.

 --
 lucidworks.com


Re: SOLR memory usage jump in JVM

2012-09-20 Thread Bernd Fehling
Hi Erick,

thanks for the link.
Now if we could see the images in that article that would be great :-)


By the way, one cause of the memory jumps was traced to a killer search from
a user.
The interesting part is that the verbose gc.log showed a hiccup in the GC:
during a GC run, right after CMS-concurrent-sweep-start but before
CMS-concurrent-sweep, a new GC is launched which interferes with the
running one.
Are there any switches to serialize the GC here?
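
(For context: the switches mentioned elsewhere in this thread do not serialize CMS against young collections outright, they only limit concurrent GC work, roughly along the lines of

   -XX:+UseConcMarkSweepGC -XX:ConcGCThreads=1

Whether that removes this particular hiccup is untested here.)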


Regards
Bernd


Am 20.09.2012 13:51, schrieb Erick Erickson:
 Here's a wonderful writeup about GC and memory in Solr/Lucene:
 
 http://searchhub.org/dev/2011/03/27/garbage-collection-bootcamp-1-0/
 
 Best
 Erick
 
 On Thu, Sep 20, 2012 at 5:49 AM, Robert Muir rcm...@gmail.com wrote:
 On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:

 By the way while looking for upgrading to JDK7, the release notes say under 
 section
 known issues about the PorterStemmer bug:
 ...The recommended workaround is to specify -XX:-UseLoopPredicate on the 
 command line.
 Is this still not fixed, or won't fix?

 How in the world can we fix it?

 Oracle released a broken java version: there's nothing we can do about
 that. Go take it up with them.

 --
 lucidworks.com


Re: SOLR memory usage jump in JVM

2012-09-20 Thread Erick Erickson
Yeah, I sent a note to the web folks there about the images.

I'll leave the rest to people who really _understand_ all that stuff...

On Thu, Sep 20, 2012 at 8:31 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
 Hi Erik,

 thanks for the link.
 Now if we could see the images in that article that would be great :-)


 By the way, one cause for the memory jumps was located as killer search 
 from a user.
 The interesting part is that the verbose gc.log showed a hiccup in the GC.
 Which means that during a GC run right after CMS-concurrent-sweep-start but 
 before
 CMS-concurrent-sweep there is a new GC launched which interferes with the 
 running one.
 Any switches for this to serialize GC?


 Regards
 Bernd


 Am 20.09.2012 13:51, schrieb Erick Erickson:
 Here's a wonderful writeup about GC and memory in Solr/Lucene:

 http://searchhub.org/dev/2011/03/27/garbage-collection-bootcamp-1-0/

 Best
 Erick

 On Thu, Sep 20, 2012 at 5:49 AM, Robert Muir rcm...@gmail.com wrote:
 On Thu, Sep 20, 2012 at 3:09 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:

 By the way while looking for upgrading to JDK7, the release notes say 
 under section
 known issues about the PorterStemmer bug:
 ...The recommended workaround is to specify -XX:-UseLoopPredicate on the 
 command line.
 Is this still not fixed, or won't fix?

 How in the world can we fix it?

 Oracle released a broken java version: there's nothing we can do about
 that. Go take it up with them.

 --
 lucidworks.com


Re: SOLR memory usage jump in JVM

2012-09-19 Thread Bernd Fehling
Hi Lance,

thanks for this hint. A sawtooth is something I also see; it is
coming from the Eden space together with Survivor 0 and 1.
I should switch to a Java 7 release to get rid of this and see how
heap usage looks there. Maybe something else is also fixed.

Regards
Bernd


Am 19.09.2012 05:29, schrieb Lance Norskog:
 There is a known JVM garbage collection bug that causes this. It has to do 
 with reclaiming Weak references, I think in WeakHashMap. Concurrent garbage 
 collection collides with this bug and the result is that old field cache data 
 is retained after closing the index. The bug is more common with more 
 processors doing GC simultaneously.
 
 The symptom is that when you run a monitor, the memory usage rises to a peak, 
 drops to a floor, rises again in the classic sawtooth pattern. When the GC 
 bug happens, the ceiling becomes the floor, and the sawtooth goes from the 
 new floor to a new ceiling. The two sizes are the same. So, 2G to 5G, over 
 and over, suddenly it is 5G to 8G, over and over.
 
 The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find the 
 bug number. 
 
 - Original Message -
 | From: Yonik Seeley yo...@lucidworks.com
 | To: solr-user@lucene.apache.org
 | Sent: Tuesday, September 18, 2012 7:38:41 AM
 | Subject: Re: SOLR memory usage jump in JVM
 | 
 | On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
 | bernd.fehl...@uni-bielefeld.de wrote:
 |  I used GC in different situations and tried back and forth.
 |  Yes, it reduces the used heap memory, but not by 5GB.
 |  Even so that GC from jconsole (or jvisualvm) is Full GC.
 | 
 | Whatever Full GC means ;-)
 | In the past at least, I've found that I had to hit Full GC from
 | jconsole many times in a row until heap usage stabilizes at it's
 | lowest point.
 | 
 | You could check fieldCache and fieldValueCache to see how many
 | entries
 | there are before and after the memory bump.
 | If that doesn't show anything different, I guess you may need to
 | resort to a heap dump before and after.
 | 
 |  But while you bring GC into this, there is another interesting
 |  thing.
 |  - I have one slave running for a week which ends up around 18 to
 |  20GB of heap memory.
 |  - the slave goes offline for replication (no user queries on this
 |  slave)
 |  - the slave gets replicated and starts a new searcher
 |  - the heap memory of the slave is still around 11 to 12GB
 |  - then I initiate a Full GC from jconsole which brings it down to
 |  about 8GB
 |  - then I call optimize (on a optimized index) and it then drops to
 |  6.5GB like a fresh started system
 | 
 | 
 |  I have already looked through Uwe's blog but he says ...As a rule
 |  of thumb: Don’t use more
 |  than 1/4 of your physical memory as heap space for Java running
 |  Lucene/Solr,...
 |  That would be on my server 8GB for JVM heap, can't believe that the
 |  system
 |  will run for longer than 10 minutes with 8GB heap.
 | 
 | As you probably know, it depends hugely on the usecases/queries: some
 | configurations would be fine with a small amount of heap, other
 | configurations that facet and sort on tons of different fields would
 | not be.
 | 
 | 
 | -Yonik
 | http://lucidworks.com
 | 
 


Re: SOLR memory usage jump in JVM

2012-09-19 Thread Bernd Fehling
Hi Otis,

because I see this on my slave without replication, there is no index file
change.
I also have tons of logged data to dig into :-)

I took dumps at different stages: freshly installed, after the 5GB jump, after the
system was hanging right after replication, ...
The last one was interesting, when the system was stuck right after replication.
In jvisualvm you click on MBeans and the MBeans browser shows the top-level MBeans.
You click on /solr and nothing happens, so even the MBeans browser gets no
access.
Pulling a dump and turning it inside out showed that in this situation there are
still file handles to old index files, even though the files on disk are already
deleted. Doing a Full GC and optimize several times didn't help.
The only solution was to stop and restart Solr and the JVM.
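
(A quick way to confirm such deleted-but-still-open index files from outside the JVM, as a sketch assuming a Linux host and <pid> being the Solr JVM's process id:

   lsof -p <pid> | grep -i deleted

lsof lists the file handles the process still holds, and files already unlinked from disk are marked as deleted in its output.)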

Regards
Bernd


Am 19.09.2012 06:22, schrieb Otis Gospodnetic:
 Hi Bernd,
 
 On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi Otis,

 not really a problem because I have plenty of memory ;-)
 -Xmx25g -Xms25g -Xmn6g
 
 Good.
 
 I'm just interested into this.
 Can you report similar jumps within JVM with your monitoring at sematext?
 
 Yes. More importantly, SPM will show you a bunch of other Solr and
 system metrics, so you can correlate them to your JVM heap jumps.  For
 example, you may see the number of index file change at that time.  Or
 higher request rate.  Or cache size growth.  Or ...
 
 Actually I would assume to see jumps of 0.5GB or even 1GB, but 5GB?
 And what is the cause, a cache?
 
 Might be.  Please see above.  Of course, you could also try running a
 profiler and analyzing the heap dump though you may need a lot of
 RAM on your workstation for doing that. :)
 
 And is there another option in JVM to give memory jumps a size?
 
 Doesn't -verbose:gc show heap jumps?
 
 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html
 
 
 
 Am 18.09.2012 08:58, schrieb Otis Gospodnetic:
 Hi Bernd,

 But is this really (causing) a problem?  What -Xmx are you using?

 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html


 On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory 
 consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a 
 jump in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of 
 about 5GB?

 Regards
 Bernd

-- 
*
Bernd Fehling                    Universitätsbibliothek Bielefeld
Dipl.-Inform. (FH)               LibTec - Bibliothekstechnologie
Universitätsstr. 25              und Wissensmanagement
33615 Bielefeld
Tel. +49 521 106-4060   bernd.fehling(at)uni-bielefeld.de

BASE - Bielefeld Academic Search Engine - www.base-search.net
*


Re: SOLR memory usage jump in JVM

2012-09-19 Thread Lance Norskog
The sawtooth curve is normal. It means that memory use slowly goes up until it
triggers a garbage collection pass, which frees the memory very quickly.

You can also turn off parallel garbage collection. This is slower, but will not
trigger the Sun bug (if that really is the problem).
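
(The switch for that would be roughly the following, as a sketch and not verified against this particular bug:

   -XX:+UseSerialGC

It selects the single-threaded collector for both young and old generations instead of the parallel/concurrent ones.)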

- Original Message -
| From: Bernd Fehling bernd.fehl...@uni-bielefeld.de
| To: solr-user@lucene.apache.org
| Sent: Tuesday, September 18, 2012 11:29:56 PM
| Subject: Re: SOLR memory usage jump in JVM
| 
| Hi Lance,
| 
| thanks for this hint. Something I also see, a sawtooth. This is
| coming from Eden space together with Survivor 0 and 1.
| I should switch to Java 7 release to get rid of this and see how
| heap usage looks there. May be something else is also fixed.
| 
| Regards
| Bernd
| 
| 
| Am 19.09.2012 05:29, schrieb Lance Norskog:
|  There is a known JVM garbage collection bug that causes this. It
|  has to do with reclaiming Weak references, I think in WeakHashMap.
|  Concurrent garbage collection collides with this bug and the
|  result is that old field cache data is retained after closing the
|  index. The bug is more common with more processors doing GC
|  simultaneously.
|  
|  The symptom is that when you run a monitor, the memory usage rises
|  to a peak, drops to a floor, rises again in the classic sawtooth
|  pattern. When the GC bug happens, the ceiling becomes the floor,
|  and the sawtooth goes from the new floor to a new ceiling. The two
|  sizes are the same. So, 2G to 5G, over and over, suddenly it is 5G
|  to 8G, over and over.
|  
|  The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot
|  find the bug number.
|  
|  - Original Message -
|  | From: Yonik Seeley yo...@lucidworks.com
|  | To: solr-user@lucene.apache.org
|  | Sent: Tuesday, September 18, 2012 7:38:41 AM
|  | Subject: Re: SOLR memory usage jump in JVM
|  | 
|  | On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
|  | bernd.fehl...@uni-bielefeld.de wrote:
|  |  I used GC in different situations and tried back and forth.
|  |  Yes, it reduces the used heap memory, but not by 5GB.
|  |  Even so that GC from jconsole (or jvisualvm) is Full GC.
|  | 
|  | Whatever Full GC means ;-)
|  | In the past at least, I've found that I had to hit Full GC from
|  | jconsole many times in a row until heap usage stabilizes at it's
|  | lowest point.
|  | 
|  | You could check fieldCache and fieldValueCache to see how many
|  | entries
|  | there are before and after the memory bump.
|  | If that doesn't show anything different, I guess you may need to
|  | resort to a heap dump before and after.
|  | 
|  |  But while you bring GC into this, there is another interesting
|  |  thing.
|  |  - I have one slave running for a week which ends up around 18
|  |  to
|  |  20GB of heap memory.
|  |  - the slave goes offline for replication (no user queries on
|  |  this
|  |  slave)
|  |  - the slave gets replicated and starts a new searcher
|  |  - the heap memory of the slave is still around 11 to 12GB
|  |  - then I initiate a Full GC from jconsole which brings it down
|  |  to
|  |  about 8GB
|  |  - then I call optimize (on a optimized index) and it then drops
|  |  to
|  |  6.5GB like a fresh started system
|  | 
|  | 
|  |  I have already looked through Uwe's blog but he says ...As a
|  |  rule
|  |  of thumb: Don’t use more
|  |  than 1/4 of your physical memory as heap space for Java running
|  |  Lucene/Solr,...
|  |  That would be on my server 8GB for JVM heap, can't believe that
|  |  the
|  |  system
|  |  will run for longer than 10 minutes with 8GB heap.
|  | 
|  | As you probably know, it depends hugely on the usecases/queries:
|  | some
|  | configurations would be fine with a small amount of heap, other
|  | configurations that facet and sort on tons of different fields
|  | would
|  | not be.
|  | 
|  | 
|  | -Yonik
|  | http://lucidworks.com
|  | 
|  
| 


Re: SOLR memory usage jump in JVM

2012-09-19 Thread Shawn Heisey

On 9/18/2012 9:29 PM, Lance Norskog wrote:

There is a known JVM garbage collection bug that causes this. It has to do with 
reclaiming Weak references, I think in WeakHashMap. Concurrent garbage 
collection collides with this bug and the result is that old field cache data 
is retained after closing the index. The bug is more common with more 
processors doing GC simultaneously.

The symptom is that when you run a monitor, the memory usage rises to a peak, 
drops to a floor, rises again in the classic sawtooth pattern. When the GC bug 
happens, the ceiling becomes the floor, and the sawtooth goes from the new 
floor to a new ceiling. The two sizes are the same. So, 2G to 5G, over and 
over, suddenly it is 5G to 8G, over and over.

The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find the 
bug number.


I think I ran into this when I was looking at memory usage on my SolrJ 
indexing program.  Under Java 6, memory usage in jconsole (remotely via 
JMX) was fairly constant long-term (aside from the unavoidable 
sawtooth).  When I ran it under Java 7u3, it would continually grow, 
slowly ... but if I measured it with jstat on the Linux command line 
rather than remotely via jconsole under Windows, memory usage was 
consistent over time, just like under Java 6 with the remote jconsole.  
After looking at heap dumps and scratching my head a lot, I finally 
concluded that I did not have a memory leak; instead, there was a problem 
with remote JMX monitoring in Java 7.  Glad to hear I was not imagining it, 
and that it's fixed now.
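
(For reference, the kind of jstat invocation meant here is along these lines, as a sketch where <pid> is the JVM's process id and 5000 is the sample interval in milliseconds:

   jstat -gcutil <pid> 5000

It prints per-generation heap utilization locally, without going through remote JMX.)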


Thanks,
Shawn



Re: SOLR memory usage jump in JVM

2012-09-19 Thread Erick Erickson
Two in one morning

The JVM bug I'm familiar with is here:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034

FWIW,
Erick

On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey s...@elyograg.org wrote:
 On 9/18/2012 9:29 PM, Lance Norskog wrote:

 There is a known JVM garbage collection bug that causes this. It has to do
 with reclaiming Weak references, I think in WeakHashMap. Concurrent garbage
 collection collides with this bug and the result is that old field cache
 data is retained after closing the index. The bug is more common with more
 processors doing GC simultaneously.

 The symptom is that when you run a monitor, the memory usage rises to a
 peak, drops to a floor, rises again in the classic sawtooth pattern. When
 the GC bug happens, the ceiling becomes the floor, and the sawtooth goes
 from the new floor to a new ceiling. The two sizes are the same. So, 2G to
 5G, over and over, suddenly it is 5G to 8G, over and over.

 The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find
 the bug number.


 I think I ran into this when I was looking at memory usage on my SolrJ
 indexing program.  Under Java6, memory usage in jconsole (remotely via JMX)
 was fairly constant long-term (aside from the unavoidable sawtooth).  When I
 ran it under Java 7u3, it would continually grow, slowly ... but if I
 measured it with jstat on the Linux commandline rather than remotely via
 jconsole under windows, memory usage was consistent over time, just like
 under java6 with the remote jconsole.  After looking at heap dumps and
 scratching my head a lot, I finally concluded that I did not have a memory
 leak, there was a problem with remote JMX monitoring in java7.  Glad to hear
 I was not imagining it, and that it's fixed now.

 Thanks,
 Shawn



Re: SOLR memory usage jump in JVM

2012-09-19 Thread Walter Underwood
Ooh, that is a nasty one. Is this JDK 7 only or also in 6?

It looks like the -XX:ConcGCThreads=1 option is a workaround, is that right?

We've had some 1.6 JVMs behave in the same way that bug describes, but I 
haven't verified that it is because of finalizer problems.

wunder

On Sep 19, 2012, at 5:43 AM, Erick Erickson wrote:

 Two in one morning
 
 The JVM bug I'm familiar with is here:
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034
 
 FWIW,
 Erick
 
 On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey s...@elyograg.org wrote:
 On 9/18/2012 9:29 PM, Lance Norskog wrote:
 
 There is a known JVM garbage collection bug that causes this. It has to do
 with reclaiming Weak references, I think in WeakHashMap. Concurrent garbage
 collection collides with this bug and the result is that old field cache
 data is retained after closing the index. The bug is more common with more
 processors doing GC simultaneously.
 
 The symptom is that when you run a monitor, the memory usage rises to a
 peak, drops to a floor, rises again in the classic sawtooth pattern. When
 the GC bug happens, the ceiling becomes the floor, and the sawtooth goes
 from the new floor to a new ceiling. The two sizes are the same. So, 2G to
 5G, over and over, suddenly it is 5G to 8G, over and over.
 
 The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find
 the bug number.
 
 
 I think I ran into this when I was looking at memory usage on my SolrJ
 indexing program.  Under Java6, memory usage in jconsole (remotely via JMX)
 was fairly constant long-term (aside from the unavoidable sawtooth).  When I
 ran it under Java 7u3, it would continually grow, slowly ... but if I
 measured it with jstat on the Linux commandline rather than remotely via
 jconsole under windows, memory usage was consistent over time, just like
 under java6 with the remote jconsole.  After looking at heap dumps and
 scratching my head a lot, I finally concluded that I did not have a memory
 leak, there was a problem with remote JMX monitoring in java7.  Glad to hear
 I was not imagining it, and that it's fixed now.
 
 Thanks,
 Shawn
 

--
Walter Underwood
wun...@wunderwood.org





Re: SOLR memory usage jump in JVM

2012-09-19 Thread Rozdev29
I have used this setting to reduce GC pauses with CMS (Java 6u23):

-XX:+ParallelRefProcEnabled

With this setting, the JVM collects weak references with multiple threads and pauses are 
low.

Please use this option only when you have multiple cores.

For me, CMS gives better results.
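
(Put together with CMS itself, the relevant options would look roughly like this, purely as a sketch; the rest of the GC and heap setup is site-specific:

   -XX:+UseConcMarkSweepGC -XX:+ParallelRefProcEnabled

Reference processing then runs multi-threaded instead of single-threaded.)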

Sent from my iPhone

On Sep 19, 2012, at 8:50 AM, Walter Underwood wun...@wunderwood.org wrote:

 Ooh, that is a nasty one. Is this JDK 7 only or also in 6?
 
 It looks like the -XX:ConcGCThreads=1 option is a workaround, is that right?
 
 We've had some 1.6 JVMs behave in the same way that bug describes, but I 
 haven't verified it is because of finalizer problems.
 
 wunder
 
 On Sep 19, 2012, at 5:43 AM, Erick Erickson wrote:
 
 Two in one morning
 
 The JVM bug I'm familiar with is here:
 http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=7112034
 
 FWIW,
 Erick
 
 On Wed, Sep 19, 2012 at 8:20 AM, Shawn Heisey s...@elyograg.org wrote:
 On 9/18/2012 9:29 PM, Lance Norskog wrote:
 
 There is a known JVM garbage collection bug that causes this. It has to do
 with reclaiming Weak references, I think in WeakHashMap. Concurrent garbage
 collection collides with this bug and the result is that old field cache
 data is retained after closing the index. The bug is more common with more
 processors doing GC simultaneously.
 
 The symptom is that when you run a monitor, the memory usage rises to a
 peak, drops to a floor, rises again in the classic sawtooth pattern. When
 the GC bug happens, the ceiling becomes the floor, and the sawtooth goes
 from the new floor to a new ceiling. The two sizes are the same. So, 2G to
 5G, over and over, suddenly it is 5G to 8G, over and over.
 
 The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find
 the bug number.
 
 
 I think I ran into this when I was looking at memory usage on my SolrJ
 indexing program.  Under Java6, memory usage in jconsole (remotely via JMX)
 was fairly constant long-term (aside from the unavoidable sawtooth).  When I
 ran it under Java 7u3, it would continually grow, slowly ... but if I
 measured it with jstat on the Linux commandline rather than remotely via
 jconsole under windows, memory usage was consistent over time, just like
 under java6 with the remote jconsole.  After looking at heap dumps and
 scratching my head a lot, I finally concluded that I did not have a memory
 leak, there was a problem with remote JMX monitoring in java7.  Glad to hear
 I was not imagining it, and that it's fixed now.
 
 Thanks,
 Shawn
 
 
 --
 Walter Underwood
 wun...@wunderwood.org
 
 
 


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Otis Gospodnetic
Hi Bernd,

But is this really (causing) a problem?  What -Xmx are you using?

Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html


On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a jump 
 in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of about 
 5GB?

 Regards
 Bernd


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Bernd Fehling
Hi Otis,

not really a problem because I have plenty of memory ;-)
-Xmx25g -Xms25g -Xmn6g

I'm just interested in this.
Can you report similar JVM jumps from your monitoring at Sematext?

Actually I would expect to see jumps of 0.5GB or even 1GB, but 5GB?
And what is the cause, a cache?

And is there another JVM option that determines the size of such memory jumps?

Regards
Bernd

Am 18.09.2012 08:58, schrieb Otis Gospodnetic:
 Hi Bernd,
 
 But is this really (causing) a problem?  What -Xmx are you using?
 
 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html
 
 
 On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a jump 
 in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of about 
 5GB?

 Regards
 Bernd


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Erick Erickson
What happens if you attach jconsole (it should ship with your SDK) and force a GC?
Does the extra 5GB go away?

I'm wondering if you get a couple of warming searchers going simultaneously
and happened to measure after that.

Uwe has an interesting blog post about memory; he recommends using as
little heap as possible, see:
http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
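
(If attaching jconsole is awkward, a command-line way to force a full GC as a side effect, as a sketch with <pid> being the Solr JVM:

   jmap -histo:live <pid>

The :live option makes the JVM run a full collection before it counts live objects.)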

Best
Erick

On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
 Hi Otis,

 not really a problem because I have plenty of memory ;-)
 -Xmx25g -Xms25g -Xmn6g

 I'm just interested into this.
 Can you report similar jumps within JVM with your monitoring at sematext?

 Actually I would assume to see jumps of 0.5GB or even 1GB, but 5GB?
 And what is the cause, a cache?

 And is there another option in JVM to give memory jumps a size?

 Regards
 Bernd

 Am 18.09.2012 08:58, schrieb Otis Gospodnetic:
 Hi Bernd,

 But is this really (causing) a problem?  What -Xmx are you using?

 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html


 On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a 
 jump in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of about 
 5GB?

 Regards
 Bernd


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Bernd Fehling
I triggered GC in different situations and tried back and forth.
Yes, it reduces the used heap memory, but not by 5GB,
even though the GC from jconsole (or jvisualvm) is a Full GC.

But while you bring GC into this, there is another interesting thing:
- I have one slave running for a week which ends up around 18 to 20GB of heap 
memory.
- the slave goes offline for replication (no user queries on this slave)
- the slave gets replicated and starts a new searcher
- the heap memory of the slave is still around 11 to 12GB
- then I initiate a Full GC from jconsole, which brings it down to about 8GB
- then I call optimize (on an already optimized index) and it drops to 6.5GB, like a 
freshly started system


I have already looked through Uwe's blog, but he says ...As a rule of thumb: 
Don’t use more
than 1/4 of your physical memory as heap space for Java running Lucene/Solr,...
On my server that would be 8GB for the JVM heap; I can't believe the system
would run for longer than 10 minutes with an 8GB heap.


Next tests will be with all autowarming off. This is for the Full GC/optimize 
problem after replication.
But for the 5GB jump I have no idea. Maybe changing the cache sizes?


Regards
Bernd


Am 18.09.2012 13:06, schrieb Erick Erickson:
 What happens if you attach jconsole (should ship with your SDK) and force a 
 GC?
 Does the extra 5G go away?
 
 I'm wondering if you get a couple of warming searchers going simultaneously
 and happened to measure after that.
 
 Uwe has an interesting blog about memory, he recommends using as
 little as possible,
 see:
 http://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html
 
 Best
 Erick
 
 On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi Otis,

 not really a problem because I have plenty of memory ;-)
 -Xmx25g -Xms25g -Xmn6g

 I'm just interested into this.
 Can you report similar jumps within JVM with your monitoring at sematext?

 Actually I would assume to see jumps of 0.5GB or even 1GB, but 5GB?
 And what is the cause, a cache?

 And is there another option in JVM to give memory jumps a size?

 Regards
 Bernd

 Am 18.09.2012 08:58, schrieb Otis Gospodnetic:
 Hi Bernd,

 But is this really (causing) a problem?  What -Xmx are you using?

 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html


 On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory 
 consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a 
 jump in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of 
 about 5GB?

 Regards
 Bernd


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Yonik Seeley
On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
 I used GC in different situations and tried back and forth.
 Yes, it reduces the used heap memory, but not by 5GB.
 Even so that GC from jconsole (or jvisualvm) is Full GC.

Whatever "Full GC" means ;-)
In the past at least, I've found that I had to hit Full GC from
jconsole many times in a row until heap usage stabilizes at its
lowest point.

You could check fieldCache and fieldValueCache to see how many entries
there are before and after the memory bump.
If that doesn't show anything different, I guess you may need to
resort to a heap dump before and after.
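
(For the heap-dump route, a sketch using the stock JDK tools; <pid> is the Solr JVM and the file names are only placeholders:

   jmap -dump:format=b,file=before-bump.hprof <pid>
   jmap -dump:format=b,file=after-bump.hprof <pid>

The two dumps can then be compared in a tool such as Eclipse MAT or jhat.)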

 But while you bring GC into this, there is another interesting thing.
 - I have one slave running for a week which ends up around 18 to 20GB of heap 
 memory.
 - the slave goes offline for replication (no user queries on this slave)
 - the slave gets replicated and starts a new searcher
 - the heap memory of the slave is still around 11 to 12GB
 - then I initiate a Full GC from jconsole which brings it down to about 8GB
 - then I call optimize (on a optimized index) and it then drops to 6.5GB like 
 a fresh started system


 I have already looked through Uwe's blog but he says ...As a rule of thumb: 
 Don’t use more
 than 1/4 of your physical memory as heap space for Java running 
 Lucene/Solr,...
 That would be on my server 8GB for JVM heap, can't believe that the system
 will run for longer than 10 minutes with 8GB heap.

As you probably know, it depends hugely on the usecases/queries: some
configurations would be fine with a small amount of heap, other
configurations that facet and sort on tons of different fields would
not be.


-Yonik
http://lucidworks.com


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Lance Norskog
There is a known JVM garbage collection bug that causes this. It has to do with 
reclaiming Weak references, I think in WeakHashMap. Concurrent garbage 
collection collides with this bug and the result is that old field cache data 
is retained after closing the index. The bug is more common with more 
processors doing GC simultaneously.

The symptom is that when you run a monitor, the memory usage rises to a peak, 
drops to a floor, rises again in the classic sawtooth pattern. When the GC bug 
happens, the ceiling becomes the floor, and the sawtooth goes from the new 
floor to a new ceiling. The two sizes are the same. So, 2G to 5G, over and 
over, suddenly it is 5G to 8G, over and over.

The bug is fixed in recent Java 7 releases. I'm sorry, but I cannot find the 
bug number. 

- Original Message -
| From: Yonik Seeley yo...@lucidworks.com
| To: solr-user@lucene.apache.org
| Sent: Tuesday, September 18, 2012 7:38:41 AM
| Subject: Re: SOLR memory usage jump in JVM
| 
| On Tue, Sep 18, 2012 at 7:45 AM, Bernd Fehling
| bernd.fehl...@uni-bielefeld.de wrote:
|  I used GC in different situations and tried back and forth.
|  Yes, it reduces the used heap memory, but not by 5GB.
|  Even so that GC from jconsole (or jvisualvm) is Full GC.
| 
| Whatever Full GC means ;-)
| In the past at least, I've found that I had to hit Full GC from
| jconsole many times in a row until heap usage stabilizes at it's
| lowest point.
| 
| You could check fieldCache and fieldValueCache to see how many
| entries
| there are before and after the memory bump.
| If that doesn't show anything different, I guess you may need to
| resort to a heap dump before and after.
| 
|  But while you bring GC into this, there is another interesting
|  thing.
|  - I have one slave running for a week which ends up around 18 to
|  20GB of heap memory.
|  - the slave goes offline for replication (no user queries on this
|  slave)
|  - the slave gets replicated and starts a new searcher
|  - the heap memory of the slave is still around 11 to 12GB
|  - then I initiate a Full GC from jconsole which brings it down to
|  about 8GB
|  - then I call optimize (on a optimized index) and it then drops to
|  6.5GB like a fresh started system
| 
| 
|  I have already looked through Uwe's blog but he says ...As a rule
|  of thumb: Don’t use more
|  than 1/4 of your physical memory as heap space for Java running
|  Lucene/Solr,...
|  That would be on my server 8GB for JVM heap, can't believe that the
|  system
|  will run for longer than 10 minutes with 8GB heap.
| 
| As you probably know, it depends hugely on the usecases/queries: some
| configurations would be fine with a small amount of heap, other
| configurations that facet and sort on tons of different fields would
| not be.
| 
| 
| -Yonik
| http://lucidworks.com
| 


Re: SOLR memory usage jump in JVM

2012-09-18 Thread Otis Gospodnetic
Hi Bernd,

On Tue, Sep 18, 2012 at 3:09 AM, Bernd Fehling
bernd.fehl...@uni-bielefeld.de wrote:
 Hi Otis,

 not really a problem because I have plenty of memory ;-)
 -Xmx25g -Xms25g -Xmn6g

Good.

 I'm just interested into this.
 Can you report similar jumps within JVM with your monitoring at sematext?

Yes. More importantly, SPM will show you a bunch of other Solr and
system metrics, so you can correlate them with your JVM heap jumps.  For
example, you may see the number of index files change at that time.  Or a
higher request rate.  Or cache size growth.  Or ...

 Actually I would assume to see jumps of 0.5GB or even 1GB, but 5GB?
 And what is the cause, a cache?

Might be.  Please see above.  Of course, you could also try running a
profiler and analyzing the heap dump, though you may need a lot of
RAM on your workstation to do that. :)

 And is there another option in JVM to give memory jumps a size?

Doesn't -verbose:gc show heap jumps?
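
(A minimal GC-logging setup that would make such jumps visible, as a sketch with HotSpot 6/7 flag names and a placeholder log path:

   -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/path/to/gc.log

The resulting log shows heap occupancy before and after each collection.)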

Otis
Search Analytics - http://sematext.com/search-analytics/index.html
Performance Monitoring - http://sematext.com/spm/index.html



 Am 18.09.2012 08:58, schrieb Otis Gospodnetic:
 Hi Bernd,

 But is this really (causing) a problem?  What -Xmx are you using?

 Otis
 Search Analytics - http://sematext.com/search-analytics/index.html
 Performance Monitoring - http://sematext.com/spm/index.html


 On Tue, Sep 18, 2012 at 2:50 AM, Bernd Fehling
 bernd.fehl...@uni-bielefeld.de wrote:
 Hi list,

 while monitoring my systems I see a jump in memory consumption in JVM
 after 2 to 5 days of running of about 5GB.

 After starting the system (search node only, no replication during search)
 SOLR uses between 6.5GB to 10.3GB of JVM when idle.
 If the search node is online and serves requests it uses between 7GB to 
 11.3GB.
 But after 2 to 5 days of running I see a jump in JVM with memory consumption
 of about 5GB. The JVM uses then between 13GB and 18GB.

 Anyone else seen this also?

 I analyzed the logs but no exceptions, no special queries, no long QTime.
 Also the GC log has nothing unusual at the first sight.

 Why is the JVM doing a jump of 5GB, which part of SOLR can cause such a 
 jump in JVM?

 I would accept a slowly growing of memory consumption, but a jump? of about 
 5GB?

 Regards
 Bernd