Re: Solr 7.7.0 - Garbage Collection issue

2019-02-12 Thread Joe Obernberger
Reverted back to 7.6.0 - same settings, but now I do not encounter the large CPU usage. -Joe On 2/12/2019 12:37 PM, Joe Obernberger wrote: Thank you Shawn.  Yes, I used the settings off of your site. I've restarted the cluster and the CPU usage is back up again. Looking at it now, it doesn't

Re: Solr 7.7.0 - Garbage Collection issue

2019-02-12 Thread Joe Obernberger
Thank you Shawn.  Yes, I used the settings off of your site. I've restarted the cluster and the CPU usage is back up again. Looking at it now, it doesn't appear to be GC related. Full log from one of the nodes that is pegging 13 CPU cores: http://lovehorsepower.com/solr_gc.log.0.current Thank

Re: Solr 7.7.0 - Garbage Collection issue

2019-02-12 Thread Shawn Heisey
On 2/12/2019 7:35 AM, Joe Obernberger wrote: Yesterday, we upgraded our 40 node cluster from solr 7.6.0 to solr 7.7.0.  This morning, all the nodes are using 1200+% of CPU. It looks like it's in garbage collection.  We did reduce our HDFS cache size from 11G to 6G, but other than that, no

Solr 7.7.0 - Garbage Collection issue

2019-02-12 Thread Joe Obernberger
Yesterday, we upgraded our 40 node cluster from Solr 7.6.0 to Solr 7.7.0. This morning, all the nodes are using 1200+% of CPU. It looks like it's in garbage collection. We did reduce our HDFS cache size from 11G to 6G, but other than that, no other parameters were changed. Top shows: top -

RE: Solr and Garbage Collection

2009-10-06 Thread Fuad Efendi
I read pretty much all posts on this thread (before and after this one). Looks like the main suggestion from you and others is to keep max heap size (-Xmx) as small as possible (as long as you don't see an OOM exception). I suggested the absolute opposite; please note also that as small as possible

RE: Solr and Garbage Collection

2009-10-06 Thread Fuad Efendi
Master-Slave replica: new caches will be warmed/prepopulated _before_ making the new IndexReader available for _new_ requests and _before_ discarding the old one - it means that theoretical sizing for FieldCache (which is defined by number of docs in an index and cardinality of a field) should be
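As a back-of-the-envelope sketch of that sizing rule (my own arithmetic, not from the thread — the class name and the term counts below are hypothetical): a String FieldCache entry holds roughly one 4-byte ord per document plus the unique term values themselves.

```java
public class FieldCacheEstimate {
    /** Rough lower bound in bytes: one 4-byte ord per doc plus the unique terms. */
    static long roughBytes(long numDocs, long uniqueTerms, long avgTermBytes) {
        return numDocs * 4L + uniqueTerms * avgTermBytes;
    }

    public static void main(String[] args) {
        // 8M docs (the index size mentioned later in these threads),
        // with a hypothetical 1M unique terms averaging ~16 bytes each
        System.out.println(roughBytes(8_000_000L, 1_000_000L, 16L));
    }
}
```

Even a single such field runs to tens of megabytes on an 8M-document index, and each additional sorted or faceted field adds its own entry — which is the docs-times-cardinality point being made here.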

Re: Solr and Garbage Collection

2009-10-03 Thread Mark Miller
: RE: Solr and Garbage Collection Date: Fri, 25 Sep 2009 09:51:29 -0700 30ms is not better or worse than 1s until you look at the service requirements. For many applications, it is worth dedicating 10% of your processing time to GC if that makes the worst-case pause short. On the other hand, my

Re: Solr and Garbage Collection

2009-10-03 Thread Bill Au
tricky to adjust the java options. thanks. From: wun...@wunderwood.org To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage Collection Date: Fri, 25 Sep 2009 09:51:29 -0700 30ms is not better or worse than 1s until you look at the service requirements. For many

Re: Solr and Garbage Collection

2009-10-03 Thread Mark Miller
. From: wun...@wunderwood.org To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage Collection Date: Fri, 25 Sep 2009 09:51:29 -0700 30ms is not better or worse than 1s until you look at the service requirements. For many applications, it is worth dedicating 10% of your processing time

Re: Solr and Garbage Collection

2009-10-03 Thread Bill Au
@lucene.apache.org Subject: RE: Solr and Garbage Collection Date: Fri, 25 Sep 2009 09:51:29 -0700 30ms is not better or worse than 1s until you look at the service requirements. For many applications, it is worth dedicating 10% of your processing time to GC if that makes the worst-case pause short

RE: Solr and Garbage Collection

2009-10-02 Thread siping liu
not SoftReference)?? * Right now I have a single Tomcat hosting Solr and other applications. I guess now it's better to have Solr on its own Tomcat, given that it's tricky to adjust the java options. thanks. From: wun...@wunderwood.org To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage

Re: Solr and Garbage Collection

2009-10-02 Thread Mark Miller
Tomcat, given that it's tricky to adjust the java options. thanks. From: wun...@wunderwood.org To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage Collection Date: Fri, 25 Sep 2009 09:51:29 -0700 30ms is not better or worse than 1s until you look at the service

Re: Solr and Garbage Collection

2009-09-28 Thread Jonathan Ariel
Ok... good news! Upgrading to the newest version of JVM 6 (update 16) seems to solve this ugly bug. With the upgraded JVM I could run the Solr servers for more than 12 hours on the production environment with the GC mentioned in the previous e-mails. The results are really amazing. The time spent

Re: Solr and Garbage Collection

2009-09-28 Thread Mark Miller
Do you have your GC logs? Are you still seeing major collections? Where is the time spent? Hard to say without some of that info. The goal of the low pause collector is to finish collecting before the tenured space is filled - if it doesn't, a standard major collection occurs. The collector

Re: Solr and Garbage Collection

2009-09-28 Thread Jonathan Ariel
How do you track major collections? Even better, how do you log your GC behavior with details? Right now I just log total time spent on collections, but I don't really know on which collections. Regarding application performance with the ConcMarkSweepGC, I think I didn't experience any impact for now.

Re: Solr and Garbage Collection

2009-09-28 Thread Otis Gospodnetic
: Jonathan Ariel ionat...@gmail.com To: solr-user@lucene.apache.org Sent: Monday, September 28, 2009 4:49:03 PM Subject: Re: Solr and Garbage Collection How do you track major collections? Even better, how do you log your GC behavior with details? Right now I just log total time spent

Re: Solr and Garbage Collection

2009-09-28 Thread Mark Miller
-verbose:gc gives output like: [GC 325407K->83000K(776768K), 0.2300771 secs] [GC 325816K->83372K(776768K), 0.2454258 secs] [Full GC 267628K->83769K(776768K), 1.8479984 secs] Additional details with -XX:+PrintGCDetails: [GC [DefNew: 64575K->959K(64576K), 0.0457646 secs] 196016K->133633K(261184K), 0.0459067
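For reference, the pre-Java 9 HotSpot logging options being discussed here are typically combined like this (the log path is a placeholder; in Java 9+ these flags were replaced by unified logging via -Xlog:gc):

```
-verbose:gc
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-Xloggc:/var/log/solr/gc.log
```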

Re: Solr and Garbage Collection

2009-09-28 Thread Mark Miller
Subject: Re: Solr and Garbage Collection How do you track major collections? Even better, how do you log your GC behavior with details? Right now I just log total time spent on collections, but I don't really know on which collections.Regard application performance with the ConcMarkSweepGC, I think

Re: Solr and Garbage Collection

2009-09-28 Thread Bill Au
One way to track expensive collections is to look at the query time, QTime, in the Solr log. There are a couple of tools for analyzing GC logs: http://www.tagtraum.com/gcviewer.html https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPJMETER They will give you frequency and
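The totals those tools report can also be roughed out by hand from plain -verbose:gc output. A small sketch (the class name and regex are mine, not from the thread; note it naively sums every duration on a line, so it over-counts nested -XX:+PrintGCDetails entries):

```java
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GcLogPauses {
    // Matches the pause duration in lines like
    // "[GC 325407K->83000K(776768K), 0.2300771 secs]"
    private static final Pattern PAUSE =
        Pattern.compile("([0-9]+\\.[0-9]+) secs");

    /** Sum of all GC pause durations (in seconds) found in the given log lines. */
    static double totalPauseSeconds(List<String> lines) {
        double total = 0.0;
        for (String line : lines) {
            Matcher m = PAUSE.matcher(line);
            while (m.find()) {
                total += Double.parseDouble(m.group(1));
            }
        }
        return total;
    }

    public static void main(String[] args) {
        List<String> sample = List.of(
            "[GC 325407K->83000K(776768K), 0.2300771 secs]",
            "[GC 325816K->83372K(776768K), 0.2454258 secs]",
            "[Full GC 267628K->83769K(776768K), 1.8479984 secs]");
        System.out.printf("total pause: %.3f s%n", totalPauseSeconds(sample));
    }
}
```

Dividing that total by wall-clock elapsed time gives the "time spent on GC" percentage figures quoted throughout these threads.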

Re: Solr and Garbage Collection

2009-09-27 Thread Jonathan Ariel
Yes, it seems like a bug. I will update my JVM, try again and let you know the results :) On 9/26/09, Mark Miller markrmil...@gmail.com wrote: Jonathan Ariel wrote: Ok. After the server ran for more than 12 hours, the time spent on GC decreased from 11% to 3.4%, but 5 hours later it crashed.

Re: Solr and Garbage Collection

2009-09-27 Thread Jonathan Ariel
Message- From: Mark Miller [mailto:markrmil...@gmail.com] Sent: September-27-09 2:46 PM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection If he needed double the RAM, he'd likely know by now :) The JVM likes to throw OOM exceptions when you need more RAM

Re: Solr and Garbage Collection

2009-09-27 Thread Jonathan Ariel
warming up? -Fuad -Original Message- From: Mark Miller [mailto:markrmil...@gmail.com] Sent: September-27-09 2:46 PM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection If he needed double the RAM, he'd likely know by now :) The JVM likes

Re: Solr and Garbage Collection

2009-09-27 Thread Bill Au
You are running a very old version of Java 6 (update 6). The latest is update 16. You should definitely upgrade. There is a bug in Java 6 starting with update 4 that may result in a corrupted Lucene/Solr index: http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6707044

Re: Solr and Garbage Collection

2009-09-26 Thread Mark Miller
Jonathan Ariel wrote: I have around 8M documents. That's actually not so bad - I take it you are faceting/sorting on quite a few unique fields? I set up my server to use a different collector and it seems like it decreased from 11% to 4%, of course I need to wait a bit more because it is

Re: Solr and Garbage Collection

2009-09-26 Thread Jonathan Ariel
Ok. After the server ran for more than 12 hours, the time spent on GC decreased from 11% to 3.4%, but 5 hours later it crashed. This is the thread dump, maybe you can help identify what happened? # # An unexpected error has been detected by Java Runtime Environment: # # SIGSEGV (0xb) at

Re: Solr and Garbage Collection

2009-09-26 Thread Mark Miller
Jonathan Ariel wrote: Ok. After the server ran for more than 12 hours, the time spent on GC decreased from 11% to 3.4%, but 5 hours later it crashed. This is the thread dump, maybe you can help identify what happened? Well that's a tough ;) My guess is it's a bug :) Your two survivor spaces

Re: Solr and Garbage Collection

2009-09-26 Thread Mark Miller
Also, in case the info might help track something down: It's pretty darn odd that both your survivor spaces are full. I've never seen that ever in one of these dumps. Always one is empty. When one is filled, it's moved to the other. Then back. And forth. For a certain number of times until its

RE: Solr and Garbage Collection

2009-09-25 Thread cbennett
should help with the long pauses. Colin. -Original Message- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: Friday, September 25, 2009 11:37 AM To: solr-user@lucene.apache.org; yo...@lucidimagination.com Subject: Re: Solr and Garbage Collection Right, now I'm giving it 12GB of heap

RE: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
: September-25-09 11:37 AM To: solr-user@lucene.apache.org; yo...@lucidimagination.com Subject: Re: Solr and Garbage Collection Right, now I'm giving it 12GB of heap memory. If I give it less (10GB) it throws the following exception: Sep 5, 2009 7:18:32 PM org.apache.solr.common.SolrException log

RE: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
You are saying that I should give more memory than 12GB? Yes. Look at this: SEVERE: java.lang.OutOfMemoryError: Java heap space org.apache.lucene.search.FieldCacheImpl$10.createValue(FieldCacheImpl.java:361) It can't find a few (!!!) contiguous bytes for .createValue(...) It can't

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
It won't really - it will just keep the JVM from wasting time resizing the heap on you. Since you know you need so much RAM anyway, no reason not to just pin it at what you need. Not going to help you much with GC though. Jonathan Ariel wrote: BTW why making them equal will lower the frequency

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
-server option of JVM is 'native CPU code', I remember WebLogic 7 console with SUN JVM 1.3 not showing any GC (just horizontal line). Not sure what that is all about either. -server and -client are just two different versions of hotspot. The -server version is optimized for long running

RE: Solr and Garbage Collection

2009-09-25 Thread Walter Underwood
are the only way to get accurate cache eviction rates. wunder -Original Message- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: Friday, September 25, 2009 9:34 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection BTW why making them equal will lower

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
to get accurate cache eviction rates. wunder -Original Message- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: Friday, September 25, 2009 9:34 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection BTW why making them equal will lower the frequency

RE: Solr and Garbage Collection

2009-09-25 Thread Walter Underwood
/tuning_the_ibm_jvm_for_large_h.html wunder -Original Message- From: Mark Miller [mailto:markrmil...@gmail.com] Sent: Friday, September 25, 2009 10:03 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection Walter Underwood wrote: 30ms is not better or worse than 1s until you look

Re: Solr and Garbage Collection

2009-09-25 Thread Jonathan Ariel
, 2009 10:03 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection Walter Underwood wrote: 30ms is not better or worse than 1s until you look at the service requirements. For many applications, it is worth dedicating 10% of your processing time to GC if that makes

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
Miller [mailto:markrmil...@gmail.com] Sent: Friday, September 25, 2009 10:03 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection Walter Underwood wrote: 30ms is not better or worse than 1s until you look at the service requirements. For many applications

RE: Solr and Garbage Collection

2009-09-25 Thread Walter Underwood
, September 25, 2009 10:31 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection My bad - later, it looks as if you're giving general advice, and that's what I took issue with. Any Collector that is not doing generational collection is essentially from the dark ages and shouldn't be used

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
not too impressed by its GC. wunder -Original Message- From: Mark Miller [mailto:markrmil...@gmail.com] Sent: Friday, September 25, 2009 10:31 AM To: solr-user@lucene.apache.org Subject: Re: Solr and Garbage Collection My bad - later, it looks as if you're giving general advice

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
enough: in case if it is heavily sparse... so that have even more RAM! -Original Message- From: Fuad Efendi [mailto:f...@efendi.ca] Sent: September-25-09 12:17 PM To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage Collection You are saying that I should give more

FW: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
is more than enough: in case if it is heavily sparse... so that have even more RAM! -Original Message- From: Fuad Efendi [mailto:f...@efendi.ca] Sent: September-25-09 12:17 PM To: solr-user@lucene.apache.org Subject: RE: Solr and Garbage Collection You are saying that I should give more

RE: FW: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
Mark, what if a piece of code needs 10 contiguous Kb to load a document field? How are locked memory pieces optimized/moved (putting almost the whole application on hold)? Lowering the heap is a _bad_ idea; we will have extremely frequent GC (optimize of live objects!!!) even if RAM is (theoretically)

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Jonathan Ariel
I'm not planning on lowering the heap. I just want to lower the time wasted on GC, which is 11% right now. So what I'll try is changing the GC to -XX:+UseConcMarkSweepGC On Fri, Sep 25, 2009 at 4:17 PM, Fuad Efendi f...@efendi.ca wrote: Mark, what if piece of code needs 10 contiguous Kb to
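Pulling the thread's pieces together, the switch described here amounts to something like the following JVM command line (a sketch: the 12 GB figure and -server come from earlier messages in the thread, the pinned -Xms follows Mark's resizing advice, and the start.jar invocation and log path are illustrative for a Jetty-based Solr):

```shell
java -server \
  -Xms12g -Xmx12g \
  -XX:+UseConcMarkSweepGC \
  -verbose:gc -Xloggc:gc.log \
  -jar start.jar
```

Pinning -Xms to -Xmx avoids heap-resizing churn, though as Mark notes elsewhere it does not by itself reduce GC time.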

RE: FW: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
% is extremely high. -Fuad http://www.linkedin.com/in/liferay -Original Message- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: September-25-09 3:36 PM To: solr-user@lucene.apache.org Subject: Re: FW: Solr and Garbage Collection I'm not planning on lowering the heap. I

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Yonik Seeley
On Fri, Sep 25, 2009 at 2:52 PM, Fuad Efendi f...@efendi.ca wrote: Lowering heap helps GC? Yes. In general, lowering the heap can help or hurt. Hurt: if one is running very low on memory, GC will be working harder all of the time trying to find more memory and the % of time that GC takes can

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Jonathan Ariel
- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: September-25-09 3:36 PM To: solr-user@lucene.apache.org Subject: Re: FW: Solr and Garbage Collection I'm not planning on lowering the heap. I just want to lower the time wasted on GC, which is 11% right now. So what I'll try

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
:36 PM To: solr-user@lucene.apache.org Subject: Re: FW: Solr and Garbage Collection I'm not planning on lowering the heap. I just want to lower the time wasted on GC, which is 11% right now. So what I'll try is changing the GC to -XX:+UseConcMarkSweepGC On Fri, Sep 25, 2009 at 4

Re: Solr and Garbage Collection

2009-09-25 Thread Jonathan Ariel
... 11% is extremely high. -Fuad http://www.linkedin.com/in/liferay -Original Message- From: Jonathan Ariel [mailto:ionat...@gmail.com] Sent: September-25-09 3:36 PM To: solr-user@lucene.apache.org Subject: Re: FW: Solr and Garbage Collection I'm not planning on lowering

Re: FW: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
@lucene.apache.org Subject: Re: FW: Solr and Garbage Collection I'm not planning on lowering the heap. I just want to lower the time wasted on GC, which is 11% right now. So what I'll try is changing the GC to -XX:+UseConcMarkSweepGC On Fri, Sep 25, 2009 at 4:17 PM, Fuad

Re: Solr and Garbage Collection

2009-09-25 Thread Grant Ingersoll
On Sep 25, 2009, at 9:30 AM, Jonathan Ariel wrote: Hi to all! Lately my solr servers seem to stop responding once in a while. I'm using solr 1.3. Of course I'm having more traffic on the servers. So I logged the Garbage Collection activity to check if it's because of that. It seems like

RE: FW: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
Usually, fragmentation is dealt with using a mark-compact collector (or IBM has used a mark-sweep-compact collector). Copying collectors are not only super efficient at collecting young spaces, but they are also great for fragmentation - when you copy everything to the new space, you can

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
Jonathan Ariel wrote: How can I check which GC is being used? If I'm right JVM Ergonomics should use the Throughput GC, but I'm not 100% sure. Do you have any recommendation on this? Just to straighten out this one too - Ergonomics doesn't use throughput - throughput is the

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
Mark Miller wrote: Jonathan Ariel wrote: How can I check which GC is being used? If I'm right JVM Ergonomics should use the Throughput GC, but I'm not 100% sure. Do you have any recommendation on this? Just to straighten out this one too - Ergonomics doesn't use

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
That's a good point too - if you can reduce your need for such a large heap, by all means, do so. However, considering you already need at least 10GB or you get OOM, you have a long way to go with that approach. Good luck :) How many docs do you have? I'm guessing it's mostly FieldCache type

Re: Solr and Garbage Collection

2009-09-25 Thread Mark Miller
One more point and I'll stop - I've hit my email quota for the day ;) While it's a pain to have to juggle GC params and tune - when you require a heap that's more than a gig or two, I personally believe it's essential to do so for good performance. The (default settings / ergonomics with throughput)

Re: Solr and Garbage Collection

2009-09-25 Thread Jonathan Ariel
I have around 8M documents. I set up my server to use a different collector and it seems like it decreased from 11% to 4%, of course I need to wait a bit more because it is just a 1 hour old log. But it seems like it is much better now. I will tell you on Monday the results :) On Fri, Sep 25,

RE: Solr and Garbage Collection

2009-09-25 Thread Fuad Efendi
Sorry for OFF-topic: Create dummy Hello, World! JSP, use Tomcat, execute load-stress simulator(s) from separate machine(s), and measure... don't forget to allocate necessary thread pools in Tomcat (if you have to)... Although such JSP doesn't use any memory, you will see how easy one can go with