Reverted to 7.6.0 - same settings, but now I do not see the high CPU usage.
-Joe
On 2/12/2019 12:37 PM, Joe Obernberger wrote:
Thank you Shawn. Yes, I used the settings off of your site. I've
restarted the cluster and the CPU usage is back up again. Looking at it
now, it doesn't appear to be GC related.
Full log from one of the nodes that is pegging 13 CPU cores:
http://lovehorsepower.com/solr_gc.log.0.current
On 2/12/2019 7:35 AM, Joe Obernberger wrote:
Yesterday, we upgraded our 40 node cluster from solr 7.6.0 to solr
7.7.0. This morning, all the nodes are using 1200+% of CPU. It looks
like it's in garbage collection. We did reduce our HDFS cache size from
11G to 6G, but other than that, no other parameters were changed.
Top shows:
top -
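For anyone else chasing this kind of symptom: one way to confirm whether the spinning cores really are GC threads is to find the hottest thread IDs with `top -H`, convert them to hex, and look them up in a `jstack` dump. A minimal sketch (the PID and thread ID below are made up for illustration; the `top`/`jstack` invocations are shown commented since they need a live JVM):

```shell
SOLR_PID=12345   # hypothetical PID of the Solr JVM
HOT_TID=27345    # hypothetical hot thread id reported by top -H

# List the busiest threads of the JVM, then take the TID of the top consumer:
# top -H -b -n 1 -p "$SOLR_PID" | head -20

# jstack prints native thread ids in hex as nid=0x...; convert the TID:
printf 'look for nid=0x%x in the jstack output\n' "$HOT_TID"

# Entries named "GC task thread" or "Gang worker" at that nid confirm
# the CPU is going to garbage collection:
# jstack "$SOLR_PID" | grep -A 2 "nid=$(printf '0x%x' "$HOT_TID")"
```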
I read pretty much all posts on this thread (before and after this one).
Looks
like the main suggestion from you and others is to keep max heap size
(-Xmx)
as small as possible (as long as you don't see OOM exceptions).
I suggested the absolute opposite; please note also that as small as possible
Master-Slave replica: new caches will be warmed/prepopulated _before_ making
the new IndexReader available for _new_ requests and _before_ discarding the old
one - it means that theoretical sizing for FieldCache (which is defined by the
number of docs in an index and the cardinality of a field) should be
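To put rough numbers on that sizing rule: for a single-valued field, the FieldCache keeps roughly one entry per document, so the ordinal arrays alone cost about numDocs x 4 bytes per sorted/faceted field, before counting the unique term bytes. A back-of-envelope sketch (the 8M doc count comes from later in this thread; the field count and per-entry cost are illustrative assumptions, not exact accounting):

```shell
NUM_DOCS=8000000          # docs in the index (figure from this thread)
BYTES_PER_ORD=4           # approx. one int ordinal per doc per field
NUM_FIELDS=10             # hypothetical number of fields sorted/faceted on

# Integer MB estimate for the ordinal arrays alone
ORD_MB=$(( NUM_DOCS * BYTES_PER_ORD * NUM_FIELDS / 1024 / 1024 ))
echo "~${ORD_MB} MB for ordinals alone, before counting term bytes"
```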
From: wun...@wunderwood.org
To: solr-user@lucene.apache.org
Subject: RE: Solr and Garbage Collection
Date: Fri, 25 Sep 2009 09:51:29 -0700
30ms is not better or worse than 1s until you look at the service
requirements. For many applications, it is worth dedicating 10% of your
processing time to GC if that makes the worst-case pause short.
On the other hand, my
not SoftReference)??
* Right now I have a single Tomcat hosting Solr and other applications. I guess
now it's better to have Solr on its own Tomcat, given that it's tricky to
adjust the java options.
thanks.
Ok... good news! Upgrading to the newest version of JVM 6 (update 16) seems
to solve this ugly bug. With the upgraded JVM I could run the solr servers
for more than 12 hours on the production environment with the GC mentioned
in the previous e-mails. The results are really amazing. The time spent
Do you have your GC logs? Are you still seeing major collections?
Where is the time spent?
Hard to say without some of that info.
The goal of the low pause collector is to finish collecting before the
tenured space is filled - if it doesn't, a standard major collection occurs.
The collector
How do you track major collections? Even better, how do you log your GC
behavior with details? Right now I just log total time spent on collections,
but I don't really know on which collections. Regarding application performance
with the ConcMarkSweepGC, I think I didn't experience any impact for now.
From: Jonathan Ariel ionat...@gmail.com
To: solr-user@lucene.apache.org
Sent: Monday, September 28, 2009 4:49:03 PM
Subject: Re: Solr and Garbage Collection
How do you track major collections? Even better, how do you log your GC
behavior with details? Right now I just log total time spent
-verbose:gc
[GC 325407K->83000K(776768K), 0.2300771 secs]
[GC 325816K->83372K(776768K), 0.2454258 secs]
[Full GC 267628K->83769K(776768K), 1.8479984 secs]
Additional details with: -XX:+PrintGCDetails
[GC [DefNew: 64575K->959K(64576K), 0.0457646 secs] 196016K->133633K(261184K),
0.0459067
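For a JVM of the era discussed in this part of the thread, a fuller logging setup than bare `-verbose:gc` would look something like the following. These are the classic pre-Java-9 flag names; the log path is just an example, not something prescribed here:

```shell
# Classic (Java 8 and earlier) GC log options for a Solr JVM
JAVA_OPTS="$JAVA_OPTS \
  -verbose:gc \
  -XX:+PrintGCDetails \
  -XX:+PrintGCTimeStamps \
  -XX:+PrintGCApplicationStoppedTime \
  -Xloggc:/var/log/solr/gc.log"
```

`-XX:+PrintGCApplicationStoppedTime` is the one that directly answers "how long were requests paused", which is the metric this thread keeps coming back to.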
One way to track expensive collections is to look at the query time, QTime, in
the solr log.
There are a couple of tools for analyzing gc logs:
http://www.tagtraum.com/gcviewer.html
https://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=HPJMETER
They will give you frequency and
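If installing a viewer is overkill, the frequency and total duration of full collections can also be pulled straight out of a `-verbose:gc` log with awk. A small self-contained sketch (the sample lines follow the standard `-verbose:gc` format shown earlier in this thread; the path is arbitrary):

```shell
# Create a small sample log in the -verbose:gc format
cat > /tmp/sample_gc.log <<'EOF'
[GC 325407K->83000K(776768K), 0.2300771 secs]
[GC 325816K->83372K(776768K), 0.2454258 secs]
[Full GC 267628K->83769K(776768K), 1.8479984 secs]
EOF

# Count full collections and sum their pause time (seconds field is
# second-to-last on each line)
awk '/Full GC/ { n++; sum += $(NF-1) }
     END { printf "%d full GCs, %.2f s total pause\n", n, sum }' /tmp/sample_gc.log
```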
Yes, it seems like a bug. I will update my JVM, try again and let you
know the results :)
On 9/26/09, Mark Miller markrmil...@gmail.com wrote:
Jonathan Ariel wrote:
Ok. After the server ran for more than 12 hours, the time spent on GC
decreased from 11% to 3.4%, but 5 hours later it crashed.
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: September-27-09 2:46 PM
To: solr-user@lucene.apache.org
Subject: Re: Solr and Garbage Collection
If he needed double the RAM, he'd likely know by now :) The JVM likes to
throw OOM exceptions when you need more RAM
warming up?
-Fuad
You are running a very old version of Java 6 (update 6). The latest is
update 16. You should definitely upgrade. There is a bug in Java 6
starting with update 4 that may result in a corrupted Lucene/Solr index:
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6707044
Jonathan Ariel wrote:
I have around 8M documents.
That's actually not so bad - I take it you are faceting/sorting on quite
a few unique fields?
I set up my server to use a different collector and it seems like it
decreased from 11% to 4%, of course I need to wait a bit more because it is
Ok. After the server ran for more than 12 hours, the time spent on GC
decreased from 11% to 3.4%, but 5 hours later it crashed. This is the thread
dump, maybe you can help identify what happened?
#
# An unexpected error has been detected by Java Runtime Environment:
#
# SIGSEGV (0xb) at
Jonathan Ariel wrote:
Ok. After the server ran for more than 12 hours, the time spent on GC
decreased from 11% to 3.4%, but 5 hours later it crashed. This is the thread
dump, maybe you can help identify what happened?
Well that's a tough one ;) My guess is it's a bug :)
Your two survivor spaces
Also, in case the info might help track something down:
It's pretty darn odd that both your survivor spaces are full. I've never
seen that ever in one of these dumps. Always one is empty. When one is
filled, it's moved to the other. Then back. And forth. For a certain
number of times until it's
should help with the long
pauses.
Colin.
-Original Message-
From: Jonathan Ariel [mailto:ionat...@gmail.com]
Sent: Friday, September 25, 2009 11:37 AM
To: solr-user@lucene.apache.org; yo...@lucidimagination.com
Subject: Re: Solr and Garbage Collection
Right, now I'm giving it 12GB of heap memory.
If I give it less (10GB) it throws the following exception:
Sep 5, 2009 7:18:32 PM org.apache.solr.common.SolrException log
You are saying that I should give more memory than 12GB?
Yes. Look at this:
SEVERE: java.lang.OutOfMemoryError: Java heap space
org.apache.lucene.search.FieldCacheImpl$10.createValue(FieldCacheImpl.java:361)
It can't find a few (!!!) contiguous bytes for .createValue(...)
It won't really - it will just keep the JVM from wasting time resizing
the heap on you. Since you know you need so much RAM anyway, no reason
not to just pin it at what you need.
Not going to help you much with GC though.
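Pinning the heap as described just means setting the minimum and maximum to the same value, e.g. (the 12g matches the heap size discussed in this thread; the rest of the command line is illustrative, not the poster's actual startup script):

```shell
# Same value for -Xms and -Xmx: the JVM never spends time
# growing or shrinking the heap at runtime
java -Xms12g -Xmx12g -jar start.jar
```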
Jonathan Ariel wrote:
BTW why making them equal will lower the frequency
-server option of JVM is 'native CPU code', I remember WebLogic 7 console
with SUN JVM 1.3 not showing any GC (just horizontal line).
Not sure what that is all about either. -server and -client are just two
different versions of hotspot.
The -server version is optimized for long running
are the only way to get accurate cache eviction rates.
wunder
-Original Message-
From: Jonathan Ariel [mailto:ionat...@gmail.com]
Sent: Friday, September 25, 2009 9:34 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr and Garbage Collection
BTW why making them equal will lower the frequency
/tuning_the_ibm_jvm_for_large_h.html
wunder
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Friday, September 25, 2009 10:03 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr and Garbage Collection
Walter Underwood wrote:
30ms is not better or worse than 1s until you look
-Original Message-
From: Mark Miller [mailto:markrmil...@gmail.com]
Sent: Friday, September 25, 2009 10:31 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr and Garbage Collection
My bad - later, it looks as if you're giving general advice, and that's
what I took issue with.
Any Collector that is not doing generational collection is essentially
from the dark ages and shouldn't be used
not too impressed by its GC.
wunder
is more than enough: in case it is heavily sparse... so have even more RAM!
-Original Message-
From: Fuad Efendi [mailto:f...@efendi.ca]
Sent: September-25-09 12:17 PM
To: solr-user@lucene.apache.org
Subject: RE: Solr and Garbage Collection
You are saying that I should give more
Mark,
what if a piece of code needs 10 contiguous KB to load a document field? How
are locked memory pieces optimized/moved (putting almost the whole
application on hold)?
Lowering the heap is a _bad_ idea; we will have extremely frequent GC
(compaction of live objects!!!) even if RAM is (theoretically)
I'm not planning on lowering the heap. I just want to lower the time
wasted on GC, which is 11% right now. So what I'll try is changing the GC
to -XX:+UseConcMarkSweepGC
On Fri, Sep 25, 2009 at 4:17 PM, Fuad Efendi f...@efendi.ca wrote:
Mark,
what if piece of code needs 10 contiguous Kb to
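For reference, the switch Jonathan describes is a single flag; on JVMs of that vintage it was often paired with a couple of companions (the extra flags below are common companions on pre-Java-9 HotSpot, not something prescribed in this thread):

```shell
# CMS (concurrent mark-sweep) collector, as discussed above
JAVA_OPTS="$JAVA_OPTS \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC \
  -XX:CMSInitiatingOccupancyFraction=75"
```

`-XX:+UseParNewGC` keeps young-generation collections parallel alongside CMS; the occupancy fraction makes CMS start its concurrent cycle early enough to avoid falling back to a stop-the-world full collection.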
11% is extremely high.
-Fuad
http://www.linkedin.com/in/liferay
-Original Message-
From: Jonathan Ariel [mailto:ionat...@gmail.com]
Sent: September-25-09 3:36 PM
To: solr-user@lucene.apache.org
Subject: Re: FW: Solr and Garbage Collection
I'm not planning on lowering the heap. I just want to lower the time
wasted on GC, which is 11% right now. So what I'll try is changing the GC
to -XX:+UseConcMarkSweepGC
On Fri, Sep 25, 2009 at 2:52 PM, Fuad Efendi f...@efendi.ca wrote:
Lowering heap helps GC?
Yes. In general, lowering the heap can help or hurt.
Hurt: if one is running very low on memory, GC will be working harder
all of the time trying to find more memory and the % of time that GC
takes can
On Sep 25, 2009, at 9:30 AM, Jonathan Ariel wrote:
Hi to all!
Lately my solr servers seem to stop responding once in a while. I'm using
solr 1.3.
Of course I'm having more traffic on the servers.
So I logged the Garbage Collection activity to check if it's because of
that. It seems like
Usually, fragmentation is dealt with using a mark-compact collector (or
IBM has used a mark-sweep-compact collector).
Copying collectors are not only super efficient at collecting young
spaces, but they are also great for fragmentation - when you copy
everything to the new space, you can
Jonathan Ariel wrote:
How can I check which GC is being used? If I'm right, JVM Ergonomics
should use the Throughput GC, but I'm not 100% sure. Do you have any
recommendation on this?
Just to straighten out this one too - Ergonomics doesn't use throughput
- throughput is the
That's a good point too - if you can reduce your need for such a large
heap, by all means, do so.
However, considering you already need at least 10GB or you get OOM, you
have a long way to go with that approach. Good luck :)
How many docs do you have? I'm guessing it's mostly FieldCache type
One more point and I'll stop - I've hit my email quota for the day ;)
While it's a pain to have to juggle GC params and tune - when you require
a heap that's more than a gig or two, I personally believe it's essential
to do so for good performance. The (default settings / ergonomics with
throughput)
I have around 8M documents.
I set up my server to use a different collector and it seems like it
decreased from 11% to 4%, of course I need to wait a bit more because it is
just a 1 hour old log. But it seems like it is much better now.
I will tell you on Monday the results :)
On Fri, Sep 25,
Sorry for OFF-topic:
Create dummy Hello, World! JSP, use Tomcat, execute load-stress
simulator(s) from separate machine(s), and measure... don't forget to
allocate necessary thread pools in Tomcat (if you have to)...
Although such a JSP doesn't use any memory, you will see how easy one can go
with