On 1/12/2015 5:34 AM, Thomas Lamy wrote:
> I found no big/unusual GC pauses in the log (at least manually; I
> found no free solution to analyze them that worked out of the box on a
> headless Debian wheezy box). Eventually I tried with -Xmx8G (it was
> 64G before) on one of the nodes, after checking that allocation after
> one hour of run time was at about 2-3GB. That didn't move the time
> frame where a restart was needed, so I don't think Solr's JVM GC is
> the problem.
> We're trying to get all of our nodes' logs (ZooKeeper and Solr) into
> Splunk now, just to get a better sorted view of what's going on in the
> cloud once a problem occurs. We're also enabling GC logging for
> ZooKeeper; maybe we were missing problems there while focusing on
> Solr logs.
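
On the GC logging point: for a Java 7/8 JVM of that era, flags along
these lines are what I'd typically use, and logs written this way can be
read by the viewer mentioned below.  The log path here is only a
placeholder for wherever you want the file to land:

  -verbose:gc
  -Xloggc:/var/log/zookeeper/gc.log    (placeholder path)
  -XX:+PrintGCDetails
  -XX:+PrintGCDateStamps
  -XX:+PrintGCTimeStamps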

If you make a copy of the gc log, you can put it on another system with
a GUI and graph it with this:

http://sourceforge.net/projects/gcviewer

Just double-click the jar to run the program.  For a clearer graph, I
find it useful to go to the View menu and uncheck everything except the
two "GC Times" options.  You can also change the zoom to a lower
percentage so you can see more of the graph.
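
You can also hand it the log file on the command line when you launch
the jar.  The exact jar filename depends on the version you download,
so this is only an example invocation:

  java -jar gcviewer-1.36.jar solr_gc.log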

That program is how I got the graph you can see on my wiki page about GC
tuning:

http://wiki.apache.org/solr/ShawnHeisey#GC_Tuning

Another possible problem is that your install is exhausting the thread
pool.  Tomcat defaults to a maxThreads value of only 200.  There's a
good chance that your setup will need more than 200 threads at least
occasionally.  If you're near that limit, a thread shortage that shows
up about once a day, driven by indexing activity, seems like a real
possibility.  Try setting maxThreads to 10000 in the Tomcat config.
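
The maxThreads attribute goes on the <Connector> element in Tomcat's
server.xml.  The port, protocol, and timeout shown here are just
placeholders for whatever your install already uses; only the
maxThreads value is the point:

  <Connector port="8983" protocol="HTTP/1.1"
             connectionTimeout="20000"
             maxThreads="10000" />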

Thanks,
Shawn
