Upgrade to 6.x, which ships with generally decent JVM settings, and decrease 
your heap; making it that large is detrimental at best.
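
For example, in solr.in.sh (a minimal sketch; the 8g figure is only a 
starting point, tune it against your own GC logs):

    SOLR_HEAP="8g"                        # sets both -Xms and -Xmx
    # or, to set them explicitly:
    # SOLR_JAVA_MEM="-Xms8g -Xmx8g"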

Our shards can be 25 GB in size, yet we run fine (apart from other, recently 
discovered problems) with a 900 MB heap, so you probably have a lot of room 
to spare. Your max heap is over 100 times larger than ours, while your index 
is only around 16 times larger. It should work with far less.
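
If you want to find your real headroom empirically (assuming a JDK is 
installed on the box), lower the max heap stepwise and watch old-generation 
occupancy with jstat:

    jstat -gcutil <solr-pid> 5000

If the O column stays well below your CMS trigger of 50 percent after a few 
collections, you can likely go lower still.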

As a bonus, with a smaller heap, you can have much more index data in mapped 
memory.
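
Back-of-the-envelope, using the numbers from your mail below:

    132 GB RAM - 100 GB heap ~  32 GB left for the OS page cache
    132 GB RAM -   8 GB heap ~ 124 GB left for the OS page cache

With the smaller heap, the OS can keep roughly a quarter of your 417 GB 
index hot in mapped memory, instead of well under a tenth.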

Regards,
Markus

-----Original message-----
> From:David Hastings <hastings.recurs...@gmail.com>
> Sent: Tuesday 25th July 2017 22:15
> To: solr-user@lucene.apache.org
> Subject: Re: Optimize stalls at the same point
> 
> It turned out that I think it was a large GC operation, as it has since
> resumed optimizing. The current Java options are as follows for the
> indexing server (they are different for the search servers). If you have
> any suggestions for changes, I am more than happy to hear them; honestly,
> they have just been passed down from one installation to the next ever
> since we used Tomcat to host Solr:
> -server -Xss256k -d64 -Xmx100000m -Xms7000m -XX:NewRatio=3
> -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=8 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc
> -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
> -XX:+PrintGCApplicationStoppedTime
> -Xloggc:XXXXXX/solr-5.2.1/server/logs/solr_gc.log
> 
> and for my live searchers i use:
> -server -Xss256k -Xms50000m -Xmx50000m -XX:NewRatio=3 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=8 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled -verbose:gc
> -XX:+PrintHeapAtGC -XX:+PrintGCDetails -XX:+PrintGCDateStamps
> -XX:+PrintGCTimeStamps -XX:+PrintTenuringDistribution
> -XX:+PrintGCApplicationStoppedTime
> -Xloggc:/SSD2TB01/solr-5.2.1/server/logs/solr_gc.log
> 
> 
> 
> On Tue, Jul 25, 2017 at 4:02 PM, Walter Underwood <wun...@wunderwood.org>
> wrote:
> 
> > Are you sure you need a 100GB heap? The stall could be a major GC.
> >
> > We run with an 8 GB heap. We also run with Xmx equal to Xms; growing
> > memory to the max was really time-consuming after startup.
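> >
> > For example (sizes illustrative; bin/solr's -m flag sets -Xms and -Xmx
> > to the same value):
> >
> >     bin/solr start -m 8g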
> >
> > What version of Java? What GC options?
> >
> > wunder
> > Walter Underwood
> > wun...@wunderwood.org
> > http://observer.wunderwood.org/  (my blog)
> >
> >
> > > On Jul 25, 2017, at 12:03 PM, David Hastings <
> > hastings.recurs...@gmail.com> wrote:
> > >
> > > I am trying to optimize a rather large index (417 GB) because it's
> > > sitting at 28% deletions. However, when optimizing, it stops at
> > > exactly 492.24 GB every time. When I restart Solr it falls back down
> > > to 417 GB, and again, if I send an optimize command, it reaches the
> > > exact same 492.24 GB and stops optimizing. There is plenty of space
> > > on the drive, and I'm running it at -Xmx100000m -Xms7000m on a
> > > machine with 132 GB of RAM and 24 cores. I have never run into this
> > > problem before, but I have also never had the index get this large.
> > > Any ideas?
> > > (Solr 5.2, by the way)
> > > Thanks,
> > > -Dave
> >
> >
> 
