Hello everyone,
First of all here is our Solr setup:
- Solr nightly build 986158
- Running Solr inside the default Jetty that comes with the Solr build
- 1 write-only master, 4 read-only slaves (quad-core 5640 with 24GB of RAM)
- Index replicated (on optimize) to slaves via Solr Replication
- Size of ind
Hi Doğacan,
Are you, at some point, running out of heap space? In my experience, that's
the common cause of increased load and excessively high response times (or
timeouts).
Cheers,
> Hello everyone,
>
> First of all here is our Solr setup:
>
> - Solr nightly build 986158
> - Running solr in
Hello,
2011/3/14 Markus Jelsma
> Hi Doğacan,
>
> Are you, at some point, running out of heap space? In my experience, that's
> the common cause of increased load and excessively high response times (or
> timeouts).
>
>
How much heap would be enough? Our index size is growing slowly
b
> Hello,
>
> 2011/3/14 Markus Jelsma
>
> > Hi Doğacan,
> >
> > Are you, at some point, running out of heap space? In my experience,
> > that's the common cause of increased load and excessivly high response
> > times (or time
> > outs).
>
> How much heap would be enough? Our index si
I've definitely had cases in 1.4.1 where, even though I didn't have an
OOM error, Solr was being weirdly slow, and increasing the JVM heap size
fixed it. I can't explain why it happened, or exactly how you'd know
this was going on; I didn't see anything odd in the logs to indicate it, I
just tried
Hello again,
2011/3/14 Markus Jelsma
> > Hello,
> >
> > 2011/3/14 Markus Jelsma
> >
> > > Hi Doğacan,
> > >
> > > Are you, at some point, running out of heap space? In my experience,
> > > that's the common cause of increased load and excessively high response
> > > times (or timeouts).
>
> Nope, no OOM errors.
That's a good start!
> Insanity count is 0 and fieldCache has 12 entries. We do use some boosting
> functions.
>
> Btw, I am monitoring output via jconsole with 8GB of RAM, and the heap still
> climbs to 8GB every 20 seconds or so; GC runs and it falls back to 1GB.
Hmm, maybe the garb
It's actually, as I understand it, expected JVM behavior for the heap to
rise close to its limit before it gets GC'd; that's how Java GC works.
Whether that should happen every 20 seconds or not, I don't know.
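That sawtooth can also be checked from the command line; here's a minimal sketch using the JDK's jstat tool (the pid is a placeholder, find the real one with `jps -l`):

```shell
# Sketch: take one snapshot of GC stats for the Solr JVM.
# O = old gen occupancy %, E = eden %, YGC/FGC = young/full GC counts.
SOLR_PID="${SOLR_PID:-12345}"   # placeholder pid; find yours with `jps -l`
if command -v jstat >/dev/null 2>&1; then
  jstat -gcutil "$SOLR_PID" || echo "no JVM found with pid $SOLR_PID"
else
  echo "jstat not found; run this on the host where the JDK is installed"
fi
```

A heap that repeatedly fills and drops back after collections is the normal sawtooth; an FGC count that climbs quickly is the symptom to actually worry about.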
Another option is setting better JVM garbage collection arguments, so GC
doesn
You might also want to add the following switches for your GC log.
> JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCTimeStamps
> -XX:+PrintGCDetails -Xloggc:/var/log/tomcat6/gc.log"
-XX:+PrintGCApplicationConcurrentTime
-XX:+PrintGCApplicationStoppedTime
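Pulled together, the suggested switches might look like this in a start script (the log path is just the example from above; adjust it to your install):

```shell
# Sketch of a start-script fragment combining the GC logging switches above.
# /var/log/tomcat6/gc.log is an example path, not a recommendation.
JAVA_OPTS="$JAVA_OPTS -verbose:gc \
  -XX:+PrintGCTimeStamps \
  -XX:+PrintGCDetails \
  -XX:+PrintGCApplicationConcurrentTime \
  -XX:+PrintGCApplicationStoppedTime \
  -Xloggc:/var/log/tomcat6/gc.log"
echo "$JAVA_OPTS"
```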
>
> Also, what JVM version are you using
That depends on your GC settings and generation sizes. And, instead of
UseParallelGC you'd better use UseParNewGC in combination with CMS.
See 22: http://java.sun.com/docs/hotspot/gc1.4.2/faq.html
> It's actually, as I understand it, expected JVM behavior to see the heap
> rise to close to its
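As a concrete sketch of that advice (the heap sizes are illustrative placeholders, not a sizing recommendation):

```shell
# Sketch: CMS for the old generation with ParNew for the young generation,
# per the advice above. -Xms/-Xmx values are illustrative only.
JAVA_OPTS="$JAVA_OPTS -Xms4g -Xmx4g \
  -XX:+UseConcMarkSweepGC \
  -XX:+UseParNewGC"
echo "$JAVA_OPTS"
```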
Hello,
2011/3/14 Markus Jelsma
> That depends on your GC settings and generation sizes. And, instead of
> UseParallelGC you'd better use UseParNewGC in combination with CMS.
>
>
JConsole now shows a different profile output but load is still high and
performance is still bad.
Btw, here is the t
Mmm. SearchHandler.handleRequestBody takes care of sharding. Could your system
suffer from http://wiki.apache.org/solr/DistributedSearch#Distributed_Deadlock
?
I'm not sure; I haven't seen a similar issue in a sharded environment,
probably because it was a controlled environment.
> Hello,
>
>
2011/3/14 Markus Jelsma
> Mmm. SearchHandler.handleRequestBody takes care of sharding. Could your
> system
> suffer from
> http://wiki.apache.org/solr/DistributedSearch#Distributed_Deadlock
> ?
>
>
We increased the thread limit (which was 1 before) but it did not help.
Anyway, we will try to disa
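For reference, the container-level thread limit in the Jetty that ships with Solr lives in etc/jetty.xml; a sketch (the class name is Jetty 6's, and the values are illustrative):

```xml
<!-- etc/jetty.xml sketch (Jetty 6, as bundled with Solr): raise the request
     thread pool so concurrent shard sub-requests can't starve each other.
     minThreads/maxThreads values here are illustrative. -->
<Set name="ThreadPool">
  <New class="org.mortbay.thread.QueuedThreadPool">
    <Set name="minThreads">10</Set>
    <Set name="maxThreads">250</Set>
  </New>
</Set>
```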
My solr+jetty+java6 install seems to work well with these GC options.
It's a dual processor environment:
-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode
I've never had a real problem with memory, so I've not done any kind of
auditing. I probably should, but time is a limited resource.
Shawn
CMS is very good for multicore CPUs. Use incremental mode only when you have
a single CPU with only one or two cores.
On Tuesday 15 March 2011 16:03:38 Shawn Heisey wrote:
> My solr+jetty+java6 install seems to work well with these GC options.
> It's a dual processor environment:
>
> -XX:+UseCo
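The distinction might look like this in practice (a sketch; pick one of the two, per the advice above):

```shell
# Sketch contrasting the two setups described above.
# Multicore box: plain CMS (with ParNew young gen); incremental mode off.
MULTICORE_OPTS="-XX:+UseConcMarkSweepGC -XX:+UseParNewGC"
# Single CPU with one or two cores: incremental mode so CMS yields the CPU.
SMALL_BOX_OPTS="-XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
echo "multicore: $MULTICORE_OPTS"
echo "small box: $SMALL_BOX_OPTS"
```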
The host is dual quad-core, each Xen VM has been given two CPUs. Not
counting dom0, two of the hosts have 10/8 CPUs allocated, two of them
have 8/8. The dom0 VM is also allocated two CPUs.
I'm not really sure how that works out when it comes to Java running on
the VM, but if at all possible,
> Btw, I am monitoring output via jconsole with 8gb of ram and it still goes
> to 8gb every 20 seconds or so,
> gc runs, falls down to 1gb.
Hmm, the JVM is eating 8GB every 20 seconds; that sounds like a lot.
Do you return all results (ids) for your queries? Any tricky
faceting/sorting/function queries?
Hello,
The problem turned out to be some sort of sharding/searching weirdness. We
modified some code in the sharding path, but I don't think that is related. In
any case, we just added a new server that only shards (it does no searching
and holds no index) and performance is very, very good.
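The pattern described, an aggregator with no local index fanning the query out over the shards parameter, looks roughly like this (hostnames and the query are example values):

```shell
# Sketch of a distributed query sent to a dedicated aggregator node that
# holds no index itself. Hostnames, port, and the query are placeholders.
SHARDS="solr1:8983/solr,solr2:8983/solr,solr3:8983/solr,solr4:8983/solr"
URL="http://aggregator:8983/solr/select?q=foo&shards=${SHARDS}"
# Send it with e.g. curl (commented out here since the hosts are placeholders):
# curl "$URL"
echo "$URL"
```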