enabled (or make sure you do hard commits
relatively frequently rather than only soft commits.)
-- Jack Krupansky
-Original Message-
From: Marc Des Garets
Sent: Thursday, April 11, 2013 3:07 AM
To: solr-user@lucene.apache.org
Subject: Re: migration solr 3.5 to 4.1 - JVM GC problems
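The hard-commit advice above maps to the autoCommit block in solrconfig.xml. A minimal sketch, assuming illustrative thresholds (the 15-second and 10,000-doc values are not from this thread):

```xml
<!-- solrconfig.xml: commit automatically so hard commits happen regularly
     even if the client only issues soft commits; values are illustrative. -->
<autoCommit>
  <maxTime>15000</maxTime>           <!-- hard commit at least every 15s -->
  <maxDocs>10000</maxDocs>           <!-- ...or after 10k added docs -->
  <openSearcher>false</openSearcher> <!-- don't open a new searcher on hard commit -->
</autoCommit>
```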
I have 45 solr 4.1 indexes. Sizes vary between 20Gb and 2.2Gb.
- 1 is 20Gb (80 million docs)
- 1 is 5.1Gb (24 million docs)
- 1 is 5.6Gb (26 million docs)
- 1 is 6.5Gb (28 million docs)
- 11 others are about 2.2Gb (6-7 million docs).
- 20 others are about 600Mb (2.5 million docs)
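Summing the 35 indexes itemized above (the remaining 10 of the 45 aren't broken out) gives roughly 73Gb of index and about 280 million documents, which is consistent with the "hundreds of millions" figure later in the thread. A quick check:

```shell
# Rough totals for the index sizes listed above (GB and millions of docs);
# the 600Mb indexes are counted as 0.6GB. Covers only the 35 itemized indexes.
gb=$(awk 'BEGIN{printf "%.1f", 20+5.1+5.6+6.5+11*2.2+20*0.6}')
docs=$(awk 'BEGIN{printf "%.1f", 80+24+26+28+11*6.5+20*2.5}')
echo "total: ${gb}GB, ${docs} million docs"
```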
That reminds me
Hi Marc;
Could you tell me your index size and your query performance, in queries
per second?
2013/4/11 Marc Des Garets
> Big heap because of the very large number of requests, with more than 60
> indexes and hundreds of millions of documents (all indexes together). My
> problem is with solr 4.1. All is perfect with 3.5.
Big heap because of the very large number of requests, with more than 60
indexes and hundreds of millions of documents (all indexes together). My
problem is with solr 4.1. All is perfect with 3.5. I have 0.05 sec GCs
every 1 or 2 min and 20Gb of the heap is used.
With the 4.1 indexes it uses 30Gb-33Gb, the s
On 4/10/2013 9:48 AM, Marc Des Garets wrote:
> The JVM behavior is now radically different and doesn't seem to make
> sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
> The perm gen went from 410Mb to 600Mb.
> The eden space usage is a lot bigger and the survivor space usage is
> 100%.
Hi Marc,
Why such a big heap? Do you really need it? You disabled all caches,
so the JVM really shouldn't need much memory. Have you tried with
-Xmx20g or even -Xmx8g? Aha, survivor is getting to 100% so you kept
increasing -Xmx?
Have you tried just not using any of these:
-XX:+UseG1GC -XX:Ne
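The experiment suggested above can be sketched as Tomcat settings: drop the G1-specific flags, go back to CMS as in the 3.5 setup, and cap the heap well below 30Gb. The exact values here (20g, the logging flags) are illustrative assumptions, not from the thread:

```shell
# Illustrative CATALINA_OPTS: no G1 flags, CMS collector, heap capped at 20g,
# GC logging enabled so the 3.5 and 4.1 behavior can be compared.
CATALINA_OPTS="-Xms20g -Xmx20g -XX:+UseConcMarkSweepGC \
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
export CATALINA_OPTS
```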
Hi,
I run multiple solr indexes in 1 single tomcat (1 webapp per index). All
the indexes were solr 3.5 and I have upgraded some of them to solr 4.1
(about half of them).
The JVM behavior is now radically different and doesn't seem to make
sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
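One way to watch the generation behavior described above (survivor, eden, old gen, perm gen) on the running Tomcat is jstat; a sketch, with `<pid>` as a placeholder for the Tomcat JVM's process id:

```shell
# Sample GC utilisation every 5 seconds: survivor (S0/S1), eden (E),
# old (O) and perm (P) occupancy plus young/full GC counts and times.
jstat -gcutil <pid> 5000
```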