Hi Marc,

Why such a big heap?  Do you really need it?  You disabled all caches,
so the JVM really shouldn't need much memory.  Have you tried with
-Xmx20g or even -Xmx8g?  Aha, survivor is getting to 100%, so you kept
increasing -Xmx?

Have you tried just not using any of these:
-XX:+UseG1GC -XX:NewRatio=1 -XX:SurvivorRatio=3 -XX:PermSize=728m
-XX:MaxPermSize=728m ?
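
For example, start minimal and add flags back only when you can measure
why you need them. Just a sketch, assuming Tomcat is started via
catalina.sh -- the 8g heap is a test figure, not a recommendation:

  # Minimal options: default collector and generation sizing,
  # with a much smaller fixed heap.
  CATALINA_OPTS="-d64 -server -Xms8g -Xmx8g"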

My hunch is that there is a leak somewhere, because without caches you
shouldn't need a 40GB heap.
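
A quick way to check, assuming you can run the JDK tools against the
Tomcat process (replace <pid> with the actual process id; note that
-histo:live forces a full GC first, so expect a pause):

  # Top of the live-object histogram -- take a couple a few minutes apart:
  jmap -histo:live <pid> | head -30
  # GC utilisation sampled every 5 seconds:
  jstat -gcutil <pid> 5000

If the same classes keep growing across histograms even after full GCs,
that points at a leak rather than a tuning problem.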

Otis
--
SOLR Performance Monitoring - http://sematext.com/spm/index.html
Solr & ElasticSearch Support
http://sematext.com/

On Wed, Apr 10, 2013 at 11:48 AM, Marc Des Garets
<marc.desgar...@192.com> wrote:
> Hi,
>
> I run multiple Solr indexes in a single Tomcat (one webapp per index).
> All the indexes were Solr 3.5 and I have upgraded a few of them (about
> half) to Solr 4.1.
>
> The JVM behavior is now radically different and doesn't seem to make
> sense. I was using ConcMarkSweepGC. I am now trying the G1 collector.
>
> The perm gen went from 410MB to 600MB.
>
> The eden space usage is a lot bigger and the survivor space usage is
> 100% all the time.
>
> I don't really understand what is happening. GC behavior really doesn't
> seem right.
>
> My jvm settings:
> -d64 -server -Xms40g -Xmx40g -XX:+UseG1GC -XX:NewRatio=1
> -XX:SurvivorRatio=3 -XX:PermSize=728m -XX:MaxPermSize=728m
>
> I have tried NewRatio=1 and SurvivorRatio=3, hoping to stop the Survivor
> space from being 100% full all the time, but without success.
>
> Here is what jmap is giving me:
> Heap Configuration:
>    MinHeapFreeRatio = 40
>    MaxHeapFreeRatio = 70
>    MaxHeapSize      = 42949672960 (40960.0MB)
>    NewSize          = 1363144 (1.2999954223632812MB)
>    MaxNewSize       = 17592186044415 MB
>    OldSize          = 5452592 (5.1999969482421875MB)
>    NewRatio         = 1
>    SurvivorRatio    = 3
>    PermSize         = 754974720 (720.0MB)
>    MaxPermSize      = 763363328 (728.0MB)
>    G1HeapRegionSize = 16777216 (16.0MB)
>
> Heap Usage:
> G1 Heap:
>    regions  = 2560
>    capacity = 42949672960 (40960.0MB)
>    used     = 23786449912 (22684.526359558105MB)
>    free     = 19163223048 (18275.473640441895MB)
>    55.382144432514906% used
> G1 Young Generation:
> Eden Space:
>    regions  = 674
>    capacity = 20619198464 (19664.0MB)
>    used     = 11307843584 (10784.0MB)
>    free     = 9311354880 (8880.0MB)
>    54.841334418226204% used
> Survivor Space:
>    regions  = 115
>    capacity = 1929379840 (1840.0MB)
>    used     = 1929379840 (1840.0MB)
>    free     = 0 (0.0MB)
>    100.0% used
> G1 Old Generation:
>    regions  = 732
>    capacity = 20401094656 (19456.0MB)
>    used     = 10549226488 (10060.526359558105MB)
>    free     = 9851868168 (9395.473640441895MB)
>    51.70911985792612% used
> Perm Generation:
>    capacity = 754974720 (720.0MB)
>    used     = 514956504 (491.10079193115234MB)
>    free     = 240018216 (228.89920806884766MB)
>    68.20844332377116% used
>
> The Survivor space even went up to 3.6GB but was still 100% used.
>
> I have disabled all caches.
>
> Obviously I am getting very bad GC performance.
>
> Any idea as to what could be wrong and why this could be happening?
>
>
> Thanks,
>
> Marc
