On Mon, Sep 13, 2010 at 6:45 PM, Burton-West, Tom <tburt...@umich.edu> wrote:
> Thanks Kent for your info.
>
> We are not doing any faceting, sorting, or much else.  My guess is that most 
> of the memory increase is just the data structures created when parts of the 
> frq and prx files get read into memory.  Our frq files are about 77GB  and 
> the prx files are about 260GB per shard and we are running 3 shards per 
> machine.   I suspect that the document cache and query result cache don't 
> take up that much space, but will try a run with those caches set to 0, just 
> to see.
>
> We have dual 4 core processors and 74GB total memory.  We want to leave a 
> significant amount of memory free for OS disk caching.
>
> We tried increasing the memory from 20GB to 28GB and adding the 
> -XX:MaxGCPauseMillis=1000 flag but that seemed to have no effect.
>
> Currently I'm testing with the ConcurrentMarkSweep collector, and that's
> looking much better, although I don't understand why it has sized the Eden
> space down into the 20MB range. However, I am very new to Java memory
> management.
>
> Does anyone know whether, when using ConcurrentMarkSweep, it's better to let
> the JVM size the Eden space or to give it some hints?

Really the best thing to do is to run the system for a while with GC
logging on and then look at how often the young generation GC is
occurring.  A set of parameters like:

-verbose:gc -XX:+PrintGCTimeStamps  -XX:+PrintGCDetails

should give you some indication of how often the young-gen GC is
occurring.  If it's happening often, you can try increasing the size
of the young generation.  The option:

-Xloggc:<some file>

will write this information to the specified file rather than sending
it to standard error.
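
For example, a full invocation might look something like this (a sketch
only: the heap size, jar, and log path are placeholders for whatever
your startup script actually uses):

  java -Xms28g -Xmx28g \
       -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails \
       -Xloggc:/var/log/solr/gc.log \
       -jar start.jar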

I've done this a few times with a variety of systems: sometimes you
want to make the young gen bigger and sometimes you don't.
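
If you do decide to give the JVM hints rather than letting it size Eden
on its own, -Xmn pins the young generation to a fixed size, which should
keep CMS from shrinking Eden down into the tens of megabytes.  A sketch,
with purely illustrative sizes rather than a recommendation for your
workload:

  java -Xms28g -Xmx28g \
       -XX:+UseConcMarkSweepGC -XX:+UseParNewGC \
       -Xmn2g -XX:SurvivorRatio=8 \
       -verbose:gc -XX:+PrintGCDetails -Xloggc:/var/log/solr/gc.log \
       -jar start.jar

Watch the GC log after a change like this: a young generation that's too
large can stretch each minor pause, while one that's too small just
makes the pauses more frequent.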

Steve
-- 
Stephen Green
http://thesearchguy.wordpress.com
