Several of our search engines use pretty large heaps (12-24GB).  That means
that if they *ever* do a full collection, disaster ensues, because a
stop-the-world collection on a heap that size can pause the JVM long enough
to blow right through things like ZooKeeper session timeouts.

So we have to use the concurrent and parallel collectors as much as possible
and make sure that the young generation collections catch all of the
ephemeral garbage before it can be tenured.  One server, for instance, uses
the following java options:

      -verbose:gc
      -XX:+PrintGCDetails -XX:+PrintGCTimeStamps
      -XX:+PrintTenuringDistribution

These options give us lots of detail about what is happening in the
collections.  Most importantly, we watch the tenuring distribution to make
sure it never shows a significant tail of objects surviving long enough to
be promoted into the tenured generation, since filling that space is what
eventually forces a full collection.  This is pretty safe in general because
our servers either create objects to respond to a single request or create
cached items that survive essentially forever.
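
One thing these options don't do is pick an output file; all of this GC
logging goes to stdout.  If you want it in its own file, the usual companion
flag is something like

      -Xloggc:/var/log/search/gc.log

(the path there is just an example; it isn't part of the options above).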

      -XX:+UseParNewGC -XX:+UseConcMarkSweepGC

Concurrent collectors are critical.  We use the HBase recommendations here.

      -XX:MaxTenuringThreshold=6 -XX:SurvivorRatio=6

Max tenuring threshold is related to what we saw in the tenuring
distribution.  We very rarely see any objects survive 4 young collections,
so a threshold of 6 means an object would have to survive two more
collections than we ever observe before becoming tenured.  The survivor
ratio is related to this and is set based on recommendations for non-stop,
low-latency servers.
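
As a quick sanity check on what -XX:SurvivorRatio=6 implies for sizing (the
young generation size below is a made-up number, not our actual layout):

      eden = 6 * survivor_space
      young gen = eden + 2 * survivor_space = 8 * survivor_space
      => each survivor space is young/8, i.e. 12.5% of the young generation
      e.g. a 1024MB young gen gives 768MB of eden and two 128MB survivor spaces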

      -XX:CMSInitiatingOccupancyFraction=60
      -XX:+UseCMSInitiatingOccupancyOnly

CMS collections have a couple of ways to be triggered.  We limit it to a
single trigger (old generation occupancy) to make the world simpler.  Again,
this is taken from outside recommendations from the HBase folks and other
commenters on the web.
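
In concrete terms, -XX:CMSInitiatingOccupancyFraction=60 starts a concurrent
CMS cycle once the old generation is 60% full (the old generation size below
is hypothetical, not our actual layout):

      e.g. a 6GB old generation  =>  background CMS cycle starts near 3.6GB used

The remaining 40% is headroom for objects promoted while the concurrent
cycle is still running.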

      -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC

I doubt that these are important.  Parallel remark just shortens one of the
CMS pauses, and disabling explicit GC avoids any possibility of some library
calling System.gc() and triggering a huge full collection.
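
As an illustration of what -XX:+DisableExplicitGC guards against, some
libraries (RMI's distributed GC is the classic example) periodically do
this:

      // With -XX:+DisableExplicitGC this call is simply ignored; otherwise
      // it would trigger a stop-the-world full collection.
      System.gc();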

      -XX:ParallelGCThreads=8

If the parallel GC needs horsepower, I want it to get it.

      -Xdebug

Very rarely useful, but a royal pain if it isn't enabled when you need it.
I don't know whether it has a performance impact (I think not).
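
For what it's worth, -Xdebug by itself doesn't open a debugger port; to
actually attach a debugger, a JDWP agent along these lines has to be added
as well (the port number here is just an example):

      -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8000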

      -Xms8000m -Xmx8000m

Setting the minimum heap equal to the maximum keeps the heap from resizing
and helps avoid full GCs during the early life of the server.
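
Putting all of the above together, the launch command ends up looking
roughly like this (the jar name is a placeholder; the flags are just the
ones discussed above):

      java -Xms8000m -Xmx8000m \
           -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
           -XX:+PrintTenuringDistribution \
           -XX:+UseParNewGC -XX:+UseConcMarkSweepGC \
           -XX:MaxTenuringThreshold=6 -XX:SurvivorRatio=6 \
           -XX:CMSInitiatingOccupancyFraction=60 \
           -XX:+UseCMSInitiatingOccupancyOnly \
           -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC \
           -XX:ParallelGCThreads=8 \
           -Xdebug \
           -jar search-server.jar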


On Tue, Nov 10, 2009 at 11:27 AM, Patrick Hunt <ph...@apache.org> wrote:

> Can you elaborate on "gc tuning" - you are using the incremental collector?
>
> Patrick
>
>
> Ted Dunning wrote:
>
>> The server side is a fairly standard (but old) config:
>>
>> tickTime=2000
>> dataDir=/home/zookeeper/
>> clientPort=2181
>> initLimit=5
>> syncLimit=2
>>
>> Most of our clients now use 5 seconds as the timeout, but I think that we
>> went to longer timeouts in the past.  Without digging in to determine the
>> truth of the matter, my guess is that we needed the longer timeouts before
>> we tuned the GC parameters and that after tuning GC, we were able to return
>> to a more reasonable timeout.  In retrospect, I think that we blamed EC2
>> for some of our own GC misconfiguration.
>>
>> I would not use our configuration here as canonical since we didn't apply
>> a whole lot of brainpower to this problem.
>>
>> On Tue, Nov 10, 2009 at 9:29 AM, Patrick Hunt <ph...@apache.org> wrote:
>>
>>> Ted, could you provide your configuration information for the cluster
>>> (incl the client timeout you use), if you're willing I'd be happy to put
>>> this up on the wiki for others interested in running in EC2.
>>>
>>>
>>
>>
>>


-- 
Ted Dunning, CTO
DeepDyve
