Do you have a lot of individual tables?  Or lots of small compactions?

I think the general consensus is that (at least for Cassandra) 8GB heaps
are ideal; with CMS, larger heaps tend to mean longer old-generation
pauses.
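
For reference, here's a minimal sketch of what that looks like in the
stock cassandra-env.sh (the variable names are from the shipped script;
the 100MB-per-core sizing for the new gen is the usual rule of thumb,
and 8 cores is an assumption on my part):

    # cassandra-env.sh -- cap the heap at 8G; size the new gen at
    # roughly 100MB per physical core (800M assumes 8 cores)
    MAX_HEAP_SIZE="8G"
    HEAP_NEWSIZE="800M"

These end up as -Xms/-Xmx and -Xmn on the JVM command line.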

If you have lots of small tables, that’s a known anti-pattern (I
believe): each table carries its own in-memory overhead (memtables,
bloom filters, index summaries), and the Cassandra internals could do a
better job of handling that metadata representation.
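
If you want a quick count of tables, something like this works on 1.2
(a hypothetical one-liner; cfstats relabeled "Column Family:" to
"Table:" in 2.1, so the grep is version-specific):

    # count column families reported by a 1.2 node
    nodetool cfstats | grep -c "Column Family:"

cfstats also breaks out per-table bloom filter and memtable numbers if
you want to see where that overhead actually sits.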

I think this has been improved in 2.0 and 2.1, though, so the fact that
you’re on 1.2.18 could exacerbate the issue.  You might want to consider
an upgrade (though that has its own issues as well).
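
In the meantime it might be worth watching the collector directly and
confirming where the old gen is going when things degrade.  Standard
JDK tools are enough (the pid placeholder is yours to fill in):

    # GC utilization once per second; the O column is old-gen occupancy,
    # and it hovering near 100% would match the CMS pressure you describe
    jstat -gcutil <cassandra-pid> 1000

    # heap dump of live objects for MAT/jhat (note: forces a full GC)
    jmap -dump:live,format=b,file=/tmp/cassandra-heap.hprof <cassandra-pid>

That should tell you whether the CompressedRandomAccessReader objects
really dominate the old generation.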

On Sun, Feb 8, 2015 at 12:44 PM, Jiri Horky <ho...@avast.com> wrote:

> Hi all,
>
> we are seeing quite high GC pressure (in the old generation, under the
> CMS collector) on a node with 4TB of data. It runs C* 1.2.18 with 12G
> of heap (2G for the new generation). The node runs fine for a couple of
> days, then the GC activity starts to rise and reaches about 15% of the
> C* activity, which causes dropped messages and other problems.
>
> Taking a look at a heap dump, there is about 8G used by SSTableReader
> instances, in org.apache.cassandra.io.compress.CompressedRandomAccessReader.
>
> Is this expected, and have we simply reached the limit of how much data
> a single Cassandra instance can handle, or is it possible to tune it
> better?
>
> Regards
> Jiri Horky
>



-- 

Founder/CEO Spinn3r.com
Location: San Francisco, CA
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
<https://plus.google.com/102718274791889610666/posts>
