nodetool gcstats tells me this (the Total GC Elapsed is half or more of the
Interval).
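
For completeness, the same fraction can be cross-checked with plain JDK
tooling; a minimal sketch (the pgrep pattern and the 1-second/60-sample
parameters below are only illustrative assumptions, not something I've run
on these boxes yet):

  # GCT is cumulative GC time in seconds, so its growth over the sampling
  # window relative to wall-clock time gives the fraction spent in GC.
  jstat -gcutil $(pgrep -f CassandraDaemon) 1000 60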

We had to take the production load off the new vnode DC, since it was
messing things up badly, so I'm not able to run any tools against it at the
moment.
The cassandra-env.sh is the default, and the servers have 8 GB of RAM.
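
If it matters: with the stock cassandra-env.sh, an 8 GB box ends up with
roughly a 2 GB heap, since the default sizing takes
max(min(RAM/2, 1024 MB), min(RAM/4, 8192 MB)). A rough paraphrase of that
calculation as a standalone sketch (the 8192 MB figure is just our machine
size, not something read from the script):

  #!/bin/sh
  # Paraphrase of the default heap sizing in cassandra-env.sh (2.1.x).
  system_memory_in_mb=8192                       # assumed: our 8 GB servers
  half_ram=$((system_memory_in_mb / 2))          # capped at 1024 MB
  quarter_ram=$((system_memory_in_mb / 4))       # capped at 8192 MB
  [ "$half_ram" -gt 1024 ] && half_ram=1024
  [ "$quarter_ram" -gt 8192 ] && quarter_ram=8192
  if [ "$half_ram" -gt "$quarter_ram" ]; then
      max_heap_mb=$half_ram
  else
      max_heap_mb=$quarter_ram
  fi
  echo "MAX_HEAP_SIZE would be ${max_heap_mb}M"  # -> 2048M on this box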

It would be great if you could respond to my initial question though.
Thanks,
Tom

On Wed, Sep 23, 2015 at 4:14 PM, Sebastian Estevez <
sebastian.este...@datastax.com> wrote:

> This is interesting, where are you seeing that you're collecting 50% of
> the time? Is your env.sh the default? How much ram?
>
> Also, can you run this tool and send a minute worth of thread info:
>
> wget
> https://bintray.com/artifact/download/aragozin/generic/sjk-plus-0.3.6.jar
> java -jar sjk-plus-0.3.6.jar ttop -s localhost:7199 -n 30 -o CPU
> On Sep 23, 2015 7:09 AM, "Tom van den Berge" <tom.vandenbe...@gmail.com>
> wrote:
>
>> I have two data centers, each with the same number of nodes, the same
>> hardware (CPUs, memory), Cassandra version (2.1.6), replication factor,
>> etc. The only difference is that one data center uses vnodes, and the other
>> doesn't.
>>
>> The non-vnode DC works fine (and has been for a long time) under
>> production load: I'm seeing normal CPU and IO load and garbage collection
>> figures. But the vnode DC is struggling very hard under the same load. It
>> has been set up recently. The CPU load is very high, due to excessive
>> garbage collection (>50% of the time is spent collecting).
>>
>> So it seems that Cassandra simply doesn't have enough memory. I'm trying
>> to understand if this can be caused by the use of vnodes. Is there a
>> sensible reason why vnodes would consume more memory than regular nodes? Or
>> do any of you have the same experience? If not, I might be barking up the
>> wrong tree here, and I would love to know before upgrading my servers
>> with more memory.
>>
>> Thanks,
>> Tom
>>
>
