It looks like a memory leak to me. All Cassandra processes are configured with
the following JVM settings: -Xms10240M -Xmx10240M -Xmn2400M. The old gen fills
up to nearly its maximum, and the old-gen objects do not appear to be getting
garbage collected.
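
One way to double-check whether the old gen is really leaking, rather than just
sitting full until the next full GC, is to watch it with the standard JDK tools
(the pid below is a placeholder for the Cassandra process id):

  # old gen occupancy (O column) and full GC count (FGC), sampled every 5 seconds
  jstat -gcutil <cassandra-pid> 5000

  # forces a full GC first, then dumps a histogram of live objects;
  # if old gen occupancy stays near 100% afterwards, something is holding the memory
  jmap -histo:live <cassandra-pid> | head -n 30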

Alexey


On Fri, May 23, 2014 at 3:56 PM, Nate McCall <n...@thelastpickle.com> wrote:

> Is compaction keeping up? Those replication settings are high - for every
> write, 10 nodes are doing something.
>
> What other monitoring stats do you have - what is IO, CPU and network
> traffic like? Is the JVM GC activity growing?
>
> Does anything else stick out, like a growing number of network connections to
> 9160 or 9042 on the cluster?
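>
> For a quick read on both, something along these lines might help (the node
> name is a placeholder, and this assumes Linux-style netstat output):
>
>   nodetool -h <node> compactionstats             # pending compaction tasks
>   netstat -an | grep -E ':(9160|9042)' | wc -l   # rough client connection count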
>
> The only leak-y things I've seen lately could either be:
>
>  from a JVM bug leaking permgen from JMX invocation:
> https://issues.apache.org/jira/browse/CASSANDRA-6541
>
> or if you have anything using thrift (this includes opscenter) and you are
> using HSHA as the rpc_server_type:
> https://issues.apache.org/jira/browse/CASSANDRA-4265
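>
> (If in doubt, the relevant setting is rpc_server_type in cassandra.yaml, for
> example:
>
>   rpc_server_type: sync    # "hsha" is the mode affected by CASSANDRA-4265
>
> "sync" sidesteps the HSHA code path entirely.)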
>
> Otherwise, I think this is config/setup tuning.
>
>
>
>
> On Fri, May 23, 2014 at 7:00 AM, Alexey <alexe...@gmail.com> wrote:
>
>> Hi all,
>>
>> I've noticed increased latency on our tomcat REST-service (average 30ms,
>> max > 2sec). We are using Cassandra 1.2.16 with official DataStax Java
>> driver v1.0.3.
>>
>> Our setup:
>>
>> * 2 DCs
>> * each DC: 7 nodes
>> * RF=5
>> * Leveled compaction
>>
>> After cassandra restart on all nodes, the latencies are alright again
>> (average < 5ms, max 50ms).
>>
>> Any thoughts are greatly appreciated.
>>
>> Thanks,
>> Alexey
>>
>
>
>
> --
> -----------------
> Nate McCall
> Austin, TX
> @zznate
>
> Co-Founder & Sr. Technical Consultant
> Apache Cassandra Consulting
> http://www.thelastpickle.com
>
