Not long, the uptime is 6828 seconds (just under two hours). Full nodetool info output:

Token            : 56713727820156410577229101238628035242
ID               : c796609a-a050-48df-bf56-bb09091376d9
Gossip active    : true
Thrift active    : true
Native Transport active: false
Load             : 49.71 GB
Generation No    : 1386344053
Uptime (seconds) : 6828
Heap Memory (MB) : 2409.71 / 8112.00
Data Center      : DC
Rack             : RAC-1
Exceptions       : 0
Key Cache        : size 56154704 (bytes), capacity 104857600 (bytes), 27 hits, 155669426 requests, 0.000 recent hit rate, 14400 save period in seconds
Row Cache        : size 0 (bytes), capacity 0 (bytes), 0 hits, 0 requests, NaN recent hit rate, 0 save period in seconds


On Fri, Dec 6, 2013 at 11:15 AM, Vicky Kak <vicky....@gmail.com> wrote:

> How long has the server been up: hours, days, months?
>
>
> On Fri, Dec 6, 2013 at 10:41 PM, srmore <comom...@gmail.com> wrote:
>
>> Looks like I am spending some time in GC.
>>
>> java.lang:type=GarbageCollector,name=ConcurrentMarkSweep
>>
>> CollectionTime = 51707;
>> CollectionCount = 103;
>>
>> java.lang:type=GarbageCollector,name=ParNew
>>
>>  CollectionTime = 466835;
>>  CollectionCount = 21315;
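>>
>> Rough math on those numbers, assuming they cover the same window as the
>> 6828 s uptime above: ParNew 466835 ms + CMS 51707 ms is roughly 518 s,
>> i.e. about 7-8% of wall-clock time in GC, and the ParNew pauses average
>> about 466835 / 21315 = ~22 ms each.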
>>
>>
>> On Fri, Dec 6, 2013 at 9:58 AM, Jason Wee <peich...@gmail.com> wrote:
>>
>>> Hi srmore,
>>>
>>> Perhaps connect to the JVM with jconsole over JMX; then, under the
>>> MBeans tab, start inspecting the GC metrics.
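>>>
>>> If you'd rather script it than click through jconsole, a minimal JMX
>>> client along these lines should work (the host, the class name and port
>>> 7199 here are only illustrative; 7199 is Cassandra's default JMX port):
>>>
>>> import javax.management.MBeanServerConnection;
>>> import javax.management.ObjectName;
>>> import javax.management.remote.JMXConnector;
>>> import javax.management.remote.JMXConnectorFactory;
>>> import javax.management.remote.JMXServiceURL;
>>> import java.util.Set;
>>>
>>> public class GcStats {
>>>     public static void main(String[] args) throws Exception {
>>>         // Adjust host/port to your node; 7199 is the Cassandra default.
>>>         JMXServiceURL url = new JMXServiceURL(
>>>                 "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
>>>         JMXConnector jmxc = JMXConnectorFactory.connect(url);
>>>         try {
>>>             MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
>>>             // Same MBeans jconsole shows under java.lang > GarbageCollector
>>>             Set<ObjectName> gcs = mbs.queryNames(
>>>                     new ObjectName("java.lang:type=GarbageCollector,name=*"), null);
>>>             for (ObjectName gc : gcs) {
>>>                 long count = (Long) mbs.getAttribute(gc, "CollectionCount");
>>>                 long time = (Long) mbs.getAttribute(gc, "CollectionTime");
>>>                 System.out.printf("%s: count=%d, time=%d ms%n",
>>>                         gc.getKeyProperty("name"), count, time);
>>>             }
>>>         } finally {
>>>             jmxc.close();
>>>         }
>>>     }
>>> }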
>>>
>>> /Jason
>>>
>>>
>>> On Fri, Dec 6, 2013 at 11:40 PM, srmore <comom...@gmail.com> wrote:
>>>
>>>>
>>>>
>>>>
>>>> On Fri, Dec 6, 2013 at 9:32 AM, Vicky Kak <vicky....@gmail.com> wrote:
>>>>
>>>>> Hard to say much without knowing the Cassandra configuration.
>>>>>
>>>>
>>>> The Cassandra JVM configuration is:
>>>> -Xms8G
>>>> -Xmx8G
>>>> -Xmn800m
>>>> -XX:+UseParNewGC
>>>> -XX:+UseConcMarkSweepGC
>>>> -XX:+CMSParallelRemarkEnabled
>>>> -XX:SurvivorRatio=4
>>>> -XX:MaxTenuringThreshold=2
>>>> -XX:CMSInitiatingOccupancyFraction=75
>>>> -XX:+UseCMSInitiatingOccupancyOnly
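>>>>
>>>> One thing I could add on top of this while chasing the pauses is GC
>>>> logging, so collections show up in a dedicated log even when nothing
>>>> reaches system.log (standard HotSpot flags; the log path is just an
>>>> example):
>>>>
>>>> -verbose:gc
>>>> -XX:+PrintGCDetails
>>>> -XX:+PrintGCDateStamps
>>>> -XX:+PrintGCApplicationStoppedTime
>>>> -Xloggc:/var/log/cassandra/gc.log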
>>>>
>>>>
>>>>
>>>>> Yes, compactions/GCs could spike the CPU; I had similar behavior with
>>>>> my setup.
>>>>>
>>>>
>>>> Were you able to get around it?
>>>>
>>>>
>>>>>
>>>>> -VK
>>>>>
>>>>>
>>>>> On Fri, Dec 6, 2013 at 7:40 PM, srmore <comom...@gmail.com> wrote:
>>>>>
>>>>>> We have a 3-node cluster running Cassandra 1.2.12. They are pretty big
>>>>>> machines: 64 GB RAM with 16 cores, and the Cassandra heap is 8 GB.
>>>>>>
>>>>>> The interesting observation is that when I send traffic to a single node,
>>>>>> its throughput is about 2x what I get when I send traffic to all the nodes.
>>>>>> We ran 1.0.11 on the same boxes and saw a slight dip, but not the halving
>>>>>> we see with 1.2.12. In both cases we were writing with LOCAL_QUORUM;
>>>>>> changing the CL to ONE makes a slight improvement, but not much.
>>>>>>
>>>>>> read_repair_chance is 0.1, and we see some compactions running.
>>>>>>
>>>>>> Following is my iostat -x output; sda is the SSD (for the commit log) and
>>>>>> sdb is the spinner.
>>>>>>
>>>>>> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>>>>>>           66.46    0.00    8.95    0.01    0.00   24.58
>>>>>>
>>>>>> Device:         rrqm/s   wrqm/s   r/s   w/s   rsec/s   wsec/s avgrq-sz avgqu-sz   await  svctm  %util
>>>>>> sda               0.00    27.60  0.00  4.40     0.00   256.00    58.18     0.01    2.55   1.32   0.58
>>>>>> sda1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> sda2              0.00    27.60  0.00  4.40     0.00   256.00    58.18     0.01    2.55   1.32   0.58
>>>>>> sdb               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> sdb1              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> dm-0              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> dm-1              0.00     0.00  0.00  0.60     0.00     4.80     8.00     0.00    5.33   2.67   0.16
>>>>>> dm-2              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> dm-3              0.00     0.00  0.00 24.80     0.00   198.40     8.00     0.24    9.80   0.13   0.32
>>>>>> dm-4              0.00     0.00  0.00  6.60     0.00    52.80     8.00     0.01    1.36   0.55   0.36
>>>>>> dm-5              0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
>>>>>> dm-6              0.00     0.00  0.00 24.80     0.00   198.40     8.00     0.29   11.60   0.13   0.32
>>>>>>
>>>>>>
>>>>>>
>>>>>> I can see I am CPU bound here, but I couldn't figure out exactly what is
>>>>>> causing it: is it GC or compaction? I am leaning towards compaction, since
>>>>>> I see a lot of context switches and interrupts in my vmstat output.
>>>>>>
>>>>>> I don't see GC activity in the logs, but I do see some compaction activity.
>>>>>> Has anyone seen this, or does anyone know what can be done to free up the CPU?
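>>>>>>
>>>>>> A rough list of what I can check next to separate the two (assuming the
>>>>>> usual 1.2 nodetool commands):
>>>>>>
>>>>>> nodetool compactionstats   # pending/active compactions and bytes remaining
>>>>>> nodetool tpstats           # pending/blocked thread pool stages
>>>>>> vmstat 1                   # context switches / interrupts over time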
>>>>>>
>>>>>> Thanks,
>>>>>> Sandeep
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
