Hi,

On 11/07/2013 05:18 AM, Aaron Morton wrote:
>> Class Name | Shallow Heap | Retained Heap
>> ----------------------------------------------------------------------------------------------------
>> java.nio.HeapByteBuffer @ 0x7806a0848 | 48 | 80
>> '- name org.apache.cassandra.db.Column @ 0x7806424e8 | 32 | 112
>>    |- [338530] java.lang.Object[540217] @ 0x57d62f560 Unreachable | 2,160,888 | 2,160,888
>>    |- [338530] java.lang.Object[810325] @ 0x591546540 | 3,241,320 | 7,820,328
>>    |  '- elementData java.util.ArrayList @ 0x75e8424c0 | 24 | 7,820,352
>>    |     |- list org.apache.cassandra.db.ArrayBackedSortedColumns$SlicesIterator @ 0x5940e0b18 | 48 | 128
>>    |     |  '- val$filteredIter org.apache.cassandra.db.filter.SliceQueryFilter$1 @ 0x5940e0b48 | 32 | 7,820,568
>>    |     |     '- val$iter org.apache.cassandra.db.filter.QueryFilter$2 @ 0x5940e0b68 Unreachable | 24 | 7,820,592
>>    |     |- this$0, parent java.util.ArrayList$SubList @ 0x5940e0bb8 | 40 | 40
>>    |     |  '- this$1 java.util.ArrayList$SubList$1 @ 0x5940e0be0 | 40 | 80
>>    |     |     '- currentSlice org.apache.cassandra.db.ArrayBackedSortedColumns$SlicesIterator @ 0x5940e0b18 | 48 | 128
>>    |     |        '- val$filteredIter org.apache.cassandra.db.filter.SliceQueryFilter$1 @ 0x5940e0b48 | 32 | 7,820,568
>>    |     |           '- val$iter org.apache.cassandra.db.filter.QueryFilter$2 @ 0x5940e0b68 Unreachable | 24 | 7,820,592
>>    |     |- columns org.apache.cassandra.db.ArrayBackedSortedColumns @ 0x5b0a33488 | 32 | 56
>>    |     |  '- val$cf org.apache.cassandra.db.filter.SliceQueryFilter$1 @ 0x5940e0b48 | 32 | 7,820,568
>>    |     |     '- val$iter org.apache.cassandra.db.filter.QueryFilter$2 @ 0x5940e0b68 Unreachable | 24 | 7,820,592
>>    |     '- Total: 3 entries
>>    |- [338530] java.lang.Object[360145] @ 0x7736ce2f0 Unreachable | 1,440,600 | 1,440,600
>>    '- Total: 3 entries
>
> Are you doing large slices, or could you have a lot of tombstones on
> the rows?
I don't really know - how can I monitor that?
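
(A rough way to check, assuming the stock nodetool of the 1.2 era; the
keyspace and column family names below are placeholders. The histograms
show per-CF row size and column count distributions, so very wide rows
stand out; per-read tombstone counters only appeared in later releases.)

    # distribution of row sizes and column counts for one CF
    nodetool cfhistograms MyKeyspace MyColumnFamily

    # per-CF summary, including compacted row minimum/maximum sizes
    nodetool cfstats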
>
>> We have disabled the row cache on one node to see the difference.
>> Please see the attached plots from VisualVM; I think the effect is
>> quite visible.
> The default row cache is off the JVM heap - have you changed to
> the ConcurrentLinkedHashCacheProvider?
Answered by Chris already :) No.
>
> One way the SerializingCacheProvider could impact GC is if the CF
> takes a lot of writes. The SerializingCacheProvider invalidates the
> row when it is written to, and it has to read the entire row and
> serialise it on a cache miss.
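
(To illustrate the churn described above - a minimal sketch, not
Cassandra's actual code; the class and method names are made up. The
point is that on a write-heavy CF every write discards the cached copy,
and every subsequent miss materialises the whole row on-heap again,
producing exactly this kind of short-lived garbage.)

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hedged sketch of the serializing row cache's invalidate-on-write,
    // reserialise-on-miss pattern.
    class SerializingRowCacheSketch {
        private final Map<String, byte[]> cache =
                new ConcurrentHashMap<String, byte[]>();

        // Every write to a cached row throws the serialized copy away.
        void onWrite(String rowKey) {
            cache.remove(rowKey);
        }

        // The next read misses, reads the entire row (allocating it
        // on-heap), serialises it back into the cache, and all the
        // intermediate objects immediately become garbage.
        byte[] onRead(String rowKey) {
            byte[] serialized = cache.get(rowKey);
            if (serialized == null) {
                serialized = readAndSerialiseEntireRow(rowKey);
                cache.put(rowKey, serialized);
            }
            return serialized;
        }

        // Stand-in for the real read-and-serialise path.
        private byte[] readAndSerialiseEntireRow(String rowKey) {
            return new byte[0];
        }
    }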
>
>>> -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms10G -Xmx10G
>>> -Xmn1024M -XX:+HeapDumpOnOutOfMemoryError
> You probably want the heap to be 4G to 8G in size; 10G will encounter
> longer pauses.
> Also, the size of the new heap may be too big depending on the number
> of cores. I would recommend trying 800M.
I tried decreasing it first to 384M and then to 128M, with no change in
the behaviour. I don't really mind the extra memory overhead of the
cache - the on-heap objects needed to point at the cached data - but I
don't see why it should create and delete so many objects so quickly.
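
(For anyone following along: assuming the stock conf/cassandra-env.sh,
the two sizes mentioned above map to these overrides; the values are
just the suggestion from this thread, not something I have verified.)

    # conf/cassandra-env.sh
    MAX_HEAP_SIZE="8G"     # becomes -Xms/-Xmx
    HEAP_NEWSIZE="800M"    # becomes -Xmn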
>
>
>> prg01.visual.vm.png
> Shows the heap growing very quickly. This could be due to wide reads
> or a high write throughput.
Well, both prg01 and prg02 receive the same load, which is about
150-250 read requests per second (during peak) and 100-160 write
requests per second. The heap only grows rapidly, with GC kicking in,
on the nodes with the row cache enabled.

>
> Hope that helps.
Thank you!

Jiri Horky
