[jira] [Commented] (CASSANDRA-8066) High Heap Consumption due to high number of SSTableReader

2014-10-13 Thread Benoit Lacelle (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169722#comment-14169722
 ] 

Benoit Lacelle commented on CASSANDRA-8066:
---

I have a single node. The schema holds around 8 tables; a few of them are 
counter tables.

I regularly run a nearly-full scan on a non-counter table. Otherwise, the 
workload is mostly an even mix of reads and writes on 2 given tables. These 
tables have a few million rows.

Do you need more details?

 High Heap Consumption due to high number of SSTableReader
 ----------------------------------------------------------

 Key: CASSANDRA-8066
 URL: https://issues.apache.org/jira/browse/CASSANDRA-8066
 Project: Cassandra
  Issue Type: Bug
  Components: Core
 Environment: Cassandra 2.1.0
Reporter: Benoit Lacelle
Assignee: T Jake Luciani
 Fix For: 2.1.1


 Given a workload with quite a lot of reads, I recently encountered high heap 
 memory consumption. With 2GB of heap, it appears I have 750,000+ tasks in 
 SSTableReader.syncExecutor, consuming more than 1.2GB. These tasks have type 
 SSTableReader$5, which I guess corresponds to:
 {code}
 readMeterSyncFuture = syncExecutor.scheduleAtFixedRate(new Runnable()
 {
     public void run()
     {
         if (!isCompacted.get())
         {
             meterSyncThrottle.acquire();
             SystemKeyspace.persistSSTableReadMeter(desc.ksname, desc.cfname,
                                                    desc.generation, readMeter);
         }
     }
 }, 1, 5, TimeUnit.MINUTES);
 {code}
 I do not have access to the environment right now, but I could provide a 
 thread dump later if necessary.
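The accumulation described above can be reproduced outside Cassandra. The following is an illustrative standalone JDK sketch (not Cassandra code; names are made up): every reader-like object that schedules a periodic task on a shared `ScheduledThreadPoolExecutor` leaves one task object in the executor's delay queue, so the queue grows linearly with the number of open readers.

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch, not Cassandra code: each "reader" schedules its own
// periodic task on a shared executor, as SSTableReader does with syncExecutor.
// The executor's delay queue then holds one task object per reader, which is
// how 750,000+ readers can pin well over 1GB of heap.
public class SyncQueueGrowth {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor syncExecutor = new ScheduledThreadPoolExecutor(1);
        int readers = 1000; // stand-in for the number of open SSTableReader instances
        for (int i = 0; i < readers; i++) {
            syncExecutor.scheduleAtFixedRate(() -> { /* persist read meter */ },
                                             1, 5, TimeUnit.MINUTES);
        }
        // One queued task per reader, even though none has fired yet.
        System.out.println(syncExecutor.getQueue().size()); // prints 1000
        syncExecutor.shutdownNow();
    }
}
```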



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-8066) High Heap Consumption due to high number of SSTableReader

2014-10-13 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169740#comment-14169740
 ] 

T Jake Luciani commented on CASSANDRA-8066:
---

bq. Do you need more details?

Yes: the actual data size and the number of sstables in the largest keyspace.



[jira] [Commented] (CASSANDRA-8066) High Heap Consumption due to high number of SSTableReader

2014-10-13 Thread Benoit Lacelle (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169775#comment-14169775
 ] 

Benoit Lacelle commented on CASSANDRA-8066:
---

Here is the output of nodetool cfstats:

{code}
Keyspace: prod_7
Read Count: 1424130
Read Latency: 3.122707294980093 ms.
Write Count: 8808265
Write Latency: 0.03234213866181365 ms.
Pending Flushes: 0
Table: alerts
SSTable count: 2
Space used (live), bytes: 10237
Space used (total), bytes: 10237
Space used by snapshots (total), bytes: 0
SSTable Compression Ratio: 0.6354418105671432
Memtable cell count: 0
Memtable data size, bytes: 0
Memtable switch count: 0
Local read count: 0
Local read latency: NaN ms
Local write count: 0
Local write latency: NaN ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 32
Compacted partition minimum bytes: 259
Compacted partition maximum bytes: 372
Compacted partition mean bytes: 341
Average live cells per slice (last five minutes): 0.0
Average tombstones per slice (last five minutes): 0.0

Table: details
SSTable count: 103
Space used (live), bytes: 578266489
Space used (total), bytes: 578266489
Space used by snapshots (total), bytes: 0
SSTable Compression Ratio: 0.724988344517149
Memtable cell count: 67212
Memtable data size, bytes: 18468770
Memtable switch count: 23
Local read count: 6
Local read latency: 10.742 ms
Local write count: 2036971
Local write latency: 0.017 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 5136
Compacted partition minimum bytes: 87
Compacted partition maximum bytes: 129557750
Compacted partition mean bytes: 2595076
Average live cells per slice (last five minutes): 0.
Average tombstones per slice (last five minutes): 0.0

Table: domains
SSTable count: 21
Space used (live), bytes: 122407
Space used (total), bytes: 122407
Space used by snapshots (total), bytes: 0
SSTable Compression Ratio: 0.5848437821906775
Memtable cell count: 238238
Memtable data size, bytes: 2793
Memtable switch count: 1
Local read count: 60281
Local read latency: 0.162 ms
Local write count: 402903
Local write latency: 0.012 ms
Pending flushes: 0
Bloom filter false positives: 25
Bloom filter false ratio: 0.08929
Bloom filter space used, bytes: 664
Compacted partition minimum bytes: 87
Compacted partition maximum bytes: 372
Compacted partition mean bytes: 171
Average live cells per slice (last five minutes): 0.9985401459854014
Average tombstones per slice (last five minutes): 0.0

Table: domains_statistics
SSTable count: 7
Space used (live), bytes: 302413
Space used (total), bytes: 302413
Space used by snapshots (total), bytes: 0
SSTable Compression Ratio: 0.6144160676068052
Memtable cell count: 849893
Memtable data size, bytes: 42569
Memtable switch count: 1
Local read count: 60511
Local read latency: 0.141 ms
Local write count: 1055892
Local write latency: 0.013 ms
Pending flushes: 0
Bloom filter false positives: 0
Bloom filter false ratio: 0.0
Bloom filter space used, bytes: 416
Compacted partition minimum bytes: 87
Compacted partition maximum bytes: 24601
Compacted partition mean bytes: 3795
Average live cells per slice (last five minutes): 0.9894160583941606
Average tombstones per slice (last five minutes): 0.0

Table: latest_urls
SSTable count: 3
Space used (live), bytes: 10239518
{code}

[jira] [Commented] (CASSANDRA-8066) High Heap Consumption due to high number of SSTableReader

2014-10-13 Thread Jason Brown (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14169938#comment-14169938
 ] 

Jason Brown commented on CASSANDRA-8066:
---

+1

 Attachments: 8066.txt


[jira] [Commented] (CASSANDRA-8066) High Heap Consumption due to high number of SSTableReader

2014-10-09 Thread T Jake Luciani (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14165441#comment-14165441
 ] 

T Jake Luciani commented on CASSANDRA-8066:
---

I suspect the early opening and re-opening of sstables during compaction is 
creating many entries in the queue.

We should perhaps postpone this read meter tracking until the sstable has been 
fully written.
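If that hypothesis holds, one JDK detail is worth noting (illustrative sketch, not the proposed patch): a ScheduledThreadPoolExecutor keeps cancelled periodic tasks in its queue until their delay elapses unless setRemoveOnCancelPolicy(true) is set, so readers replaced during early open/re-open can leave their cancelled sync tasks sitting on the heap.

```java
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Illustrative sketch of JDK behaviour, not the Cassandra patch: by default,
// cancelling a scheduled periodic task does NOT remove it from the executor's
// queue; it lingers until its delay elapses. setRemoveOnCancelPolicy(true)
// makes subsequent cancellations remove the task immediately.
public class CancelledTaskRetention {
    public static void main(String[] args) {
        ScheduledThreadPoolExecutor ex = new ScheduledThreadPoolExecutor(1);

        ScheduledFuture<?> f = ex.scheduleAtFixedRate(() -> {}, 1, 5, TimeUnit.MINUTES);
        f.cancel(false);
        // Default policy: the cancelled task is still queued.
        System.out.println(ex.getQueue().size()); // prints 1

        ex.setRemoveOnCancelPolicy(true);
        ScheduledFuture<?> g = ex.scheduleAtFixedRate(() -> {}, 1, 5, TimeUnit.MINUTES);
        g.cancel(false); // removed from the queue at cancel time
        // Only the earlier cancelled task (queued under the old policy) remains.
        System.out.println(ex.getQueue().size()); // prints 1
        ex.shutdownNow();
    }
}
```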


