[jira] [Commented] (CASSANDRA-8140) Compaction has no effects
[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177242#comment-14177242 ]

Davide commented on CASSANDRA-8140:
-----------------------------------

Oh, thank you!

> Compaction has no effects
> -------------------------
>
> Key: CASSANDRA-8140
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8140
> Project: Cassandra
> Issue Type: Bug
> Components: Core
> Reporter: Davide
> Assignee: Marcus Eriksson
>
> Hi there,
> I'm on Cassandra 2.1, and since upgrading I have found that in some circumstances (I can't find a way to reproduce them consistently) minor compactions and full compactions have no effect.
> We run a cluster of 5 nodes with around 500 GB of data, no deletions, around 1.5k updates/s and the same on reads.
> After a repair I saw that a couple of nodes were slow. Investigating further, I found that on these two nodes the number of sstables was around 20,000+! We use STCS.
> So with nodetool I triggered a full compaction. It took less than a minute (with nothing in the logs) and, of course, the number of sstables didn't go down.
> Then I drained the node and ran `nodetool compact` again; at that point the number of sstables went down to less than 10.
> I thought it was a strange one-off problem. However, after a week I noticed that one node had ~100 sstables where the others had just 8-10.
> I ran the compaction again (it lasted less than a minute with nothing in the logs) and it didn't change anything. I drained the node, restarted it, then compacted, and it took several hours to get back down to 2-3 sstables.
> What could this be? We never saw this behavior before.
> Here is information about the table:
> {code}
> CREATE TABLE xyz (
>     ppk text PRIMARY KEY,
>     .. ten more columns...
> ) WITH bloom_filter_fp_chance = 0.01
>     AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
>     AND comment = ''
>     AND compaction = {'min_threshold': '4', 'cold_reads_to_omit': '0.0', 'class': 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 'max_threshold': '32'}
>     AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.SnappyCompressor'}
>     AND dclocal_read_repair_chance = 0.0
>     AND default_time_to_live = 0
>     AND gc_grace_seconds = 864000
>     AND max_index_interval = 2048
>     AND memtable_flush_period_in_ms = 0
>     AND min_index_interval = 128
>     AND read_repair_chance = 0.0
>     AND speculative_retry = '99.0PERCENTILE';
> {code}
> Here are the current cfstats:
> {code}
> SSTable count: 11
> Space used (live), bytes: 118007220865
> Space used (total), bytes: 118007220865
> Space used by snapshots (total), bytes: 170591332257
> SSTable Compression Ratio: 0.3643916626015517
> Memtable cell count: 920306
> Memtable data size, bytes: 70034097
> Memtable switch count: 25
> Local read count: 5358772
> Local read latency: 54.621 ms
> Local write count: 4715106
> Local write latency: 0.069 ms
> Pending flushes: 0
> Bloom filter false positives: 53757
> Bloom filter false ratio: 0.04103
> Bloom filter space used, bytes: 220634056
> Compacted partition minimum bytes: 18
> Compacted partition maximum bytes: 61214
> Compacted partition mean bytes: 1935
> Average live cells per slice (last five minutes): 0.8139232271871242
> Average tombstones per slice (last five minutes): 0.5493417148555677
> {code}
> Is there anything else that I can provide?
> Thanks!
> DD

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
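The strategy named in the table's compaction options, SizeTieredCompactionStrategy, groups sstables of similar size into buckets and only schedules a minor compaction for buckets holding at least `min_threshold` files — which is why thousands of oddly-sized sstables can sit uncompacted. A minimal Java sketch of that bucketing idea (class, method names, and the single-ratio grouping rule are illustrative simplifications, not Cassandra's actual implementation):

```java
import java.util.*;

// Illustrative sketch of size-tiered bucketing: sort sstable sizes,
// group sizes that stay within `ratio` times the running bucket average,
// and treat buckets with >= minThreshold members as compaction candidates.
public class SizeTieredSketch {
    static List<List<Long>> bucket(List<Long> sizes, double ratio) {
        List<Long> sorted = new ArrayList<>(sizes);
        Collections.sort(sorted);
        List<List<Long>> buckets = new ArrayList<>();
        for (long size : sorted) {
            List<Long> last = buckets.isEmpty() ? null : buckets.get(buckets.size() - 1);
            // Start a new bucket when the size jumps past ratio * bucket average.
            if (last == null || size > ratio * average(last)) {
                List<Long> fresh = new ArrayList<>();
                fresh.add(size);
                buckets.add(fresh);
            } else {
                last.add(size);
            }
        }
        return buckets;
    }

    static double average(List<Long> xs) {
        return xs.stream().mapToLong(Long::longValue).average().orElse(0);
    }

    static List<List<Long>> candidates(List<List<Long>> buckets, int minThreshold) {
        List<List<Long>> out = new ArrayList<>();
        for (List<Long> b : buckets)
            if (b.size() >= minThreshold) out.add(b);
        return out;
    }

    public static void main(String[] args) {
        // Five similarly sized small files and one large file: with
        // min_threshold = 4 only the small-file bucket qualifies.
        List<Long> sizes = Arrays.asList(10L, 11L, 12L, 10L, 13L, 500L);
        List<List<Long>> buckets = bucket(sizes, 1.5);
        System.out.println(candidates(buckets, 4).size()); // prints 1
    }
}
```

The point of the sketch is the failure mode in the report: if flushes keep producing files faster than buckets are drained, or the compaction executor stops picking up candidate buckets, the sstable count grows without bound even though the strategy's thresholds look healthy.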
[jira] [Commented] (CASSANDRA-8140) Compaction has no effects
[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177231#comment-14177231 ]

Davide commented on CASSANDRA-8140:
-----------------------------------

Check here: https://gist.github.com/DAddYE/158f6b98253331dc2845

Unfortunately we had to reduce the verbosity due to very large logs. I hope it helps.
[jira] [Commented] (CASSANDRA-8140) Compaction has no effects
[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177221#comment-14177221 ]

Marcus Eriksson commented on CASSANDRA-8140:
--------------------------------------------

Could you post a bit of the log leading up to that exception? Perhaps attach the log file?
[jira] [Commented] (CASSANDRA-8140) Compaction has no effects
[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177212#comment-14177212 ]

Davide commented on CASSANDRA-8140:
-----------------------------------

Hi Marcus, that's the only thing we have in the logs:

{code}
CassandraDaemon.java [line 166] Exception in thread Thread[CompactionExecutor:227,1,RMI Runtime]
java.util.concurrent.RejectedExecutionException: ThreadPoolExecutor has shut down
	at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor$1.rejectedExecution(DebuggableThreadPoolExecutor.java:61) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:821) ~[na:1.7.0_25]
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1372) ~[na:1.7.0_25]
	at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.execute(DebuggableThreadPoolExecutor.java:150) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:110) ~[na:1.7.0_25]
	at org.apache.cassandra.db.ColumnFamilyStore.switchMemtable(ColumnFamilyStore.java:827) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:902) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.ColumnFamilyStore.forceFlush(ColumnFamilyStore.java:863) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.SystemKeyspace.forceBlockingFlush(SystemKeyspace.java:473) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.SystemKeyspace.finishCompaction(SystemKeyspace.java:231) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:202) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235) ~[apache-cassandra-2.1.0.jar:2.1.0]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_25]
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) ~[na:1.7.0_25]
	at java.util.concurrent.FutureTask.run(FutureTask.java:166) ~[na:1.7.0_25]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_25]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_25]
	at java.lang.Thread.run(Thread.java:724) ~[na:1.7.0_25]
{code}
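The exception in the trace is standard `java.util.concurrent` behavior: once an `ExecutorService` has been shut down, any further `submit` is rejected with `RejectedExecutionException` (here, the post-compaction flush hit an executor that was apparently already shut down, e.g. after a drain). A minimal standalone reproduction of just the JDK behavior, not Cassandra code:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

// After shutdown(), a ThreadPoolExecutor's default AbortPolicy rejects
// every newly submitted task with RejectedExecutionException.
public class RejectedAfterShutdown {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // from this point on, the pool accepts no new tasks
        try {
            pool.submit(() -> System.out.println("never runs"));
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: pool has shut down"); // prints this line
        }
    }
}
```

This matches the trace: `ColumnFamilyStore.switchMemtable` submits a flush task, and the executor it submits to has already been shut down, so the compaction's final bookkeeping (`SystemKeyspace.finishCompaction`) never completes.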
[jira] [Commented] (CASSANDRA-8140) Compaction has no effects
[ https://issues.apache.org/jira/browse/CASSANDRA-8140?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176649#comment-14176649 ]

Marcus Eriksson commented on CASSANDRA-8140:
--------------------------------------------

Could you provide logs?