[ https://issues.apache.org/jira/browse/CASSANDRA-11063?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15115648#comment-15115648 ]
Navjyot Nishant commented on CASSANDRA-11063:
---------------------------------------------

Thanks Marcus. Yes, we did identify this data modelling issue and are working to fix it, but as you also mentioned, it shouldn't fail like this anyway. Is there a workaround? Our system.log is full of this error, and the errors keep coming even when no compaction is running. At this point we just want to stop the spam.

> Unable to compute ceiling for max when histogram overflowed
> -----------------------------------------------------------
>
>                 Key: CASSANDRA-11063
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-11063
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Compaction
>         Environment: Cassandra 2.1.9 on RHEL
>            Reporter: Navjyot Nishant
>              Labels: Compaction, thread
>
> Issue https://issues.apache.org/jira/browse/CASSANDRA-8028 seems related to the
> error we are getting, but we are seeing it with Cassandra 2.1.9. When
> autocompaction is running it keeps throwing the following errors. We are unsure
> if this is a bug or can be resolved; please suggest.
> {code}
> WARN  [CompactionExecutor:3] 2016-01-23 13:30:40,907 SSTableWriter.java:240 - Compacting large partition gccatlgsvcks/category_name_dedup:66611300 (138152195 bytes)
> ERROR [CompactionExecutor:1] 2016-01-23 13:30:50,267 CassandraDaemon.java:223 - Exception in thread Thread[CompactionExecutor:1,1,main]
> java.lang.IllegalStateException: Unable to compute ceiling for max when histogram overflowed
>         at org.apache.cassandra.utils.EstimatedHistogram.mean(EstimatedHistogram.java:203) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.io.sstable.metadata.StatsMetadata.getEstimatedDroppableTombstoneRatio(StatsMetadata.java:98) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.io.sstable.SSTableReader.getEstimatedDroppableTombstoneRatio(SSTableReader.java:1987) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.db.compaction.AbstractCompactionStrategy.worthDroppingTombstones(AbstractCompactionStrategy.java:370) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundSSTables(SizeTieredCompactionStrategy.java:96) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy.getNextBackgroundTask(SizeTieredCompactionStrategy.java:179) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.db.compaction.WrappingCompactionStrategy.getNextBackgroundTask(WrappingCompactionStrategy.java:84) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionCandidate.run(CompactionManager.java:230) ~[apache-cassandra-2.1.9.jar:2.1.9]
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) ~[na:1.7.0_51]
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262) ~[na:1.7.0_51]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_51]
>         at java.lang.Thread.run(Thread.java:744) [na:1.7.0_51]
> {code}
> h3. Additional info:
> *cfstats is running fine for that table...*
> {code}
> ~ $ nodetool cfstats gccatlgsvcks.category_name_dedup
> Keyspace: gccatlgsvcks
> 	Read Count: 0
> 	Read Latency: NaN ms.
> 	Write Count: 0
> 	Write Latency: NaN ms.
> 	Pending Flushes: 0
> 		Table: category_name_dedup
> 		SSTable count: 6
> 		Space used (live): 836314727
> 		Space used (total): 836314727
> 		Space used by snapshots (total): 3621519
> 		Off heap memory used (total): 6930368
> 		SSTable Compression Ratio: 0.03725358753117693
> 		Number of keys (estimate): 3004
> 		Memtable cell count: 0
> 		Memtable data size: 0
> 		Memtable off heap memory used: 0
> 		Memtable switch count: 0
> 		Local read count: 0
> 		Local read latency: NaN ms
> 		Local write count: 0
> 		Local write latency: NaN ms
> 		Pending flushes: 0
> 		Bloom filter false positives: 0
> 		Bloom filter false ratio: 0.00000
> 		Bloom filter space used: 5240
> 		Bloom filter off heap memory used: 5192
> 		Index summary off heap memory used: 1200
> 		Compression metadata off heap memory used: 6923976
> 		Compacted partition minimum bytes: 125
> 		Compacted partition maximum bytes: 30753941057
> 		Compacted partition mean bytes: 8352388
> 		Average live cells per slice (last five minutes): 0.0
> 		Maximum live cells per slice (last five minutes): 0.0
> 		Average tombstones per slice (last five minutes): 0.0
> 		Maximum tombstones per slice (last five minutes): 0.0
> {code}
> *cfhistograms is also running fine...*
> {code}
> ~ $ nodetool cfhistograms gccatlgsvcks category_name_dedup
> gccatlgsvcks/category_name_dedup histograms
> Percentile  SSTables     Write Latency      Read Latency    Partition Size        Cell Count
>                               (micros)          (micros)           (bytes)
> 50%             0.00              0.00              0.00              1109                20
> 75%             0.00              0.00              0.00              2299                42
> 95%             0.00              0.00              0.00             11864               215
> 98%             0.00              0.00              0.00             35425               642
> 99%             0.00              0.00              0.00             51012               924
> Min             0.00              0.00              0.00               125                 4
> Max             0.00              0.00              0.00       30753941057         268650950
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
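For context on the failure mode quoted above: Cassandra's EstimatedHistogram stores counts in a fixed set of buckets, and values beyond the largest bucket offset (here, the 30753941057-byte partition reported by cfhistograms) land in an overflow slot; `mean()` then refuses to estimate and throws the IllegalStateException seen in the stack trace. Below is a minimal, hypothetical simplified model of that behavior (the class and bucket offsets are illustrative, not Cassandra's actual code):

```java
import java.util.Arrays;

// Simplified sketch of a bounded-bucket estimated histogram. A value larger
// than the last offset is counted in an overflow slot, after which mean()
// throws, mirroring the error in the stack trace above.
class BoundedHistogram {
    private final long[] offsets; // upper bound ("ceiling") of each bucket
    private final long[] buckets; // counts; extra last slot holds overflowed values

    BoundedHistogram(long[] offsets) {
        this.offsets = offsets;
        this.buckets = new long[offsets.length + 1];
    }

    void add(long value) {
        int idx = Arrays.binarySearch(offsets, value);
        if (idx < 0) idx = -idx - 1;   // first offset >= value
        buckets[idx]++;                // idx == offsets.length means overflow
    }

    boolean isOverflowed() {
        return buckets[buckets.length - 1] > 0;
    }

    long mean() {
        if (isOverflowed())
            throw new IllegalStateException(
                "Unable to compute ceiling for max when histogram overflowed");
        long count = 0, sum = 0;
        for (int i = 0; i < offsets.length; i++) {
            count += buckets[i];
            sum += buckets[i] * offsets[i]; // use each bucket's ceiling
        }
        return count == 0 ? 0 : sum / count;
    }
}

public class HistogramOverflowDemo {
    public static void main(String[] args) {
        // Hypothetical offsets: the largest tracked size is far below 30 GB.
        BoundedHistogram h = new BoundedHistogram(
                new long[] { 1024L, 1048576L, 1073741824L });
        h.add(2048L);
        System.out.println("mean before overflow: " + h.mean());
        h.add(30753941057L); // the 30 GB partition overflows the histogram
        try {
            h.mean();
        } catch (IllegalStateException e) {
            System.out.println("after overflow: " + e.getMessage());
        }
    }
}
```

This is why the error keeps recurring with no compaction running: the overflowed statistics are persisted in the SSTable metadata, so every background compaction check that reads them hits the same exception until the oversized partition is gone.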