java.lang.NegativeArraySizeException during compacting large row
----------------------------------------------------------------

                 Key: CASSANDRA-3095
                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3095
             Project: Cassandra
          Issue Type: Bug
          Components: Core
    Affects Versions: 0.8.4
         Environment: Linux 2.6.26-2-amd64 #1 SMP Thu Feb 11 00:59:32 UTC 2010 x86_64 GNU/Linux
JDK 1.6.0_27 (Java 6 update 27), with JNA.
            Reporter: Pas


Hello,

It's a 4-node ring: three nodes are on 0.7.4, and I've upgraded one to 0.8.4. This particular node was having compaction issues, which is why I tried the upgrade (and the upgrade does appear to have resolved those original issues).

Here's the stack trace from system.log.

 INFO [CompactionExecutor:22] 2011-08-28 18:12:46,566 CompactionController.java (line 136) Compacting large row  (36028797018963968 bytes) incrementally
ERROR [CompactionExecutor:22] 2011-08-28 18:12:46,609 AbstractCassandraDaemon.java (line 134) Fatal exception in thread Thread[CompactionExecutor:22,1,main]
java.lang.NegativeArraySizeException
        at org.apache.cassandra.utils.obs.OpenBitSet.<init>(OpenBitSet.java:85)
        at org.apache.cassandra.utils.BloomFilter.bucketsFor(BloomFilter.java:56)
        at org.apache.cassandra.utils.BloomFilter.getFilter(BloomFilter.java:73)
        at org.apache.cassandra.db.ColumnIndexer.serializeInternal(ColumnIndexer.java:62)
        at org.apache.cassandra.db.ColumnIndexer.serialize(ColumnIndexer.java:50)
        at org.apache.cassandra.db.compaction.LazilyCompactedRow.<init>(LazilyCompactedRow.java:89)
        at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:138)
        at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:123)
        at org.apache.cassandra.db.compaction.CompactionIterator.getReduced(CompactionIterator.java:43)
        at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:74)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
        at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
        at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
        at org.apache.cassandra.db.compaction.CompactionManager.doCompactionWithoutSizeEstimation(CompactionManager.java:569)
        at org.apache.cassandra.db.compaction.CompactionManager.doCompaction(CompactionManager.java:506)
        at org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:141)
        at org.apache.cassandra.db.compaction.CompactionManager$1.call(CompactionManager.java:107)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)


We have ~70 SSTable files still in the "f" format and ~80 in "g". This node holds ~100 GB of data.
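
For what it's worth, the row size in the log (36028797018963968 bytes = 2^55, almost certainly a corrupted length rather than real data, given the node only holds ~100 GB) would make the per-row bloom filter enormous, and the sizing math narrows a long bit count to an int somewhere before OpenBitSet allocates its backing array. Here is a minimal Java sketch of that assumed failure mode; bits2words below is a hypothetical reconstruction modeled on Lucene-style OpenBitSet word sizing, not the exact Cassandra code:

```java
public class BitSetOverflowDemo {
    // Hypothetical reconstruction of OpenBitSet-style sizing: the long bit
    // count is narrowed to int, which wraps negative for huge inputs.
    static int bits2words(long numBits) {
        return ((int) ((numBits - 1) >>> 6)) + 1; // narrowing cast wraps
    }

    static long[] allocate(long numBits) {
        // With a corrupted, enormous row size the word count comes out
        // negative and this throws java.lang.NegativeArraySizeException.
        return new long[bits2words(numBits)];
    }

    public static void main(String[] args) {
        long numBits = 3L << 37;                 // large enough to wrap the int cast
        System.out.println(bits2words(numBits)); // prints -2147483648
        try {
            allocate(numBits);
        } catch (NegativeArraySizeException e) {
            System.out.println("caught " + e);   // same exception as in the log
        }
    }
}
```

If that is the mechanism, the root problem would be the bogus deserialized row length reaching the bloom-filter sizing unchecked, with the negative array size only a downstream symptom.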

Thanks.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
