[ https://issues.apache.org/jira/browse/CASSANDRA-3065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13088282#comment-13088282 ]
Benjamin Schrauwen commented on CASSANDRA-3065:
-----------------------------------------------

I already had to recover the nodes in the cluster, but I ran scrub on a local Cassandra instance using the corrupt file I saved. I get these exceptions when running scrub:

 WARN 18:29:51,454 Row at 29174 is unreadable; skipping to next
DEBUG 18:29:51,455 Reading row at 29174
DEBUG 18:29:51,455 row 01 is 288230376420147200 bytes
 WARN 18:29:51,455 Non-fatal error reading row (stacktrace follows)
java.io.IOError: java.io.IOException: Impossible row size 288230376420147200
	at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:717)
	at org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:631)
	at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:65)
	at org.apache.cassandra.db.compaction.CompactionManager$3.call(CompactionManager.java:251)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:680)
Caused by: java.io.IOException: Impossible row size 288230376420147200
	... 9 more

 WARN 18:29:51,580 Non-fatal error reading row (stacktrace follows)
java.io.IOError: java.io.EOFException: bloom filter claims to be -779103867 bytes, longer than entire row size -3408150975012023934
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:149)
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:90)
	at org.apache.cassandra.db.compaction.CompactionManager.scrubOne(CompactionManager.java:718)
	at org.apache.cassandra.db.compaction.CompactionManager.doScrub(CompactionManager.java:631)
	at org.apache.cassandra.db.compaction.CompactionManager.access$600(CompactionManager.java:65)
	at org.apache.cassandra.db.compaction.CompactionManager$3.call(CompactionManager.java:251)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
	at java.lang.Thread.run(Thread.java:680)
Caused by: java.io.EOFException: bloom filter claims to be -779103867 bytes, longer than entire row size -3408150975012023934
	at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:111)
	at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:119)
	... 10 more

> Major file corruption after running nodetool cleanup
> ----------------------------------------------------
>
>                 Key: CASSANDRA-3065
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3065
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 0.8.3
>            Reporter: Benjamin Schrauwen
>
> After running nodetool cleanup on two of the nodes in my 4 node cluster,
> almost all SSTables on those machines got corrupted.
> I am not able to read them anymore with sstable2json, and the cassandra
> daemon is repeatedly throwing:
>
> ERROR [ReadStage:11] 2011-08-20 04:44:46,846 AbstractCassandraDaemon.java (line 139) Fatal exception in thread Thread[ReadStage:11,5,main]
> java.lang.RuntimeException: java.lang.IndexOutOfBoundsException
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> 	at java.lang.Thread.run(Thread.java:619)
> Caused by: java.lang.IndexOutOfBoundsException
> 	at java.nio.Buffer.checkIndex(Buffer.java:514)
> 	at java.nio.DirectByteBuffer.get(DirectByteBuffer.java:209)
> 	at org.apache.cassandra.io.util.MappedFileDataInput.read(MappedFileDataInput.java:104)
> 	at java.io.InputStream.read(InputStream.java:154)
> 	at org.apache.cassandra.io.util.AbstractDataInput.readInt(AbstractDataInput.java:196)
> 	at org.apache.cassandra.io.sstable.IndexHelper.skipIndex(IndexHelper.java:61)
> 	at org.apache.cassandra.db.columniterator.SimpleSliceReader.<init>(SimpleSliceReader.java:58)
> 	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.createReader(SSTableSliceIterator.java:91)
> 	at org.apache.cassandra.db.columniterator.SSTableSliceIterator.<init>(SSTableSliceIterator.java:67)
> 	at org.apache.cassandra.db.filter.SliceQueryFilter.getSSTableColumnIterator(SliceQueryFilter.java:66)
> 	at org.apache.cassandra.db.filter.QueryFilter.getSSTableColumnIterator(QueryFilter.java:80)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getTopLevelColumns(ColumnFamilyStore.java:1314)
> 	at org.apache.cassandra.db.ColumnFamilyStore.cacheRow(ColumnFamilyStore.java:1181)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1221)
> 	at org.apache.cassandra.db.ColumnFamilyStore.getColumnFamily(ColumnFamilyStore.java:1168)
> 	at org.apache.cassandra.db.Table.getRow(Table.java:385)
> 	at org.apache.cassandra.db.SliceByNamesReadCommand.getRow(SliceByNamesReadCommand.java:58)
> 	at org.apache.cassandra.service.StorageProxy$LocalReadRunnable.runMayThrow(StorageProxy.java:641)
> 	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
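[Editor's note on the "Impossible row size 288230376420147200" figure in the scrub output: the row-size header is an 8-byte field decoded as a big-endian signed 64-bit integer (Java's DataInput.readLong convention), so garbage bytes at that offset decode to an arbitrary huge or negative number rather than failing outright. A minimal sketch reproducing the exact value from the log — the byte pattern below is one hypothetical pattern that decodes to it, not bytes recovered from the actual corrupt file:]

```python
import struct

# Hypothetical 8 bytes sitting where a row-size field should be in a
# corrupted SSTable.  Decoded big-endian signed ("&gt;q"), as Java's
# DataInput.readLong would, they yield the value reported by scrub.
corrupt_header = bytes.fromhex("0400000010000000")

(row_size,) = struct.unpack(">q", corrupt_header)
print(row_size)  # 288230376420147200 -- the "impossible row size" in the log
```

[The negative bloom-filter and row sizes in the second trace arise the same way: the corrupted field happens to have its sign bit set.]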