[ https://issues.apache.org/jira/browse/CASSANDRA-2296?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13004585#comment-13004585 ]

Hudson commented on CASSANDRA-2296:
-----------------------------------

Integrated in Cassandra-0.7 #365 (See [https://hudson.apache.org/hudson/job/Cassandra-0.7/365/])
    avoid writing empty rows when scrubbing tombstoned rows
patch by jbellis; reviewed by slebresne for CASSANDRA-2296
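
For context, the failure mode and the fix can be sketched roughly as follows (a simplified, hypothetical illustration, not the actual Cassandra code; the class and method names below are invented): when scrub rewrote a fully tombstoned row as an empty row, the row's on-disk size could end up smaller than the serialized bloom-filter length recorded for it, so the sanity check in the deserialization path (the one surfacing in IndexHelper.defreezeBloomFilter) throws. The patch avoids writing such empty rows in the first place.

```java
import java.io.EOFException;
import java.util.List;

public class ScrubSketch {
    // Hypothetical sanity check in the spirit of the one that produces the
    // "bloom filter claims to be longer than entire row size" error: a
    // serialized bloom filter cannot exceed the remaining bytes of the row.
    static void checkBloomFilter(long bloomFilterSize, long remainingRowBytes)
            throws EOFException {
        if (bloomFilterSize > remainingRowBytes)
            throw new EOFException(
                "bloom filter claims to be longer than entire row size");
    }

    // Hypothetical scrub step mirroring the fix: only emit rows that still
    // contain live data, so a fully tombstoned row is skipped, not written
    // out as an empty (and later unreadable) row.
    static boolean shouldWriteRow(List<String> liveColumns) {
        return !liveColumns.isEmpty();
    }

    public static void main(String[] args) throws EOFException {
        System.out.println(shouldWriteRow(List.of()));       // false: skip empty row
        System.out.println(shouldWriteRow(List.of("col1"))); // true: row has live data
        checkBloomFilter(16, 128); // passes: filter fits within the row
        try {
            checkBloomFilter(128, 16); // would fail on a truncated/empty row
        } catch (EOFException e) {
            System.out.println(e.getMessage());
        }
    }
}
```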


> Scrub resulting in "bloom filter claims to be longer than entire row size" error
> --------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-2296
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-2296
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>    Affects Versions: 0.7.3
>            Reporter: Jason Harvey
>            Assignee: Jonathan Ellis
>             Fix For: 0.7.4
>
>         Attachments: 2296.txt, sstable_part1.tar.bz2, sstable_part2.tar.bz2
>
>
> Doing a scrub on a node which I upgraded from 0.7.1 (was previously 0.6.8) to 0.7.3. Getting this error multiple times:
> {code}
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,513 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,514 CompactionManager.java (line 599) Non-fatal error reading row (stacktrace follows)
> java.io.IOError: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:117)
>         at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:590)
>         at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
>         at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.EOFException: bloom filter claims to be longer than entire row size
>         at org.apache.cassandra.io.sstable.IndexHelper.defreezeBloomFilter(IndexHelper.java:113)
>         at org.apache.cassandra.io.sstable.SSTableIdentityIterator.<init>(SSTableIdentityIterator.java:87)
>         ... 8 more
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:52,515 CompactionManager.java (line 625) Row is unreadable; skipping to next
>  INFO [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 637) Scrub of SSTableReader(path='/cassandra/data/reddit/Hide-f-671-Data.db') complete: 254709 rows in new sstable
>  WARN [CompactionExecutor:1] 2011-03-08 18:33:53,777 CompactionManager.java (line 639) Unable to recover 1630 that were skipped.  You can attempt manual recovery from the pre-scrub snapshot.  You can also run nodetool repair to transfer the data from a healthy replica, if any
> {code}

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
