[ https://issues.apache.org/jira/browse/CASSANDRA-1717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13081072#comment-13081072 ]

Sylvain Lebresne commented on CASSANDRA-1717:
---------------------------------------------

Comments:
* CSW.flushData() forgets to reset the checksum between chunks (this is caught 
by the unit tests, btw); see the first sketch after this list.
* We should convert the CRC32 value to an int (and only write that), since it 
is an int internally: getValue() returns a long only because CRC32 implements 
the Checksum interface, which requires it. The sketch below writes it that way.
* Here we checksum the compressed data. The other approach would be to checksum 
the uncompressed data. The advantage of checksumming compressed data is speed 
(less data to checksum), but checksumming the uncompressed data would be a 
little safer: it would also catch mistakes in the decompression itself, and we 
wouldn't have to trust the compression algorithm (not that I don't trust 
Snappy, but...). This is clearly a trade-off we have to make, and I admit my 
personal preference leans towards safety; I know checksumming the uncompressed 
data gives a bit more safety, but I don't know quantitatively how much speed we 
actually gain by checksumming the compressed data. On the other hand, 
checksumming the uncompressed data would likely mean that a good part of the 
bitrot would surface as a decompression error rather than a checksum error, 
which is maybe less convenient from the implementation point of view. So I 
don't know; I guess I'm thinking aloud to get others' opinions more than 
anything else.
* Let's add some unit tests. At a minimum, it's relatively easy to write a few 
blocks, flip one bit in the resulting file, and check that this is caught at 
read time (or better, do that multiple times, changing a different bit each 
time); a rough outline of such a test follows this list.
* As Todd noted, HADOOP-6148 contains a bunch of discussion on the efficiency 
of java's CRC32. In particular, it seems they were able to roughly double the 
speed of CRC32, with a solution that looks fairly simple to me. It would be OK 
to use the plain java CRC32 and leave the improvement to another ticket, but 
quite frankly, if it is that simple and since the hadoop guys have done all the 
hard work for us, I say we start with the efficient version directly.
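
To make the first two points concrete, here is a minimal sketch of the shape I 
mean for flushData(): checksum the compressed chunk, write the CRC as an int, 
and reset the Checksum before the next chunk. The names here are illustrative 
only, not the real CSW code.

    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.util.zip.CRC32;

    // Illustrative sketch only; names do not match the real writer.
    class ChunkWriterSketch
    {
        private final DataOutputStream out;
        private final CRC32 checksum = new CRC32();

        ChunkWriterSketch(DataOutputStream out)
        {
            this.out = out;
        }

        void flushData(byte[] compressed, int length) throws IOException
        {
            out.write(compressed, 0, length);

            checksum.update(compressed, 0, length);
            // CRC32 is 32 bits; getValue() returns a long only because the
            // Checksum interface requires it, so an int is enough on disk.
            out.writeInt((int) checksum.getValue());

            // The bug above: without this reset, the next chunk's CRC would
            // accumulate all previous chunks.
            checksum.reset();
        }
    }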
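
And a rough outline of the bit-flipping test I'm suggesting; writeBlocks / 
readBlocks and the exact exception are placeholders for whatever the real 
writer/reader and corruption error end up being.

    import java.io.IOException;
    import java.io.RandomAccessFile;

    // Helper for the corruption test: flip a single bit at a given offset.
    public class BitFlipHelper
    {
        public static void flipBit(String path, long byteOffset, int bit) throws IOException
        {
            RandomAccessFile raf = new RandomAccessFile(path, "rw");
            try
            {
                raf.seek(byteOffset);
                int b = raf.read();
                raf.seek(byteOffset);
                raf.write(b ^ (1 << bit));
            }
            finally
            {
                raf.close();
            }
        }

        // Outline of the test body (writeBlocks/readBlocks are placeholders):
        //   writeBlocks(path, testData);
        //   for each offset inside the compressed data region:
        //       for (int bit = 0; bit < 8; bit++)
        //       {
        //           copy path to corruptedPath;
        //           flipBit(corruptedPath, offset, bit);
        //           assert that readBlocks(corruptedPath) fails with a checksum error;
        //       }
    }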


> Cassandra cannot detect corrupt-but-readable column data
> --------------------------------------------------------
>
>                 Key: CASSANDRA-1717
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-1717
>             Project: Cassandra
>          Issue Type: New Feature
>          Components: Core
>            Reporter: Jonathan Ellis
>            Assignee: Pavel Yaskevich
>             Fix For: 1.0
>
>         Attachments: CASSANDRA-1717.patch, checksums.txt
>
>
> Most corruptions of on-disk data due to bitrot render the column (or row) 
> unreadable, so the data can be replaced by read repair or anti-entropy.  But 
> if the corruption keeps column data readable we do not detect it, and if it 
> corrupts to a higher timestamp value, it can even resist being overwritten by 
> newer values.

