[ 
https://issues.apache.org/jira/browse/CASSANDRA-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sylvain Lebresne updated CASSANDRA-3370:
----------------------------------------

    Attachment: 3370.patch

Your test did help. It turns out you're inserting random, and thus 
essentially incompressible, data, so the compressed output was larger than 
the uncompressed input. The code is supposed to handle that case, but there 
is a bug in that path.

Patch attached to fix.
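To illustrate the scenario described above (not the actual Cassandra code): deflate adds framing overhead, so compressing incompressible random data can produce output larger than the input, and a chunk writer needs an explicit fallback for that case. This is a minimal sketch with a hypothetical helper, using only java.util.zip.Deflater:

```java
import java.util.Random;
import java.util.zip.Deflater;

public class DeflateFallbackSketch {

    // Hypothetical helper (for illustration only): try to deflate a chunk,
    // but store it uncompressed when compression does not actually shrink it.
    static byte[] compressOrStore(byte[] chunk) {
        Deflater deflater = new Deflater();
        deflater.setInput(chunk);
        deflater.finish();
        // A buffer the size of the input: if the compressed form does not
        // fit, compression is pointless for this chunk anyway.
        byte[] out = new byte[chunk.length];
        int written = deflater.deflate(out);
        boolean fitted = deflater.finished();
        deflater.end();
        if (!fitted || written >= chunk.length) {
            return chunk;  // fall back to the raw, uncompressed chunk
        }
        byte[] trimmed = new byte[written];
        System.arraycopy(out, 0, trimmed, 0, written);
        return trimmed;
    }

    public static void main(String[] args) {
        // Random bytes are essentially incompressible, like the test data
        // in this ticket, so the fallback path is taken.
        byte[] random = new byte[65536];
        new Random(42).nextBytes(random);
        System.out.println(compressOrStore(random).length == random.length);

        // Highly repetitive data compresses, so the deflated form is kept.
        byte[] zeros = new byte[65536];
        System.out.println(compressOrStore(zeros).length < zeros.length);
    }
}
```

The key point is the `written >= chunk.length` check: without a correct fallback here, a chunk whose "compressed" form is larger than the original gets written or read with the wrong length, which matches the corrupt-chunk symptom in the report below.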
                
> Deflate Compression corrupts SSTables
> -------------------------------------
>
>                 Key: CASSANDRA-3370
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3370
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.0
>         Environment: Ubuntu Linux, amd64, Cassandra 1.0.0-rc2
>            Reporter: Christian Spriegel
>            Assignee: Sylvain Lebresne
>         Attachments: 3370.patch, Test.zip, system.log
>
>
> Hi,
> it seems that the Deflate compressor corrupts SSTables. 3 out of 3 
> installations were corrupted. Snappy works fine.
> Here is what I did:
> 1. Start a single Cassandra node (I was using ByteOrderedPartitioner)
> 2. Write data into a cf that uses deflate compression - I think it has to be 
> enough data that the data folder contains some files.
> 3. When I then try to read (I did a range scan) from my application, it fails 
> and the logs show corruption:
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: 
> (/home/cspriegel/Development/cassandra1/data/Test/Response-h-2-Data.db): 
> corruption detected, chunk at 0 of length 65536.
> regards,
> Christian

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
