[ https://issues.apache.org/jira/browse/CASSANDRA-3370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13130001#comment-13130001 ]
Christian Spriegel commented on CASSANDRA-3370:
-----------------------------------------------

I tested again with 1.0.0. Unfortunately, the problem still exists, but I think I was able to narrow it down: it seems the problem only occurs when I insert large byte arrays. With 10 KB arrays it works fine; I was able to insert and read repeatedly without any problem. With 100 KB or 200 KB arrays it crashes after about 1000-2000 insertions. (The insertions themselves succeed, but a range scan afterwards crashes.)

> Deflate Compression corrupts SSTables
> -------------------------------------
>
>                 Key: CASSANDRA-3370
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-3370
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.0.0
>         Environment: Ubuntu Linux, amd64, Cassandra 1.0.0-rc2
>            Reporter: Christian Spriegel
>            Assignee: Sylvain Lebresne
>         Attachments: system.log
>
>
> Hi,
> it seems that the Deflate compressor corrupts SSTables. Three out of three installations were corrupt; Snappy works fine.
> Here is what I did:
> 1. Start a single Cassandra node (I was using ByteOrderedPartitioner).
> 2. Write data into a column family that uses Deflate compression. I think there has to be enough data that the data folder contains some files.
> 3. When I now try to read from my application (I did a range scan), it fails and the logs show corruption:
>
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/home/cspriegel/Development/cassandra1/data/Test/Response-h-2-Data.db): corruption detected, chunk at 0 of length 65536.
>
> Regards,
> Christian
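The error above reports a chunk of length 65536, i.e. Cassandra's default 64 KiB compression chunk size, which lines up with the size threshold observed in the comment. A quick back-of-the-envelope sketch (hypothetical illustration, not Cassandra code; the helper name is made up) shows why only large values would exercise the failing path: a 10 KB payload fits inside a single compressed chunk, while 100 KB and 200 KB payloads necessarily span several chunks.

```python
# Hypothetical sketch: how many 64 KiB compression chunks a contiguous
# payload must occupy at minimum. Not Cassandra code.
CHUNK_LENGTH = 65536  # default chunk size, matching the error message

def chunks_spanned(payload_bytes: int, chunk_length: int = CHUNK_LENGTH) -> int:
    """Minimum number of compression chunks needed to hold the payload."""
    # Ceiling division: a payload of exactly one chunk length needs one chunk.
    return -(-payload_bytes // chunk_length)

for size_kb in (10, 100, 200):
    print(f"{size_kb:>3} KB payload -> at least {chunks_spanned(size_kb * 1024)} chunk(s)")
```

Under this assumption, the 10 KB writes that worked never leave a single chunk, while the 100 KB and 200 KB writes that triggered the corruption force reads across the multi-chunk Deflate decompression path (and may touch one chunk more if the value straddles a chunk boundary).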