[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-13 Thread Cathy Daw (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433944#comment-13433944 ]

Cathy Daw commented on CASSANDRA-4538:
--

I tried lots of permutations and could not reproduce.
Can you verify whether this is consistently reproducible for you?
Here are my repro tests:

{code}
// Test Setup
* Modify: InsertThread.java to change host IP address
* Run: mvn install
* Start: cassandra 1.1.4

// Test Run
* Test Setup:  create / modify KS and CF below
* Run test: mvn exec:java -Dexec.mainClass="com.test.CreateTestData"

// *** cassandra-cli ***
create keyspace ST with
  placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options = {replication_factor:1};
  
use ST;

// Test #1: SizeTieredCompactionStrategy
create column family company;

// Test #2: SizeTieredCompactionStrategy and 1mb sstables
drop column family company;
create column family company with
compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};

// Test #3: SizeTieredCompactionStrategy and 100mb sstables
drop column family company;
create column family company with
compaction_strategy=SizeTieredCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};


// Test #4: LeveledCompactionStrategy and 10mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 10};

// Test #5: LeveledCompactionStrategy and 1mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 1};

// Test #6: LeveledCompactionStrategy and 100mb sstables
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 100};

// ADDITIONAL TESTS VIA JAVA STRESS
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=10 \
  --num-different-keys=2 --columns=2 --threads=2 \
  --compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy \
  --column-size=2
./cassandra-stress --operation=READ --num-keys=10 \
  --num-different-keys=2 --columns=2 --threads=2 \
  --compression=SnappyCompressor --compaction-strategy=LeveledCompactionStrategy \
  --column-size=2


// Destructive test: check nodetool -h localhost compactionstats and run the
// following while there are pending compactions
./cassandra-stress --operation=INSERT --num-keys=1000 --num-different-keys=100 \
  --columns=2 --threads=2 --compression=SnappyCompressor \
  --compaction-strategy=LeveledCompactionStrategy --column-size=2

// Tried with SizeTieredCompactionStrategy
[default@ST] drop keyspace Keyspace1;
./cassandra-stress --operation=INSERT --num-keys=6 \
  --num-different-keys=2 --columns=2 --compression=SnappyCompressor \
  --compaction-strategy=SizeTieredCompactionStrategy --column-size=2
./cassandra-stress --operation=READ --num-keys=6 --num-different-keys=2 \
  --columns=2 --compression=SnappyCompressor \
  --compaction-strategy=SizeTieredCompactionStrategy --column-size=2

// Destructive test: check nodetool -h localhost compactionstats and kill the
// c* server while compactions are in progress and then restart

{code}
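For reference, here is a minimal sketch of the kind of binary-insert loop the attached test presumably runs (the actual classes, CreateTestData and InsertThread, are in cassandra-stresstest.zip; the class name, column name, blob size, and key format below are invented for illustration, while the Thrift calls are the standard 1.1 client API):

{code}
import java.nio.ByteBuffer;
import java.util.Random;
import org.apache.cassandra.thrift.Cassandra;
import org.apache.cassandra.thrift.Column;
import org.apache.cassandra.thrift.ColumnParent;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

// Hypothetical stand-in for the attached CreateTestData: inserts random
// binary blobs into ST.company over Thrift so compaction has work to do.
public class BinaryInsertSketch {
    public static void main(String[] args) throws Exception {
        TFramedTransport transport =
                new TFramedTransport(new TSocket("127.0.0.1", 9160));
        Cassandra.Client client =
                new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        client.set_keyspace("ST");

        Random random = new Random();
        ColumnParent parent = new ColumnParent("company");
        for (int i = 0; i < 10000; i++) {
            byte[] blob = new byte[20 * 1024];   // ~20 KB of binary data per column
            random.nextBytes(blob);
            Column col = new Column(ByteBuffer.wrap("data".getBytes("UTF-8")));
            col.setValue(ByteBuffer.wrap(blob));
            col.setTimestamp(System.currentTimeMillis() * 1000);
            client.insert(ByteBuffer.wrap(("key-" + i).getBytes("UTF-8")),
                          parent, col, ConsistencyLevel.ONE);
        }
        transport.close();
    }
}
{code}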

> Strange CorruptedBlockException when massive insert binary data
> ---
>
> Key: CASSANDRA-4538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4538
> Project: Cassandra
> Issue Type: Bug
> Affects Versions: 1.1.3
> Environment: Debian squeeze 32bit
> Reporter: Tommy Cheng
> Priority: Critical
> Labels: CorruptedBlockException, binary, insert
> Attachments: cassandra-stresstest.zip
>
>
> After inserting ~ 1 records, here is the error log
>  INFO 10:53:33,543 Compacted to [/var/lib/cassandra/data/ST/company/ST-company.company_acct_no_idx-he-13-Data.db,].  407,681 to 409,133 (~100% of original) bytes for 9,250 keys at 0.715926MB/s.  Time: 545ms.
> ERROR 10:53:35,445 Exception in thread Thread[CompactionExecutor:3,1,main]
> java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption detected, chunk at 7530128 of length 19575.
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
> at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
> at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
> at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
> at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
> at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
> at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
> at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption detected, chunk at 7530128 of length 19575.
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
> at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
> at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
> at org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
> at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
> at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
> at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
> at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
> at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
> at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)

[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Tommy Cheng (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13433951#comment-13433951 ]

Tommy Cheng commented on CASSANDRA-4538:


Yes, it is consistently reproducible.
The funny thing is that another machine is okay.
I tried reformatting the OS and health-checking the RAM/hard disk, and all tests pass.

What extra information should I provide?
It is very important to find out the problem before we really use Cassandra in production.


[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Cathy Daw (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434333#comment-13434333 ]

Cathy Daw commented on CASSANDRA-4538:
--

I tried to reproduce on a 32-bit Debian squeeze medium instance on EC2 and could not get the error. I wonder if you are dealing with a permanently corrupted SSTable as the result of an intermittent bug. Can you drop this column family and keyspace, recreate them, and then re-run the test? Can you also paste the DDL used to create the column family?
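For example, the drop-and-recreate sequence in cassandra-cli could look like this (a sketch; the compaction settings here are assumptions and should match whatever your original DDL used):

{code}
use ST;

// Recreate just the column family
drop column family company;
create column family company with
compaction_strategy=LeveledCompactionStrategy
and compaction_strategy_options={sstable_size_in_mb: 10};

// Or recreate the whole keyspace
drop keyspace ST;
create keyspace ST with
  placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy'
  and strategy_options = {replication_factor:1};
use ST;
{code}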

> Strange CorruptedBlockException when massive insert binary data
> ---
>
> Key: CASSANDRA-4538
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4538
> Project: Cassandra
>  Issue Type: Bug
>Affects Versions: 1.1.3
> Environment: Debian sequeeze 32bit
>Reporter: Tommy Cheng
>Priority: Critical
>  Labels: CorruptedBlockException, binary, insert
> Attachments: cassandra-stresstest.zip
>
>
> After inserting ~ 1 records, here is the error log
>  INFO 10:53:33,543 Compacted to 
> [/var/lib/cassandra/data/ST/company/ST-company.company_acct_no_idx-he-13-Data.db,].
>   407,681 to 409,133 (~100% of original) bytes for 9,250 keys at 
> 0.715926MB/s.  Time: 545ms.
> ERROR 10:53:35,445 Exception in thread Thread[CompactionExecutor:3,1,main]
> java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: 
> (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption 
> detected, chunk at 7530128 of length 19575.
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
> at 
> org.apache.cassandra.db.compaction.PrecompactedRow.(PrecompactedRow.java:99)
> at 
> org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
> at 
> org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
> at 
> org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at 
> com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
> at 
> com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
> at 
> com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
> at 
> org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
> at 
> org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
> at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: 
> (/var/lib/cassandra/data/ST/company/ST-company-he-9-Data.db): corruption 
> detected, chunk at 7530128 of length 19575.
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
> at 
> org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
> at 
> org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:397)
> at java.io.RandomAccessFile.readFully(RandomAccessFile.java:377)
> at 
> org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
> at 
> org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
> at 
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
> at 
> org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
> at 
> org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns

[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Tommy Cheng (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434770#comment-13434770 ]

Tommy Cheng commented on CASSANDRA-4538:


Yes, I already tried recreating the keyspace and column family. I also tried deleting /var/lib/cassandra and reinstalling Debian (to make sure it has the same setup as the other PC). The problem is still there, so I think my particular hardware may be causing it.

The DDL is included in cassandra-stresstest\schema\schema-stresstest.txt


[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-14 Thread Tommy Cheng (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13434772#comment-13434772 ]

Tommy Cheng commented on CASSANDRA-4538:


You may try to run the test in Eclipse; it is an Eclipse project.


[jira] [Commented] (CASSANDRA-4538) Strange CorruptedBlockException when massive insert binary data

2012-08-30 Thread Christian Schnidrig (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-4538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13444765#comment-13444765 ]

Christian Schnidrig commented on CASSANDRA-4538:


I'm afraid I ran into the same bug with version 1.1.4:

INFO [CompactionExecutor:1137] 2012-08-29 16:24:14,005 CompactionTask.java (line 109) Compacting
[SSTableReader(path='/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6698-Data.db'),
 SSTableReader(path='/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6697-Data.db'),
 SSTableReader(path='/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6696-Data.db'),
 SSTableReader(path='/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6889-Data.db'),
 SSTableReader(path='/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-7053-Data.db')]

ERROR [CompactionExecutor:1137] 2012-08-29 16:24:14,712 AbstractCassandraDaemon.java (line 134) Exception in thread Thread[CompactionExecutor:1137,1,main]
java.io.IOError: org.apache.cassandra.io.compress.CorruptedBlockException: (/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6889-Data.db): corruption detected, chunk at 262155 of length 65545.
    at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:116)
    at org.apache.cassandra.db.compaction.PrecompactedRow.<init>(PrecompactedRow.java:99)
    at org.apache.cassandra.db.compaction.CompactionController.getCompactedRow(CompactionController.java:176)
    at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:83)
    at org.apache.cassandra.db.compaction.CompactionIterable$Reducer.getReduced(CompactionIterable.java:68)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:118)
    at org.apache.cassandra.utils.MergeIterator$ManyToOne.computeNext(MergeIterator.java:101)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
    at com.google.common.collect.Iterators$7.computeNext(Iterators.java:614)
    at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:140)
    at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:135)
    at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:173)
    at org.apache.cassandra.db.compaction.LeveledCompactionTask.execute(LeveledCompactionTask.java:50)
    at org.apache.cassandra.db.compaction.CompactionManager$1.runMayThrow(CompactionManager.java:154)
    at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: org.apache.cassandra.io.compress.CorruptedBlockException: (/mnt/md0/cassandra/data/content/oneChunkFileData/content-oneChunkFileData-he-6889-Data.db): corruption detected, chunk at 262155 of length 65545.
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.decompressChunk(CompressedRandomAccessReader.java:98)
    at org.apache.cassandra.io.compress.CompressedRandomAccessReader.reBuffer(CompressedRandomAccessReader.java:77)
    at org.apache.cassandra.io.util.RandomAccessReader.read(RandomAccessReader.java:302)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:414)
    at java.io.RandomAccessFile.readFully(RandomAccessFile.java:394)
    at org.apache.cassandra.utils.BytesReadTracker.readFully(BytesReadTracker.java:95)
    at org.apache.cassandra.utils.ByteBufferUtil.read(ByteBufferUtil.java:401)
    at org.apache.cassandra.utils.ByteBufferUtil.readWithLength(ByteBufferUtil.java:363)
    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:119)
    at org.apache.cassandra.db.ColumnSerializer.deserialize(ColumnSerializer.java:36)
    at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:144)
    at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:234)
    at org.apache.cassandra.db.compaction.PrecompactedRow.merge(PrecompactedRow.java:112)
    ... 21 more

-
This happened on a CF with binary data. (
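For context on what this exception means: both traces show CompressedRandomAccessReader.decompressChunk() rejecting a chunk whose stored checksum disagrees with the one recomputed at read time. A minimal sketch of that check-then-throw pattern, assuming a CRC32-style checksum over the chunk bytes (ChunkInfo, validateAndDecompress, and decompress are hypothetical names for illustration, not Cassandra's actual code):

{code}
import java.io.IOException;
import java.util.zip.CRC32;

// Illustration only: per-chunk checksum validation when reading a compressed
// data file. Mirrors the failure mode in the stack traces above, where the
// recomputed checksum of a chunk disagrees with the one stored on disk.
final class ChunkValidator {
    static final class ChunkInfo {           // hypothetical metadata holder
        final long offset;                   // chunk position in the data file
        final int length;                    // chunk length in bytes
        final long storedChecksum;           // checksum written next to the chunk
        ChunkInfo(long offset, int length, long storedChecksum) {
            this.offset = offset;
            this.length = length;
            this.storedChecksum = storedChecksum;
        }
    }

    static byte[] validateAndDecompress(String path, ChunkInfo chunk, byte[] bytes)
            throws IOException {
        CRC32 crc = new CRC32();             // assuming a CRC32-style checksum
        crc.update(bytes, 0, chunk.length);
        if (crc.getValue() != chunk.storedChecksum) {
            // The condition behind "corruption detected, chunk at <offset>
            // of length <length>" in the logs above.
            throw new IOException(String.format(
                    "(%s): corruption detected, chunk at %d of length %d.",
                    path, chunk.offset, chunk.length));
        }
        return decompress(bytes);
    }

    private static byte[] decompress(byte[] bytes) {
        return bytes;                        // placeholder for e.g. Snappy
    }
}
{code}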