[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Benedict (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benedict updated CASSANDRA-8719:

Attachment: 8719.txt

I suspect this is the bug.

> Using thrift HSHA with offheap_objects appears to corrupt data
> --
>
> Key: CASSANDRA-8719
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8719
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Randy Fradin
>Assignee: Benedict
> Fix For: 2.1.3
>
> Attachments: 8719.txt, repro8719.sh
>
>
> Copying my comment from CASSANDRA-6285 to a new issue since that issue is 
> long closed and I'm not sure if they are related...
> I am getting this exception using Thrift HSHA in 2.1.0:
> {quote}
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,818 CompactionTask.java 
> (line 138) Compacting 
> [SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-2-Data.db'),
>  
> SSTableReader(path='/tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-ka-1-Data.db')]
>  INFO [CompactionExecutor:8] 2015-01-26 13:32:51,890 ColumnFamilyStore.java 
> (line 856) Enqueuing flush of compactions_in_progress: 212 (0%) on-heap, 20 
> (0%) off-heap
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,892 Memtable.java (line 
> 326) Writing Memtable-compactions_in_progress@1155018639(0 serialized bytes, 
> 1 ops, 0%/0% of on/off-heap limit)
>  INFO [MemtableFlushWriter:8] 2015-01-26 13:32:51,896 Memtable.java (line 
> 360) Completed flushing 
> /tmp/cass_test/cassandra/TestCassandra/data/system/compactions_in_progress-55080ab05d9c388690a4acb25fe1f77b/system-compactions_in_progress-ka-2-Data.db
>  (42 bytes) for commitlog position ReplayPosition(segmentId=1422296630707, 
> position=430226)
> ERROR [CompactionExecutor:8] 2015-01-26 13:32:51,906 CassandraDaemon.java 
> (line 166) Exception in thread Thread[CompactionExecutor:8,1,RMI Runtime]
> java.lang.RuntimeException: Last written key 
> DecoratedKey(131206587314004820534098544948237170809, 
> 80010001000c62617463685f6d757461746500) >= current key 
> DecoratedKey(14775611966645399672119169777260659240, 
> 726f776b65793030385f31343232323937313537353835) writing into 
> /tmp/cass_test/cassandra/TestCassandra/data/test_ks/test_cf-1c45da40a58911e4826751fbbc77b187/test_ks-test_cf-tmp-ka-3-Data.db
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:196) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.sstable.SSTableRewriter.append(SSTableRewriter.java:110)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:177)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) 
> ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:74)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:235)
>  ~[apache-cassandra-2.1.0.jar:2.1.0]
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) 
> ~[na:1.7.0_40]
> at java.util.concurrent.FutureTask.run(FutureTask.java:262) 
> ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>  ~[na:1.7.0_40]
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>  [na:1.7.0_40]
> at java.lang.Thread.run(Thread.java:724) [na:1.7.0_40]
> {quote}
> I don't think it's caused by CASSANDRA-8211, because it happens during the 
> first compaction that takes place between the first 2 SSTables to get flushed 
> from an initially empty column family.
> Also, I've only been able to reproduce it when using both *hsha* for the rpc 
> server and *offheap_objects* for memtable allocation. If I switch either to 
> sync or to offheap_buffers or heap_buffers then I cannot reproduce the 
> problem. Also under the same circumstances I'm pretty sure I've seen 
> incorrect data being returned to a client multiget_slice request before any 
> SSTables had been flushed yet, so I presume this is corruption that happens 
> before any flush/compaction.
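The check that throws in the stack trace (SSTableWriter.beforeAppend, line 172) enforces a basic SSTable invariant: partition keys must be appended in strictly increasing token order. A minimal sketch of that invariant, with simplified names and BigInteger tokens standing in for DecoratedKey (this is an illustration, not the actual Cassandra implementation):

```java
import java.math.BigInteger;

// Hypothetical sketch of the ordering invariant enforced before each append:
// a key whose token is <= the last written token aborts the write, producing
// the "Last written key ... >= current key ..." RuntimeException in the log.
public class KeyOrderCheck {
    private BigInteger lastWrittenToken = null;

    public void beforeAppend(BigInteger token) {
        if (lastWrittenToken != null && lastWrittenToken.compareTo(token) >= 0)
            throw new RuntimeException(
                "Last written key " + lastWrittenToken + " >= current key " + token);
        lastWrittenToken = token;
    }

    public static void main(String[] args) {
        KeyOrderCheck writer = new KeyOrderCheck();
        // In-order append succeeds.
        writer.beforeAppend(new BigInteger("14775611966645399672119169777260659240"));
        try {
            // A key that does not sort after the previous one is rejected.
            writer.beforeAppend(new BigInteger("14775611966645399672119169777260659240"));
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Because compaction merges already-sorted inputs, this exception firing means a key in one of the input SSTables was not what it should be, i.e. the data was corrupted before or during the flush.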

[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Assignee: Benedict  (was: Marcus Eriksson)

Yes, hsha/offheap_objects seems to be the only combination this reproduces on (unless 
it is very non-deterministic with other setups).
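Consistent with that combination being the trigger, the "last written key" in the logged exception (80010001...62617463685f6d757461746500) appears to decode to a Thrift binary-protocol call header for {{batch_mutate}}, i.e. raw request-frame bytes apparently ended up stored as a partition key. A hedged sketch of how such a thing can happen when a server reuses a request buffer and the storage layer keeps a reference into it instead of copying the key bytes (entirely hypothetical names; not Cassandra code):

```java
import java.nio.charset.StandardCharsets;

// Illustration of the suspected hazard: a per-connection frame buffer is
// reused across requests, so any retained reference into it is silently
// overwritten by the next request.
public class BufferReuseHazard {
    // BUG under illustration: returns a reference into the shared frame
    // buffer instead of a defensive copy (e.g. Arrays.copyOf(frame, len)).
    static byte[] storeKeyUnsafely(byte[] sharedFrame, int len) {
        return sharedFrame;
    }

    public static void main(String[] args) {
        byte[] frame = new byte[32];   // reused per-connection request buffer

        // Request 1: key bytes land in the shared buffer and are "stored".
        byte[] key1 = "rowkey008".getBytes(StandardCharsets.UTF_8);
        System.arraycopy(key1, 0, frame, 0, key1.length);
        byte[] stored = storeKeyUnsafely(frame, key1.length);

        // Request 2 reuses the same buffer, clobbering the stored key with
        // its own protocol bytes.
        byte[] key2 = "batch_mutate".getBytes(StandardCharsets.UTF_8);
        System.arraycopy(key2, 0, frame, 0, key2.length);

        String recovered = new String(stored, 0, key2.length, StandardCharsets.UTF_8);
        System.out.println("stored key now reads: " + recovered); // prints "batch_mutate"
    }
}
```

Heap and on-heap/off-heap buffer memtables copy incoming bytes on write, which would explain why only the offheap_objects path, which keeps longer-lived native references, exposes the reuse.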


[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Attachment: repro8719.sh

Attaching a script that reproduces this.


[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-10 Thread Marcus Eriksson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcus Eriksson updated CASSANDRA-8719:
---
Reproduced In: 2.1.0, 2.1.3  (was: 2.1.0)


[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8719:
---
Assignee: Marcus Eriksson


[jira] [Updated] (CASSANDRA-8719) Using thrift HSHA with offheap_objects appears to corrupt data

2015-02-02 Thread Philip Thompson (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philip Thompson updated CASSANDRA-8719:
---
Fix Version/s: 2.1.3
