[ https://issues.apache.org/jira/browse/CASSANDRA-4733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13922300#comment-13922300 ]

Serj Veras edited comment on CASSANDRA-4733 at 3/6/14 11:10 AM:
----------------------------------------------------------------

I have the same error using Cassandra 2.0.5.22 (DataStax package).
I run 3 DCs with 3 nodes each. The error occurred during a massive insert
workload in one of the DCs. The target CF has a replication factor of 2 in
each DC.
My Cassandra settings are attached as "Serj_Veras_cassandra.yaml".

{code}
ERROR [CompactionExecutor:26] 2014-03-06 10:18:45,760 CassandraDaemon.java (line 196) Exception in thread Thread[CompactionExecutor:26,1,main]
java.lang.RuntimeException: Last written key DecoratedKey(-3718191715883699976, 36633732653439302d303730632d343139352d386461342d333736383265393965316335) >= current key DecoratedKey(-7629226534008815744, 62306334323161342d663662362d346364632d383965382d306563343832376639316536) writing into /data/db/cassandra/data/Sync/sy/Sync-sy-tmp-jb-41-Data.db
        at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:142)
        at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:165)
        at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:160)
        at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
        at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60)
        at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59)
        at org.apache.cassandra.db.compaction.CompactionManager$BackgroundCompactionTask.run(CompactionManager.java:197)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:744)
{code}
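For reference, the invariant the compaction thread trips over can be sketched as follows. This is a simplified stand-in, not Cassandra's actual SSTableWriter: the class and field names below (other than the DecoratedKey/beforeAppend naming taken from the log) are hypothetical. The point it illustrates is that beforeAppend requires every partition key to sort strictly after the last key written, so a key arriving out of token order raises exactly this RuntimeException.

```java
// Minimal sketch of the ordering invariant behind "Last written key >= current
// key". SSTableOrderCheck and its fields are illustrative, not Cassandra code.
public class SSTableOrderCheck {
    static final class DecoratedKey implements Comparable<DecoratedKey> {
        final long token;    // e.g. a Murmur3 hash of the partition key
        final String keyHex; // hex of the raw key bytes, as shown in the log

        DecoratedKey(long token, String keyHex) {
            this.token = token;
            this.keyHex = keyHex;
        }

        @Override
        public int compareTo(DecoratedKey other) {
            return Long.compare(this.token, other.token);
        }
    }

    private DecoratedKey lastWrittenKey;

    // Mirrors the beforeAppend check: keys must arrive in strictly
    // increasing token order, otherwise the writer throws.
    void append(DecoratedKey key) {
        if (lastWrittenKey != null && lastWrittenKey.compareTo(key) >= 0)
            throw new RuntimeException("Last written key " + lastWrittenKey.token
                    + " >= current key " + key.token);
        lastWrittenKey = key;
    }

    public static void main(String[] args) {
        SSTableOrderCheck writer = new SSTableOrderCheck();
        // The two tokens from the log: -3718191715883699976 was written first,
        // then -7629226534008815744 arrived, which sorts before it.
        writer.append(new DecoratedKey(-3718191715883699976L, "keyA"));
        try {
            writer.append(new DecoratedKey(-7629226534008815744L, "keyB"));
        } catch (RuntimeException e) {
            System.out.println("out-of-order: " + e.getMessage());
        }
    }
}
```

In other words, the exception itself is just the writer's sanity check firing; the bug is in whatever fed it keys out of token order.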

Here is the state of my cluster after the error occurred. DC3 is the
destination of the workload writes.
{code}
Datacenter: DC1
==========
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258602
10.0.0.163  RAC1  Up      Normal  14.43 GB  33.33%  -9223372036854775808
10.0.0.166  RAC0  Up      Normal  14.41 GB  33.33%  -3074457345618258603
10.0.0.167  RAC2  Up      Normal  14.33 GB  33.33%  3074457345618258602

Datacenter: DC2
==========
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258603
10.0.1.145  RAC0  Up      Normal  14.46 GB  0.00%   -9223372036854775807
10.0.1.147  RAC1  Up      Normal  14.39 GB  0.00%   -3074457345618258602
10.0.1.149  RAC2  Up      Normal  14.43 GB  0.00%   3074457345618258603

Datacenter: DC3
==========
Address     Rack  Status  State   Load      Owns    Token
                                                    3074457345618258604
10.0.2.47   RAC0  Down    Normal  12.84 GB  0.00%   -9223372036854775806
10.0.2.49   RAC1  Down    Normal  13.69 GB  0.00%   -3074457345618258601
10.0.2.51   RAC2  Down    Normal  12.34 GB  0.00%   3074457345618258604
{code}



> Last written key >= current key exception when streaming
> --------------------------------------------------------
>
>                 Key: CASSANDRA-4733
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-4733
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.2.0 beta 1
>            Reporter: Brandon Williams
>            Assignee: Yuki Morishita
>             Fix For: 1.2.0 beta 2
>
>         Attachments: Serj_Veras_cassandra.yaml
>
>
> {noformat}
> ERROR 16:52:56,260 Exception in thread Thread[Streaming to /10.179.111.137:1,5,main]
> java.lang.RuntimeException: java.io.IOException: Connection reset by peer
>         at com.google.common.base.Throwables.propagate(Throwables.java:160)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileDispatcher.write0(Native Method)
>         at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
>         at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:72)
>         at sun.nio.ch.IOUtil.write(IOUtil.java:43)
>         at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
>         at java.nio.channels.Channels.writeFullyImpl(Channels.java:59)
>         at java.nio.channels.Channels.writeFully(Channels.java:81)
>         at java.nio.channels.Channels.access$000(Channels.java:47)
>         at java.nio.channels.Channels$1.write(Channels.java:155)
>         at com.ning.compress.lzf.ChunkEncoder.encodeAndWriteChunk(ChunkEncoder.java:133)
>         at com.ning.compress.lzf.LZFOutputStream.writeCompressedBlock(LZFOutputStream.java:203)
>         at com.ning.compress.lzf.LZFOutputStream.write(LZFOutputStream.java:97)
>         at org.apache.cassandra.streaming.FileStreamTask.write(FileStreamTask.java:218)
>         at org.apache.cassandra.streaming.FileStreamTask.stream(FileStreamTask.java:164)
>         at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
>         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>         ... 3 more
> ERROR 16:53:03,951 Exception in thread Thread[Thread-11,5,main]
> java.lang.RuntimeException: Last written key DecoratedKey(113424593524874987650593774422007331058, 3036303936343535) >= current key DecoratedKey(59229538317742990547810678738983628664, 3036313133373139) writing into /var/lib/cassandra/data/Keyspace1-Standard1-tmp-ia-95-Data.db
>         at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:132)
>         at org.apache.cassandra.io.sstable.SSTableWriter.appendFromStream(SSTableWriter.java:208)
>         at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:164)
>         at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:107)
>         at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:220)
>         at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:165)
>         at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:65)
> {noformat}
> I didn't do anything fancy here: I just inserted about 6M keys at rf=2, then
> ran repair and got this.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
