[ https://issues.apache.org/jira/browse/CASSANDRA-16349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17499330#comment-17499330 ]

Ekaterina Dimitrova commented on CASSANDRA-16349:
-------------------------------------------------

I am on my phone right now, but it seems you are pointing to the run from a few 
days ago, which prompted Markus to ping me; I made the commit earlier today, so 
the test should no longer be failing. 

> SSTableLoader reports error when SSTable(s) do not have data for some nodes
> ---------------------------------------------------------------------------
>
>                 Key: CASSANDRA-16349
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-16349
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tool/sstable
>            Reporter: Serban Teodorescu
>            Assignee: Serban Teodorescu
>            Priority: Normal
>             Fix For: 4.1, 4.0.4
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> Running SSTableLoader in verbose mode will show error(s) if there are node(s) 
> that do not own any data from the SSTable(s). This can happen in at least 2 
> cases:
>  # SSTableLoader is used to stream backups while keeping the same token ranges
>  # SSTable(s) are created with CQLSSTableWriter to match token ranges (this 
> can bring better performance by using ZeroCopy streaming)
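> For reference, a minimal sketch of the CQLSSTableWriter usage assumed in 
> case 2 (not part of this ticket); the keyspace, table, directory, and rows 
> are placeholders, and splitting the output per node token range is left to 
> the caller:
> {code:java}
> import java.io.IOException;
> import org.apache.cassandra.io.sstable.CQLSSTableWriter;
>
> public class WriteSSTables
> {
>     public static void main(String[] args) throws IOException
>     {
>         // Placeholder schema and statement; in the scenario above, one writer
>         // (and one output directory) would be used per target token range.
>         String schema = "CREATE TABLE ks.tbl (pk int PRIMARY KEY, val text)";
>         String insert = "INSERT INTO ks.tbl (pk, val) VALUES (?, ?)";
>
>         try (CQLSSTableWriter writer = CQLSSTableWriter.builder()
>                                                        .inDirectory("/tmp/sstables/ks/tbl")
>                                                        .forTable(schema)
>                                                        .using(insert)
>                                                        .build())
>         {
>             writer.addRow(1, "one");
>             writer.addRow(2, "two");
>         }
>         // The resulting directory is then fed to sstableloader.
>     }
> }
> {code}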
> Partial output of the SSTableLoader:
> {quote}ERROR 02:47:47,842 [Stream #fa8e73b0-3da5-11eb-9c47-c5d27ae8fe47] 
> Remote peer /127.0.0.4:7000 failed stream session.
> ERROR 02:47:47,842 [Stream #fa8e73b0-3da5-11eb-9c47-c5d27ae8fe47] Remote peer 
> /127.0.0.3:7000 failed stream session.
> progress: [/127.0.0.4:7000]0:0/1 100% [/127.0.0.3:7000]0:0/1 100% 
> [/127.0.0.2:7000]0:7/7 100% [/127.0.0.1:7000]0:7/7 100% total: 100% 
> 0.000KiB/s (avg: 1.611KiB/s)
> progress: [/127.0.0.4:7000]0:0/1 100% [/127.0.0.3:7000]0:0/1 100% 
> [/127.0.0.2:7000]0:7/7 100% [/127.0.0.1:7000]0:7/7 100% total: 100% 
> 0.000KiB/s (avg: 1.611KiB/s)
> progress: [/127.0.0.4:7000]0:0/1 100% [/127.0.0.3:7000]0:0/1 100% 
> [/127.0.0.2:7000]0:7/7 100% [/127.0.0.1:7000]0:7/7 100% total: 100% 
> 0.000KiB/s (avg: 1.515KiB/s)
> progress: [/127.0.0.4:7000]0:0/1 100% [/127.0.0.3:7000]0:0/1 100% 
> [/127.0.0.2:7000]0:7/7 100% [/127.0.0.1:7000]0:7/7 100% total: 100% 
> 0.000KiB/s (avg: 1.427KiB/s)
> {quote}
>  
> Stack trace:
> {quote}java.util.concurrent.ExecutionException: 
> org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> com.google.common.util.concurrent.AbstractFuture.getDoneValue(AbstractFuture.java:552)
> at 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:533)
> at org.apache.cassandra.tools.BulkLoader.load(BulkLoader.java:99)
> at org.apache.cassandra.tools.BulkLoader.main(BulkLoader.java:49)
> Caused by: org.apache.cassandra.streaming.StreamException: Stream failed
> at 
> org.apache.cassandra.streaming.management.StreamEventJMXNotifier.onFailure(StreamEventJMXNotifier.java:88)
> at 
> com.google.common.util.concurrent.Futures$CallbackListener.run(Futures.java:1056)
> at 
> com.google.common.util.concurrent.DirectExecutor.execute(DirectExecutor.java:30)
> at 
> com.google.common.util.concurrent.AbstractFuture.executeListener(AbstractFuture.java:1138)
> at 
> com.google.common.util.concurrent.AbstractFuture.complete(AbstractFuture.java:958)
> at 
> com.google.common.util.concurrent.AbstractFuture.setException(AbstractFuture.java:748)
> at 
> org.apache.cassandra.streaming.StreamResultFuture.maybeComplete(StreamResultFuture.java:220)
> at 
> org.apache.cassandra.streaming.StreamResultFuture.handleSessionComplete(StreamResultFuture.java:196)
> at 
> org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:505)
> at 
> org.apache.cassandra.streaming.StreamSession.complete(StreamSession.java:819)
> at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:595)
> at 
> org.apache.cassandra.streaming.async.StreamingInboundHandler$StreamDeserializingTask.run(StreamingInboundHandler.java:189)
> at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> at java.base/java.lang.Thread.run(Thread.java:844)
> {quote}
> To reproduce: create a cluster with ccm that has more nodes than the RF, put 
> some data into it, copy an SSTable, and stream it with SSTableLoader.
>  
> The error originates on the nodes, the following stack trace is shown in the 
> logs:
> {quote}java.lang.IllegalStateException: Stream hasn't been read yet
>         at 
> com.google.common.base.Preconditions.checkState(Preconditions.java:507)
>         at 
> org.apache.cassandra.db.streaming.CassandraIncomingFile.getSize(CassandraIncomingFile.java:96)
>         at 
> org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:789)
>         at 
> org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:587)
>         at 
> org.apache.cassandra.streaming.async.StreamingInboundHandler$StreamDeserializingTask.run(StreamingInboundHandler.java:189)
>         at 
> io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
>         at java.base/java.lang.Thread.run(Thread.java:844)
> {quote}
>  
> The error is thrown because the stream size is read before any data was 
> received. The solution would be to not stream to such nodes at all; 
> SSTableLoader.java already inspects each SSTable to determine which parts of 
> it map to each node's token ranges. 
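> Below is a minimal, hypothetical sketch of that idea (names and types are 
> placeholders, not actual SSTableLoader/Cassandra internals): given the 
> per-endpoint sections already computed from the SSTable, drop endpoints whose 
> section list is empty before any stream session is created for them:
> {code:java}
> import java.net.InetSocketAddress;
> import java.util.HashMap;
> import java.util.List;
> import java.util.Map;
>
> public final class SkipEmptyEndpoints
> {
>     /**
>      * Returns only the endpoints that actually own part of the SSTable.
>      * Endpoints mapped to an empty section list are dropped, so no stream
>      * session (and no "Stream hasn't been read yet" failure) is created
>      * for them.
>      */
>     static <S> Map<InetSocketAddress, List<S>> withoutEmptyEndpoints(Map<InetSocketAddress, List<S>> sectionsByEndpoint)
>     {
>         Map<InetSocketAddress, List<S>> filtered = new HashMap<>();
>         for (Map.Entry<InetSocketAddress, List<S>> e : sectionsByEndpoint.entrySet())
>             if (!e.getValue().isEmpty())
>                 filtered.put(e.getKey(), e.getValue());
>         return filtered;
>     }
> }
> {code}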


