[ https://issues.apache.org/jira/browse/CASSANDRA-18824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17805741#comment-17805741 ]

Szymon Miezal commented on CASSANDRA-18824:
-------------------------------------------

Yes, it failed in the final assertion, which is odd. The reason for starting the 
1k run for 3.11 again was that I was curious whether this is a failure that 
occurs more frequently on 3.11 than on other versions. I would think that if 
there is any flakiness then it exists in all versions and those failures will 
surface again. I also wonder whether any failures of that test were recorded on 
4.x in the past.

We have two options:
 * Try to track down why this failure happens - is the test case itself 
imperfect, or is there still a race in the code?
 * Merge what we have, considering it's still an improvement over not guarding 
against the unsafe cleanup at all.

> Backport CASSANDRA-16418: Cleanup behaviour during node decommission caused 
> missing replica
> -------------------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-18824
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-18824
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Consistency/Bootstrap and Decommission
>            Reporter: Szymon Miezal
>            Assignee: Szymon Miezal
>            Priority: Normal
>             Fix For: 3.0.x, 3.11.x, 4.0.x, 4.1.x, 5.0.x, 5.x
>
>          Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Node decommission triggers data transfer to other nodes. While this transfer 
> is in progress,
> receiving nodes temporarily hold token ranges in a pending state. However, 
> the cleanup process currently doesn't consider these pending ranges when 
> calculating token ownership.
> As a consequence, data that is already stored in sstables gets inadvertently 
> cleaned up.
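> To make the failure mode concrete, here is a self-contained toy model (the 
> Range class, the token numbers and the ownedByAny helper are invented for 
> illustration, not Cassandra code): cleanup keeps a row only if its token falls 
> inside a locally owned range, so a row that was just streamed in for a pending 
> range is treated as foreign and dropped.
>
>     import java.util.List;
>
>     public class CleanupOwnershipExample
>     {
>         // Half-open token range (left, right].
>         static final class Range
>         {
>             final long left, right;
>             Range(long left, long right) { this.left = left; this.right = right; }
>             boolean contains(long token) { return token > left && token <= right; }
>         }
>
>         static boolean ownedByAny(long token, List<Range> ranges)
>         {
>             return ranges.stream().anyMatch(r -> r.contains(token));
>         }
>
>         public static void main(String[] args)
>         {
>             List<Range> localRanges   = List.of(new Range(50, 99)); // node 2's ranges before the decommission
>             List<Range> pendingRanges = List.of(new Range(0, 50));  // range being streamed from node 1
>
>             long streamedRowToken = 20; // a row node 2 received from the decommissioning node 1
>
>             // Behaviour described above: only local ranges are consulted, so the row is dropped.
>             System.out.println(ownedByAny(streamedRowToken, localRanges)); // false -> cleaned up
>
>             // If pending ranges were also considered, the row would survive cleanup.
>             List<Range> localAndPending = List.of(localRanges.get(0), pendingRanges.get(0));
>             System.out.println(ownedByAny(streamedRowToken, localAndPending)); // true -> kept
>         }
>     }
>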
> STR:
>  * Create a two-node cluster
>  * Create a keyspace with RF=1
>  * Insert sample data (assert the data is available when querying both nodes)
>  * Start the decommission process on node 1
>  * Run cleanup in a loop on node 2 until the decommission on node 1 finishes
>  * Verify that all rows are in the cluster - this will fail because the 
> previous step removed some of the rows
> It seems that the cleanup process does not take the pending ranges into 
> account; it uses only the local ranges - 
> [https://github.com/apache/cassandra/blob/caad2f24f95b494d05c6b5d86a8d25fbee58d7c2/src/java/org/apache/cassandra/db/compaction/CompactionManager.java#L466].
> There are two solutions to the problem.
> One would be to change the cleanup process so that it starts taking pending 
> ranges into account. Even though it might sound tempting at first, it would 
> require invasive changes and a lot of testing effort.
> Alternatively, we could interrupt/prevent the cleanup process from running 
> when any pending range is detected on the node. That sounds like a reasonable 
> solution to the problem and something that is relatively easy to implement.
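> A minimal, self-contained sketch of that second approach (the names 
> PendingRangeSource, CleanupStatus and performCleanup are invented for 
> illustration; this is not the actual CASSANDRA-16418 patch): the cleanup entry 
> point first asks for the node's pending ranges and aborts if there are any, 
> instead of deleting data that is only covered by an in-flight range.
>
>     import java.util.Collection;
>
>     public class CleanupGuardSketch
>     {
>         enum CleanupStatus { SUCCESSFUL, ABORTED }
>
>         /** Hypothetical view of the pending ranges this node holds for a keyspace. */
>         interface PendingRangeSource
>         {
>             Collection<?> pendingRanges(String keyspace);
>         }
>
>         static CleanupStatus performCleanup(String keyspace,
>                                             PendingRangeSource tokenMetadata,
>                                             Runnable runCleanup)
>         {
>             // The guard: refuse to clean up while any range is still pending for this keyspace.
>             if (!tokenMetadata.pendingRanges(keyspace).isEmpty())
>             {
>                 System.out.println("Cleanup cannot run while node has pending ranges for " + keyspace);
>                 return CleanupStatus.ABORTED;
>             }
>             runCleanup.run();
>             return CleanupStatus.SUCCESSFUL;
>         }
>     }
>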
> The bug has already been fixed in 4.x with CASSANDRA-16418; the goal of this 
> ticket is to backport it to 3.x.



