[ https://issues.apache.org/jira/browse/CASSANDRA-12510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15838217#comment-15838217 ]

Nick Bailey commented on CASSANDRA-12510:
-----------------------------------------

Removing the no-argument version of decommission here was a breaking API change 
that we should have made in a major release (which tick-tock makes awkward, but 
still). This is more of an informational comment than anything, since I'm not 
sure it's worth fixing now, but it's a reminder to keep an eye out for breaking 
JMX changes.
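
For context, the breakage only shows up for external tools that drive decommission 
over JMX rather than through nodetool. Below is a minimal sketch of such a caller, 
assuming the standard StorageService MBean name, the default JMX port, and that the 
no-argument operation was replaced by one taking a single boolean force flag; those 
details are assumptions for illustration, not confirmed by this ticket.

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class DecommissionOverJmx {
    public static void main(String[] args) throws Exception {
        // Default Cassandra JMX endpoint on localhost (port 7199).
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://127.0.0.1:7199/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName storageService =
                    new ObjectName("org.apache.cassandra.db:type=StorageService");

            // Pre-change callers invoked the no-argument operation. Against a
            // node that no longer exposes it, this throws a ReflectionException
            // (no such operation), which is the breaking JMX change noted above.
            mbs.invoke(storageService, "decommission", new Object[0], new String[0]);

            // Assumed post-change invocation, passing a single boolean flag:
            // mbs.invoke(storageService, "decommission",
            //         new Object[] { Boolean.FALSE }, new String[] { "boolean" });
        }
    }
}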

> Disallow decommission when number of replicas will drop below configured RF
> ---------------------------------------------------------------------------
>
>                 Key: CASSANDRA-12510
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-12510
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Streaming and Messaging
>         Environment: C* version 3.3
>            Reporter: Atin Sood
>            Assignee: Kurt Greaves
>            Priority: Minor
>              Labels: lhf
>             Fix For: 3.12
>
>         Attachments: 12510-3.x.patch, 12510-3.x-v2.patch
>
>
> Steps to replicate:
> - Create a 3-node cluster in DC1 and create a keyspace test_keyspace with a 
> table test_table, using replication strategy NetworkTopologyStrategy with 
> DC1=3. Populate some data into this table.
> - Add 5 more nodes to the cluster, but in DC2. Do not alter the keyspace to 
> add DC2 to its replication (this is intentional and the reason the bug shows 
> up), so DESCRIBE KEYSPACE should still list NetworkTopologyStrategy with 
> DC1=3 as the RF.
> - As expected, this is now an 8-node cluster with 3 nodes in DC1 and 5 in 
> DC2.
> - Now start decommissioning the nodes in DC1. The decommission runs fine on 
> all 3 nodes, but since the new nodes are in DC2 and the keyspace's 
> replication is restricted to DC1, the 5 new nodes won't receive any data.
> - You will end up with a 5-node cluster holding none of the data from the 3 
> decommissioned nodes, i.e. data loss.
> I do understand that this problem could have been avoided by running an 
> ALTER KEYSPACE statement to add DC2 replication before adding the 5 nodes 
> (as sketched below). But the fact that decommission ran fine on the 3 DC1 
> nodes without complaining that there were no nodes to stream their data to 
> is a little disconcerting.
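
The mitigation mentioned in the report can be scripted. A minimal sketch follows, 
assuming the DataStax Java driver 4.x, a contact point in DC2, and an illustrative 
RF of 3 for DC2; the driver version, address, and chosen RF are assumptions, not 
part of the ticket. After altering replication, each DC2 node still needs 
"nodetool rebuild -- DC1" so the existing data is actually streamed over.

import java.net.InetSocketAddress;

import com.datastax.oss.driver.api.core.CqlSession;

public class AddDc2Replication {
    public static void main(String[] args) {
        // Connect through a DC2 node; keyspace and DC names follow the
        // reproduction steps above.
        try (CqlSession session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("DC2")
                .build()) {
            // Extend replication to DC2 *before* decommissioning anything in
            // DC1, then rebuild the DC2 nodes from DC1.
            session.execute(
                    "ALTER KEYSPACE test_keyspace WITH replication = "
                  + "{'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3}");
        }
    }
}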



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)