[ 
https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13958664#comment-13958664
 ] 

Benedict commented on CASSANDRA-6696:
-------------------------------------

A further suggestion: whilst we know vnodes don't currently distribute 
perfectly, this would be much simpler and more robust if we said that each disk 
simply gets assigned 1/#disks contiguous portion of the total (global) token 
range. This way, once we migrate to the new layout we _never have to worry 
about it again_. As things stand, any addition or removal of a single node, or 
change in RF, triggers a need to rewrite _the entire cluster_. Whilst the 
current approach does ensure even distribution across the disks, it seems we 
leave some major holes in the protection we're offering, and filling them may 
be error prone (and certainly costly).
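To make the suggestion concrete, here is a minimal sketch (illustrative only, not Cassandra code; class and method names are hypothetical) of assigning each disk a 1/#disks contiguous portion of the full Murmur3 token range, so the per-disk boundaries depend only on the number of disks and never on cluster membership or RF:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split the global Murmur3 token range
// [Long.MIN_VALUE, Long.MAX_VALUE] into numDisks contiguous,
// roughly equal sub-ranges, one per disk.
public class DiskRangeSplit
{
    // Returns the (inclusive) upper token boundary of each disk;
    // disk i owns tokens in (boundary[i-1], boundary[i]].
    public static List<Long> diskBoundaries(int numDisks)
    {
        List<Long> boundaries = new ArrayList<>();
        // The full range spans 2^64 tokens; stride through it in
        // numDisks equal steps using unsigned division.
        long stride = Long.divideUnsigned(-1L, numDisks);
        long token = Long.MIN_VALUE;
        for (int i = 0; i < numDisks - 1; i++)
        {
            token += stride;
            boundaries.add(token);
        }
        boundaries.add(Long.MAX_VALUE); // last disk absorbs the remainder
        return boundaries;
    }

    public static void main(String[] args)
    {
        System.out.println(diskBoundaries(4));
    }
}
```

Because the boundaries are a pure function of the disk count, adding or removing nodes, or changing RF, never forces sstables to move between disks.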

So, my suggestion is that we permit this feature only for vnodes. We can, at 
the same time, perhaps visit the question of more deterministically allocating 
vnode ranges so that the cluster is evenly distributed.

[~kohlisankalp], what do you think?


> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>             Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A,B and C and RF=3 and GC grace=10days. 
> row=sankalp col=sankalp is written 20 days back and successfully went to all 
> three nodes. 
> Then a delete/tombstone was written successfully for the same row column 15 
> days back. 
> Since this tombstone is older than gc grace, it was purged on nodes A and B 
> when it was compacted together with the actual data. So there is no trace of 
> this row column on nodes A and B.
> Now in node C, say the original data is in drive1 and tombstone is in drive2. 
> Compaction has not yet reclaimed the data and tombstone.  
> Drive2 becomes corrupt and is replaced with a new, empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now after replacing the drive we run repair. This data will be propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  
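The sequence above can be reduced to a toy model (illustrative only, not Cassandra code; the class, the `resolve` helper, and the string markers are all hypothetical): node C's read path merges both drives, so the tombstone on drive2 shadows the data on drive1 until drive2 is lost.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the resurrection scenario: on node C the live cell sits on
// drive1 and its tombstone on drive2; replacing drive2 with an empty disk
// silently brings the "deleted" cell back.
public class ResurrectionSketch
{
    // A read merges both drives; a tombstone, wherever it lives, shadows the data.
    static String resolve(Map<String, String> drive1, Map<String, String> drive2, String key)
    {
        if ("TOMBSTONE".equals(drive1.get(key)) || "TOMBSTONE".equals(drive2.get(key)))
            return "deleted";
        String v = drive2.get(key);
        return v != null ? v : drive1.get(key);
    }

    public static void main(String[] args)
    {
        Map<String, String> drive1 = new HashMap<>();
        Map<String, String> drive2 = new HashMap<>();
        drive1.put("sankalp:sankalp", "DATA");      // written ~20 days ago
        drive2.put("sankalp:sankalp", "TOMBSTONE"); // delete written ~15 days ago
        // (Nodes A and B already compacted data + tombstone away: no trace there.)

        System.out.println(resolve(drive1, drive2, "sankalp:sankalp")); // deleted

        drive2.clear(); // corrupt drive2 replaced with a new empty drive
        System.out.println(resolve(drive1, drive2, "sankalp:sankalp")); // DATA is back
        // A subsequent repair streams this resurrected DATA to A and B as well.
    }
}
```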



--
This message was sent by Atlassian JIRA
(v6.2#6252)
