[ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986688#comment-13986688 ]

Marcus Eriksson commented on CASSANDRA-6696:
--------------------------------------------

Pushed a semi-working sstable-per-vnode version here: 
https://github.com/krummas/cassandra/commits/marcuse/6696-3 (by no means 
review-ready)

* flushes to vnode-separate sstables, spread out over the available disks
* keeps the sstables separate during compaction: for STCS by grouping the 
compaction buckets by overlapping sstables, and for LCS by keeping a separate 
manifest for every vnode (see the sketch after this list).
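
To make the flush path concrete, here is a minimal, self-contained sketch of 
the one-sstable-per-vnode idea: each flushed partition is routed to the writer 
that owns its token range. The names and types here (VnodeRange, the toy ring, 
the writer map) are illustrative assumptions for this comment, not the actual 
code on the branch:

{code:java}
import java.util.*;

// Hedged sketch: route each flushed partition to a per-vnode "writer".
// Names and types are illustrative, not the branch's actual code.
public class VnodeFlushSketch {
    // A vnode owns the token range (start, end].
    record VnodeRange(long start, long end) {
        boolean contains(long token) { return token > start && token <= end; }
    }

    public static void main(String[] args) {
        // Toy ring: 4 local ranges instead of the 768 discussed above.
        List<VnodeRange> ring = List.of(
                new VnodeRange(Long.MIN_VALUE, -100),
                new VnodeRange(-100, 0),
                new VnodeRange(0, 100),
                new VnodeRange(100, Long.MAX_VALUE));

        // One sstable writer per vnode range; a token list stands in for
        // a real sstable writer here.
        Map<VnodeRange, List<Long>> writers = new LinkedHashMap<>();
        ring.forEach(r -> writers.put(r, new ArrayList<>()));

        // Flush: every memtable partition goes to the writer whose range
        // owns its token, yielding one sstable per vnode.
        long[] memtableTokens = {-250, -42, 7, 99, 12345};
        for (long token : memtableTokens) {
            for (VnodeRange r : ring) {
                if (r.contains(token)) { writers.get(r).add(token); break; }
            }
        }
        writers.forEach((r, tokens) ->
                System.out.println("sstable for " + r + " -> " + tokens));
    }
}
{code}

Keeping each sstable's data inside a single vnode range is what would make it 
safe to treat the ranges independently during compaction and drive replacement.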

Still quite broken, but I think it's good enough to evaluate whether we want to 
go this way. The main drawback is that it takes a long time to flush to 768 
sstables instead of one (768 = num_tokens of 256 times RF of 3). Doing 768 
parallel compactions is also quite heavy. 

Unless anyone has a brilliant idea for making flushing and compaction less 
heavy, I think we need some sort of balance here: maybe grouping the vnodes (8 
or 16 vnodes per sstable, perhaps? see the sketch below) so that we flush a 
more reasonable number of sstables, or even just going with the per-disk 
approach? 
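
To put numbers on the grouping idea, a quick sketch of how the per-flush 
sstable count shrinks with the group size; the consecutive-by-index grouping 
rule at the end is just an assumption for illustration:

{code:java}
// Hedged sketch of the proposed middle ground: group N vnode ranges per
// sstable instead of one sstable per vnode. 768 matches the example above
// (num_tokens=256, RF=3); the grouping rule is an illustrative assumption.
public class VnodeGroupingSketch {
    public static void main(String[] args) {
        int localRanges = 256 * 3; // num_tokens * RF = 768
        for (int groupSize : new int[]{1, 8, 16}) {
            // Ceiling division, since the ranges may not divide evenly.
            int sstablesPerFlush = (localRanges + groupSize - 1) / groupSize;
            System.out.printf("%2d vnodes per sstable -> %3d sstables per flush%n",
                    groupSize, sstablesPerFlush);
        }
        // A vnode's group could simply be its index divided by the group size:
        int vnodeIndex = 42, groupSize = 8;
        System.out.println("vnode " + vnodeIndex + " -> group " + vnodeIndex / groupSize);
    }
}
{code}

With 8 vnodes per sstable that is 96 sstables per flush, and 48 with 16, which 
is at least in the same ballpark as today's single sstable.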

> Drive replacement in JBOD can cause data to reappear. 
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>             Fix For: 3.0
>
>
> In JBOD, when someone gets a bad drive, the bad drive is replaced with a new 
> empty one and repair is run. 
> This can cause deleted data to come back in some cases. The same is true for 
> corrupt sstables, where we delete the corrupt sstable and run repair. 
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace = 10 days. 
> row=sankalp col=sankalp was written 20 days ago and successfully went to all 
> three nodes. 
> Then a delete/tombstone was successfully written for the same row column 15 
> days ago. 
> Since this tombstone is older than GC grace, it was purged on nodes A and B 
> when it was compacted together with the actual data, so there is no trace of 
> this row column on nodes A and B. 
> Now on node C, say the original data is on drive1 and the tombstone is on 
> drive2, and compaction has not yet reclaimed either of them. 
> Drive2 becomes corrupt and is replaced with a new empty drive. 
> Due to the replacement, the tombstone is now gone and row=sankalp col=sankalp 
> has come back to life. 
> Now, after replacing the drive, we run repair, and this data is propagated to 
> all nodes. 
> Note: This is still a problem even if we run repair every gc grace. 
>  
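
For readers skimming the scenario above: the resurrection hinges on the GC 
grace purge rule, i.e. a tombstone can only be dropped at compaction once it is 
older than gc_grace *and* the data it shadows participates in the same 
compaction. A minimal sketch of that condition (field and method names are 
illustrative, not Cassandra's actual implementation):

{code:java}
import java.util.concurrent.TimeUnit;

// Hedged sketch of the gc_grace purge rule from the scenario above;
// names are illustrative, not Cassandra's actual implementation.
public class TombstonePurgeSketch {
    static final long GC_GRACE_SECONDS = TimeUnit.DAYS.toSeconds(10);

    // A tombstone may be dropped at compaction once it is older than
    // gc_grace AND all data it shadows participates in the compaction.
    static boolean purgeable(long deletionTimeSeconds, long nowSeconds,
                             boolean shadowedDataInThisCompaction) {
        return nowSeconds - deletionTimeSeconds > GC_GRACE_SECONDS
                && shadowedDataInThisCompaction;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis() / 1000;
        long fifteenDaysAgo = now - TimeUnit.DAYS.toSeconds(15);

        // Nodes A and B: data and tombstone are compacted together, so the
        // 15-day-old tombstone (> 10 days gc_grace) is purged with the data.
        System.out.println("A/B purge: " + purgeable(fifteenDaysAgo, now, true));

        // Node C: data on drive1 and tombstone on drive2 have not yet met
        // in a compaction, so the tombstone must survive to shadow the data
        // (until drive2 dies and takes the tombstone with it).
        System.out.println("C purge:   " + purgeable(fifteenDaysAgo, now, false));
    }
}
{code}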



--
This message was sent by Atlassian JIRA
(v6.2#6252)
