[ https://issues.apache.org/jira/browse/CASSANDRA-6696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13989356#comment-13989356 ]
Marcus Eriksson commented on CASSANDRA-6696:
--------------------------------------------

summing up the discussion:
* one "stripe" is one vnode
* we flush to big files in L0, one file per disk, or perhaps group a bunch of vnodes together to increase the number of parallel L0 -> L1 compactions we can do

for STCS:
* we introduce L0 for STCS
* when we end up with a given number of overlapping L0 files (4), we compact them together and create per-vnode L1 files
* major compaction: include all files in the compaction, write #vnodes files

for LCS:
* we introduce a leveled manifest per vnode
* L0 is "global"
* when doing L0 -> L1 compactions, we end up with one file per involved vnode stripe in L1; here we can gain a lot by not flushing too-big L0 files
* we still do STCS within L0 if we get too much data there, making sure we only compact overlapping files

anything I missed/misunderstood?

> Drive replacement in JBOD can cause data to reappear.
> ------------------------------------------------------
>
>                 Key: CASSANDRA-6696
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-6696
>             Project: Cassandra
>          Issue Type: Improvement
>          Components: Core
>            Reporter: sankalp kohli
>            Assignee: Marcus Eriksson
>             Fix For: 3.0
>
>
> In JBOD, when a drive goes bad, it is replaced with a new empty one and repair is run.
> This can cause deleted data to come back in some cases. The same is true for corrupt sstables, where we delete the corrupt sstable and run repair.
> Here is an example:
> Say we have 3 nodes A, B and C, with RF=3 and GC grace = 10 days.
> row=sankalp col=sankalp was written 20 days ago and successfully went to all three nodes.
> Then a delete/tombstone was written successfully for the same row and column 15 days ago.
> Since this tombstone is older than GC grace, it got compacted away on nodes A and B together with the actual data. So there is no trace of this row/column on nodes A and B.
> Now on node C, say the original data is on drive1 and the tombstone is on drive2.
> Compaction has not yet reclaimed the data and tombstone.
> Drive2 becomes corrupt and is replaced with a new empty drive.
> Due to the replacement, the tombstone is now gone, and row=sankalp col=sankalp has come back to life.
> Now, after replacing the drive, we run repair. This data will be propagated to all nodes.
> Note: this is still a problem even if we run repair every GC grace period.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
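The resurrection scenario described in the issue above can be reproduced with a toy model. This is a hypothetical illustration, not Cassandra code: the two-drive dictionary, the `latest`/`read` helpers, and the cell layout are all assumptions made for the sketch.

```python
# Toy model of the JBOD failure mode described above: losing the drive
# that holds a tombstone (but not the one holding the data) resurrects
# deleted data once repair propagates it.

def latest(cells):
    """Newest cell wins; a tombstone shadows older data."""
    return max(cells, key=lambda c: c["ts"], default=None)

def read(node):
    """Reconcile cells across all of a node's drives."""
    cells = [c for drive in node.values() for c in drive]
    cell = latest(cells)
    return None if cell is None or cell["tombstone"] else cell["val"]

# Node C with JBOD: data on drive1, tombstone on drive2 (not yet compacted).
node_c = {
    "drive1": [{"key": "sankalp", "val": "sankalp", "ts": 1, "tombstone": False}],
    "drive2": [{"key": "sankalp", "val": None,      "ts": 2, "tombstone": True}],
}

assert read(node_c) is None          # tombstone shadows the data: deleted

node_c["drive2"] = []                # drive2 fails, replaced with an empty drive
assert read(node_c) == "sankalp"     # deleted data is live again

# Repair now streams this resurrected row to A and B, where both data and
# tombstone were already compacted away (tombstone older than GC grace).
```

The point of the sketch is that reconciliation is purely "newest timestamp wins"; once the tombstone's drive is gone, nothing on any node records that a delete ever happened.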
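The per-vnode striping summarized in the comment at the top can be sketched roughly as follows. This is a hypothetical Python illustration under simplified assumptions (an sstable modeled as a token-to-value dict, fixed vnode boundaries, newest-file-wins merging), not the actual Cassandra implementation:

```python
from bisect import bisect_right
from collections import defaultdict

# Hypothetical vnode boundaries partitioning the token range [0, 1000)
# into four stripes; one "stripe" is one vnode, as in the summary.
VNODE_BOUNDARIES = [0, 250, 500, 750, 1000]

def stripe_for_token(token):
    """Index of the vnode stripe that owns this token."""
    return bisect_right(VNODE_BOUNDARIES, token) - 1

def compact_l0_to_l1(l0_sstables):
    """Compact a set of overlapping L0 files together and emit one L1
    file per involved vnode stripe. Each sstable is modeled as a dict
    of token -> value; later (newer) L0 files win on conflict."""
    merged = {}
    for sstable in l0_sstables:          # L0 files ordered oldest -> newest
        merged.update(sstable)
    l1_files = defaultdict(dict)
    for token, value in merged.items():
        l1_files[stripe_for_token(token)][token] = value
    return dict(l1_files)                # stripe index -> per-vnode L1 file

l0 = [{10: "a", 300: "b"}, {10: "a2", 800: "c"}]
l1 = compact_l0_to_l1(l0)
# token 10 lands in stripe 0, 300 in stripe 1, 800 in stripe 3:
# the compaction wrote three per-vnode L1 files from two global L0 files.
```

This also shows why flushing smaller L0 files helps the LCS case: the fewer stripes an L0 file spans, the fewer per-vnode L1 outputs each L0 -> L1 compaction has to rewrite.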