On Mon, 2018-03-12 at 22:22 +0100, Goffredo Baroncelli wrote:
> Unfortunately no, the likelihood might be 100%: there are some
> patterns which trigger this problem quite easily. See the link which
> I posted in my previous email. There was a program which creates a
> bad checksum (in COW+DATASUM mode), and the file became unreadable.

But that rather seems like a plain bug?!
There's no reason that would conceptually make checksumming+notdatacow
impossible. AFAIU, the conceptual thing would be:

- data is written in nodatacow => thus a checksum must be written as
  well, so write it
- what can then of course happen is:
  - both csum and data are written => fine
  - csum is written but data not, and then some crash => the csum will
    show that => fine
  - data is written but csum not, and then some crash => the csum will
    give a false positive

Still, better a few false positives than many unnoticed data
corruptions and no true RAID repair.

> If you cannot know if a checksum is bad or the data is bad, the
> checksum is not useful at all!

Why not? It's anyway only uncertain in the case of a crash... and it
at least tells you that something is fishy.

A program which cares about its data will ensure its own journaling
means and can simply recover by those... or users could just roll in
a backup. Or one could provide some API/userland tool to recompute
the csums of the affected file (and possibly live with bad data).

> If I read correctly what you wrote, it seems that you consider a
> "minor issue" the fact that the checksum is not correct. If you
> accept the possibility that a checksum might be wrong, you won't
> trust the checksum anymore; so the checksum becomes useless.

Compared to having no checksumming at all in the nodatacow case,
there's simply no disadvantage, because without checksums you never
have any idea whether your data is correct or not. Checksumming +
notdatacow, which can give a false positive on a crash when the data
was written correctly but the checksum wasn't, at least covers all
the other cases of data corruption (silent data corruption; csum
written, but data not or only partially, in case of a crash).

> Again, you are assuming that the likelihood of having a bad checksum
> is low. Unfortunately this is not true. There are patterns which
> exploit this bug with a likelihood=100%.
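The three outcomes above can be sketched with a toy model (Python,
purely illustrative; btrfs actually uses crc32c over fixed-size
blocks, and this is in no way its on-disk logic):

```python
# Toy model of an in-place (nodatacow-style) update: data and its
# checksum are written as two separate steps, and a crash may land
# between them, leaving the two inconsistent.
import zlib


def verify(data: bytes, stored_csum: int) -> bool:
    """Return True if the stored checksum matches the data on disk."""
    return zlib.crc32(data) == stored_csum


old = b"old contents"
new = b"new contents"

# Case 1: both data and csum written => verification passes.
assert verify(new, zlib.crc32(new))

# Case 2: csum written but data write lost in a crash =>
# mismatch is detected, and the stale data is flagged (as intended).
assert not verify(old, zlib.crc32(new))

# Case 3: data written but csum write lost in a crash =>
# false positive: the data is actually good, yet it gets flagged.
assert not verify(new, zlib.crc32(old))
```

Only case 3 is the false positive being argued about, and it can only
arise from a crash between the two writes.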
Okay, I don't understand why this would be so, and I wouldn't have
assumed that the IO pattern can affect it heavily... but I'm not
really a btrfs expert. My blind assumption would have been that
writing an extent of data takes much longer to complete than writing
the corresponding checksum. Even if not... it should only be a
problem in case of a crash during that, and then I'd still prefer to
get the false positive rather than bad data.

Anyway... it's not going to happen, so the discussion is pointless.
I think people can probably use dm-integrity (which, btw, does no CoW
either (IIRC) and can still provide integrity... ;-) ) to see whether
their data is valid. Not nice, but since it won't change in btrfs, a
possible alternative.

Cheers,
Chris.