On 14/08/17 15:23, Austin S. Hemmelgarn wrote:
> Assume you have higher level verification.
But almost no applications do. In real life, the decision-making and correction process will be manual and labour-intensive (for example, running fsck on a virtual disk, or restoring a file from backup).

> Would you rather not be able to read the data regardless of if it's
> correct or not, or be able to read it and determine yourself if it's
> correct or not?

It must be controllable on a per-file basis, of course. For the small number of files where the application can both spot the problem and correct it (for example, if it keeps a journal), the current behaviour could be used. But on MY system I would absolutely **always** select the first option (-EIO). I need to know that a potential problem may have occurred, and I will take manual action to decide what to do.

Of course, this also needs a special utility (as Christoph proposed) to force the read (so that I can examine the data) and to reset the checksum (although that is presumably as simple as rewriting the data). This is what happens normally with any filesystem when a disk block goes bad, but with the additional benefit of being able to examine a "possibly valid" version of the data block before overwriting it.

> Looking at this from a different angle: Without background, what would
> you assume the behavior to be for this? For most people, the assumption
> would be that this provides the same degree of data safety that the
> checksums do when the data is CoW.

Exactly. The naive expectation is that turning off datacow does not stop the bitrot checking from working. Also, the naive expectation (for any filesystem operation) is that if there is any doubt about the reliability of the data, an error is reported for the user to deal with.
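P.S. For anyone wondering what the "higher level verification" mentioned above would actually involve, here is a minimal sketch (my own illustration, not anything from btrfs or this thread): the application stores a SHA-256 digest in a sidecar file and refuses to hand back data that fails the check, which is the application-level analogue of the filesystem returning -EIO. Function and file names here are made up for the example.

```python
import hashlib

def write_with_checksum(path, data):
    """Write data, and store its SHA-256 digest in a sidecar file."""
    with open(path, "wb") as f:
        f.write(data)
    with open(path + ".sha256", "w") as f:
        f.write(hashlib.sha256(data).hexdigest())

def verified_read(path):
    """Read the data back, raising OSError on a checksum mismatch --
    the application-level analogue of the filesystem's -EIO."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path + ".sha256") as f:
        expected = f.read().strip()
    if hashlib.sha256(data).hexdigest() != expected:
        raise OSError("checksum mismatch on " + path)
    return data
```

The point of the sketch is how much machinery even this trivial scheme needs (a second file, a read wrapper, an error path) - which is exactly why almost no applications do it, and why the filesystem doing it for you matters.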