On Mon, 2017-08-14 at 15:46 +0800, Qu Wenruo wrote:
> The problem here is, if you enable csum and even data is updated
> correctly, only metadata is trashed, then you can't even read out
> the correct data.

So what?
This problem occurs anyway *only* in case of a crash, and *only* if
nodatacow+checksumming were used.
A case in which, currently, the user can only hope that his data is
fine (unless higher layers provide some checksumming means[0]), or
has to recover from a backup anyway.

Intuitively I'd also say it's much less likely that the data (which
occupies far more space) is written correctly while the checksum is
not. Or is it?



[0] When I investigated this back when the discussion first came up
and some list member claimed that the typical cases (DBs, VM images)
would do their own checksumming anyway, I came to the conclusion that
most did not even support it, and even where they did, it was not
enabled by default and not really *full* checksumming in most cases.



> As btrfs csum checker will just prevent you from reading out any
> data which doesn't match with csum.
As I've said before, a tool could be provided that re-computes the
checksums (making the data accessible again)... or one could simply
mount the fs with nochecksum or some other special option that allows
bypassing any checks.
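
Just to sketch what the first step of such a tool would conceptually
be: below is a minimal Python illustration that computes per-block
crc32c values (btrfs' default csum algorithm) over a file's data,
assuming the common 4 KiB sectorsize. It only shows the checksum
computation itself; an actual repair tool would of course have to
write the results into the csum tree through the filesystem's own
structures, which this does not attempt. The script name and the
BLOCK constant are made up for the example.

#!/usr/bin/env python3
# Illustrative only: compute per-block crc32c (btrfs' default csum
# algorithm) over a file's data. Does NOT touch the csum tree.
import sys

BLOCK = 4096  # assumed btrfs sectorsize; may differ on your fs

def crc32c(data, crc=0):
    # Bitwise CRC-32C (Castagnoli), reflected, polynomial 0x82F63B78.
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def main(path):
    # Walk the file block by block and print offset -> csum pairs.
    with open(path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            print(f"{offset:#010x}: {crc32c(block):#010x}")
            offset += len(block)

if __name__ == "__main__":
    main(sys.argv[1])

(Run it as e.g. "python3 csum-sketch.py somefile"; the file name is
obviously just an example.)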

> Now it's not just data corruption, but data loss then.
I think the former is worse than the latter. The latter gives you a
chance of noticing it, and of either recovering from a backup,
regenerating the data (if possible), or manually marking the data as
"good" (though corrupted) again.


Cheers,
Chris.
