2012-01-13 4:26, Richard Elling wrote:
> On Jan 12, 2012, at 4:12 PM, Jim Klimov wrote:
>> Alternatively (opportunistically), a flag might be set
>> in the DDT entry requesting that a new write matching
>> this stored checksum should get committed to disk - thus
>> "repairing" all files which reference the block (at least,
>> stopping the IO errors).
>
> verify eliminates this failure mode.

Thinking about it, I've got more questions:

In this case, the DDT and block pointers contain multiple
references with a correct checksum, but the on-disk block is bad.
A newly written block has the same checksum, and verification
shows that the on-disk data differs byte-for-byte.

1) How does the write stack interact with checksums that do not
   match the data? Would any checksum be tested at all during the
   verification read of the existing data? (See the sketch below.)

2) It would make sense for a failed verification to result in the
   new block being committed to disk and a new DDT entry with the
   same checksum being created. I would normally expect this to
   become the new, unique block of a new file, with no influence
   on existing data (block chains). However, in the problematic
   case discussed here, this safe behavior would also mean not
   contributing to the repair of those existing block chains which
   include the mismatching on-disk block (as sketched below).
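
To make questions 1) and 2) concrete, here is a minimal sketch of
how I imagine the verify path might behave under the "safe" outcome
from 2). Everything here is invented for illustration (the toy block
size, fake_disk array, ddt_entry_t, toy_checksum, dedup_write_verify)
and is not the actual ZFS write pipeline; in particular, whether the
read of the stored copy re-checks its own checksum is exactly what
question 1) asks about.

/*
 * Illustrative sketch only -- not illumos/ZFS source.  Simplified
 * structures; a "dva" is just an index into a fake disk array.
 */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define BLKSZ 4096                       /* toy block size */

static uint8_t fake_disk[16][BLKSZ];     /* toy on-disk storage */

typedef struct ddt_entry {
    uint64_t checksum;                   /* checksum stored in the DDT */
    uint64_t refcount;                   /* references to the stored block */
    int      dva;                        /* location of the stored copy */
} ddt_entry_t;

/* Toy checksum standing in for fletcher/sha256. */
static uint64_t toy_checksum(const uint8_t *buf)
{
    uint64_t c = 0;
    for (int i = 0; i < BLKSZ; i++)
        c = c * 31 + buf[i];
    return c;
}

/*
 * Dedup write with verify, as I understand the intent: if the
 * checksum matches an existing DDT entry, read the stored copy
 * back and compare it byte-for-byte before bumping the refcount.
 * On a mismatch, commit the new data to a fresh location and leave
 * the existing entry (and all block chains pointing at it) alone --
 * the "safe but non-repairing" behavior from question 2).
 */
static int dedup_write_verify(ddt_entry_t *dde, const uint8_t *newdata,
    int next_free_dva)
{
    if (dde->refcount != 0 && toy_checksum(newdata) == dde->checksum) {
        /*
         * Question 1): is the stored copy's own checksum re-tested
         * on this read?  Here only the byte comparison is done.
         */
        if (memcmp(fake_disk[dde->dva], newdata, BLKSZ) == 0) {
            dde->refcount++;             /* true duplicate: no new write */
            return dde->dva;
        }
    }
    /* Mismatch (or no entry): write a unique copy at a new location. */
    memcpy(fake_disk[next_free_dva], newdata, BLKSZ);
    return next_free_dva;
}

int main(void)
{
    uint8_t good[BLKSZ] = { 1, 2, 3 };
    ddt_entry_t dde = { toy_checksum(good), 1, 0 };

    memcpy(fake_disk[0], good, BLKSZ);   /* stored copy is intact here... */
    fake_disk[0][100] ^= 0xff;           /* ...now simulate on-disk rot */

    int dva = dedup_write_verify(&dde, good, 1);
    printf("new write landed at dva %d, refcount now %llu\n",
        dva, (unsigned long long)dde.refcount);
    return 0;
}

With the stored copy deliberately corrupted, this prints that the
new write lands at a fresh DVA and the refcount stays at 1 -- i.e.
the existing, broken block chains are not repaired.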

Either I misunderstand some of the above, or I fail to see how
verification would eliminate this failure mode (namely, as per my
suggestion, replacing the bad block with a good one so that all
references are updated and the block chains -> files are fixed in
one shot).
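
And here, reusing BLKSZ, fake_disk, ddt_entry_t and toy_checksum
from the sketch above, is the one-shot repair I am suggesting --
again purely hypothetical, not existing ZFS behavior: if the
incoming write matches the stored checksum, but the stored copy
both differs byte-for-byte and fails its own checksum, overwrite
the stored copy at the same DVA, so that every block pointer which
already references it is healed by that single write.

/*
 * Hypothetical "repair on rewrite" variant -- my suggestion, not
 * existing ZFS behavior.  Same toy structures as the previous sketch.
 */
static int dedup_write_repair(ddt_entry_t *dde, const uint8_t *newdata,
    int next_free_dva)
{
    if (dde->refcount != 0 && toy_checksum(newdata) == dde->checksum) {
        if (memcmp(fake_disk[dde->dva], newdata, BLKSZ) == 0) {
            dde->refcount++;                     /* ordinary dedup hit */
            return dde->dva;
        }
        if (toy_checksum(fake_disk[dde->dva]) != dde->checksum) {
            /*
             * The stored copy is provably rotten (it no longer
             * matches its own recorded checksum), while the new
             * data does match.  Overwrite it in place: every
             * existing reference to this DVA is healed by this
             * single write.
             */
            memcpy(fake_disk[dde->dva], newdata, BLKSZ);
            dde->refcount++;
            return dde->dva;
        }
        /*
         * Otherwise it is a genuine checksum collision between two
         * different, healthy blocks: fall through and keep them apart.
         */
    }
    memcpy(fake_disk[next_free_dva], newdata, BLKSZ);
    return next_free_dva;
}

(I realize an in-place overwrite like this conflicts with
copy-on-write, so in practice it would presumably have to go through
something more like the self-healing/resilver path; the sketch only
shows the decision logic I have in mind.)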

Would you please explain?
Thanks,
//Jim Klimov
