On Oct 1, 2010, at 16:20, Douglas E. Engert wrote:

> On 10/1/2010 4:35 AM, Harald Barth wrote:
>> 
>> Another way to tackle the data corruption issue in the AFS case would
>> be to add checksum functionality to the fileserver backend. In
>> contrast to NFS, we have the advantage that no one reads the data
>> directly from the file system but always through the client.
> 
> ZFS stores the checksum of a block, not in the block itself, but in a higher-level
> block. This makes it possible to detect, on read-back, that a block failed to be
> written. Other file systems could not detect this, as they would read old data
> with an old, matching checksum.

We've also seen disks randomly returning a different sector than the one
requested.

> Keep this in mind if checksums are added to AFS.

This would be an awesome feature to have. Presumably, for each file one would 
keep an auxiliary file with, say, 4 or 8 bytes of checksum for each 512-byte 
block of payload? Or 64 bytes of Hamming codes, allowing single-bit error 
correction and double-bit error detection?
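For what it's worth, here is a minimal sketch of the auxiliary-checksum idea, assuming a 4-byte CRC-32 per 512-byte block kept in a sidecar file. It's illustrative only, not a proposal for the actual fileserver format; file names, block size, and checksum choice are all assumptions:

```python
import struct
import zlib

BLOCK = 512  # payload block size (assumption, per the 512-byte example above)

def write_checksums(data_path, sum_path):
    # Write one 4-byte CRC-32 per 512-byte block to the auxiliary file.
    with open(data_path, "rb") as f, open(sum_path, "wb") as out:
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            out.write(struct.pack(">I", zlib.crc32(block) & 0xFFFFFFFF))

def verify_checksums(data_path, sum_path):
    # Return the indices of blocks whose stored CRC no longer matches.
    bad = []
    with open(data_path, "rb") as f, open(sum_path, "rb") as sums:
        index = 0
        while True:
            block = f.read(BLOCK)
            if not block:
                break
            (stored,) = struct.unpack(">I", sums.read(4))
            if stored != (zlib.crc32(block) & 0xFFFFFFFF):
                bad.append(index)
            index += 1
    return bad
```

A plain CRC can only detect corruption; the Hamming-code variant would additionally allow repairing single-bit flips at the cost of wider per-block codes.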

-- 
Stephan Wiesand
DESY -DV-
Platanenallee 6
15738 Zeuthen, Germany
