Andrew Lentvorski wrote:
10^15 bits / 8 bits per byte = 0.125 * 10^15 bytes = 125 * 10^12 bytes
Gahhhh. 10^12 is *Tera*. Stupid. About 100 Terabytes. That's more
reasonable but kinda scary. ZFS with its internal checksums and
recovery is still looking pretty good.
A lot of filesystems do this. XFS, for one, does. Any good extent-based
filesystem *should* perform block hashes. ext2/3 do *not*.
As a side note, when doing rough approximations, taking 1 byte as 10
bits is normally "close enough".
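Both conversions above can be checked in a couple of lines (a minimal sketch; "terabytes" here means decimal terabytes, 10^12 bytes):

```python
# Back-of-envelope: convert 10^15 bits to bytes, exactly and with the
# "1 byte ~= 10 bits" shortcut, which roughly absorbs on-disk framing overhead.
bits = 10**15

exact_bytes = bits / 8        # 8 data bits per byte
rough_bytes = bits / 10       # ~10 recorded bits per byte, rounder numbers

print(exact_bytes / 10**12)   # 125.0 -> 125 decimal terabytes
print(rough_bytes / 10**12)   # 100.0 -> "about 100 Terabytes"
```

The rough figure lands within about 25% of the exact one, which is plenty for this kind of estimate.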
This depends on the disk encoding method as well. For each byte written
to disk, there are usually an additional two bits for parity and three
bits for start/stop code. Hard disks are actually *very* inaccurate at
the media level; that many flux transitions in such a small areal space
(ok, well the linear part of areal space, anyway) is hard to keep
accurate. At the end of every sector there are also a few bytes for some
sort of EDAC, such as a Reed-Solomon code; your 512-byte sectors are
typically 520 bytes on-disk. A 512-byte sector holds 4096 bits of user data.
In actuality, though, the number of recorded bits is much higher -- 520
recording units per sector times 13 bits per recording unit = 6760 bits
per sector. That means almost 40% of the recorded bits are overhead!
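The sector arithmetic works out as follows (a sketch using only the figures quoted above: 520 on-disk bytes per 512-byte sector, 13 recorded bits per byte):

```python
# 8 data bits + 2 parity bits + 3 start/stop bits = 13 recorded bits per byte.
data_bits = 512 * 8                 # 4096 user-visible bits per sector
raw_bits = 520 * 13                 # 6760 recorded bits per sector

# Overhead as a fraction of what actually hits the platter.
overhead = (raw_bits - data_bits) / raw_bits
print(raw_bits)                     # 6760
print(round(overhead * 100, 1))     # 39.4 -> "almost 40% overhead"
```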
Fortunately, we have things like EPRML, (2,7) RLL, and ZBR to increase
the areal density of the disks. Unfortunately, however, the closer
together these flux transitions become, the more unreliable the disks
become. So the magnetic signature has to be written more weakly, which
in turn decreases the S/N ratio and makes EPRML's job harder.
Fortunately, we have physicists to figure out these problems for us.
(Note that I've completely left out other, older types of disks --
classic disks didn't use any of this advanced technology.)
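To make the RLL idea concrete, here is a sketch of a (2,7) RLL encoder. The translation table is the commonly published variable-length one attributed to IBM (Franaszek), not something taken from this post, so treat the specific codewords as an illustrative assumption:

```python
# Classic (2,7) RLL translation table: variable-length data words map to
# codewords twice as long, chosen so that every run of 0s between 1s in
# the channel stream is at least 2 and at most 7 bits long.
TABLE = {
    "10":   "0100",
    "11":   "1000",
    "000":  "000100",
    "010":  "100100",
    "011":  "001000",
    "0010": "00100100",
    "0011": "00001000",
}

def rll_27_encode(bits: str) -> str:
    """Greedily translate a data bit string into its (2,7) RLL channel code."""
    out, i = [], 0
    while i < len(bits):
        for length in (2, 3, 4):        # the data words form a prefix-free set
            word = bits[i:i + length]
            if word in TABLE:
                out.append(TABLE[word])
                i += length
                break
        else:
            raise ValueError(f"input not parseable at offset {i}")
    return "".join(out)

print(rll_27_encode("10110010"))        # 0100100000100100
```

Keeping at least two 0s between 1s is what holds adjacent flux transitions apart on the platter; capping the run at seven keeps enough transitions for the read clock to stay synchronized.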
-Kelsey
--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list