On 11/08/10 16:08, Svein Skogen (Listmail account) wrote:
On 08.11.2010 16:37, Arthur Chance wrote:
On 11/08/10 13:52, krad wrote:
On 6 November 2010 21:38, Roland Smith<rsm...@xs4all.nl>   wrote:

On Sat, Nov 06, 2010 at 02:30:16PM -0600, Chad Perrin wrote:
Having said all that, it really depends on whether you need the extra
features of ZFS. Personally, I can't see how anyone with any important
data can do without checksumming.

I guess that depends on what you're doing with the data and what kind
of external tools you have in place to protect/duplicate it in case of
a problem.

The GEOM_ELI class provides optional authentication/checksumming. See
geli(8), especially the -a option.
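For reference, enabling data authentication when initialising a geli provider looks roughly like this (the device name is purely illustrative; check geli(8) for the full list of supported HMAC algorithms and options):

```shell
# Initialise a provider with HMAC/SHA256 data authentication.
# /dev/ada1p1 is an example device; adjust for your system.
geli init -a HMAC/SHA256 /dev/ada1p1
geli attach /dev/ada1p1
```

With -a set, geli verifies the HMAC on every read and returns an error for sectors that fail authentication, rather than silently handing back corrupt data.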

Roland
--
R.F.Smith
http://www.xs4all.nl/~rsmith/
[plain text _non-HTML_ PGP/GnuPG encrypted/signed email much
appreciated]
pgp: 1A2B 477F 9970 BA3C 2914  B7CE 1277 EFB0 C321 A725 (KeyID:
C321A725)


I'm not sure whether that would be a viable replacement, as it has to
be a fairly good checksum to avoid clashes, whilst also being quick so
it doesn't adversely affect disk performance. Also, what does it do if
it detects that the checksum doesn't match?

Good point. Geli uses a standard cryptographic hash (HMAC/SHA256 is
recommended) as it's all about authentication in the face of a
potentially malicious attacker, and that's fairly expensive. ZFS by
default uses the fletcher2 checksum, which is simple and fast, as it's
only there to make sure that hardware hasn't accidentally mangled your
data.
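The cheapness comes from the Fletcher construction itself: just two running sums per pass, no cryptographic mixing. A simplified Fletcher-style checksum sketch in Python (this is the general idea, not ZFS's exact on-disk fletcher2 algorithm, which runs two such lanes in parallel):

```python
import struct

def fletcher_checksum(data: bytes):
    """Simplified Fletcher-style checksum over 64-bit words.

    Two running sums: `a` accumulates the words themselves, while `b`
    accumulates the running values of `a`, so a swapped or corrupted
    word perturbs `b` even when `a` happens to be unchanged.
    NOTE: a sketch of the technique only, not ZFS's fletcher2.
    """
    # Pad to a multiple of 8 bytes so we can read whole 64-bit words.
    if len(data) % 8:
        data += b"\0" * (8 - len(data) % 8)
    a = b = 0
    mask = (1 << 64) - 1  # emulate 64-bit wraparound arithmetic
    for (word,) in struct.iter_unpack("<Q", data):
        a = (a + word) & mask
        b = (b + a) & mask
    return a, b
```

Each word costs only two additions, which is why this class of checksum barely dents disk throughput, whereas an HMAC over the same block is orders of magnitude more work.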

But it's still not capable of true forward error correction. If we are
to embark upon creating a new solution, using something that is cheap
for "normal cases" but can still be used (albeit more expensively) for
error recovery would (imho) be better, even if that means we get less
net storage out of the gross pool (it could perhaps be configurable?).

Presuming you're talking about ZFS, the hash isn't intended to correct hardware errors, it's only there to detect them. Correction comes from mirroring or the use of RAIDZ{1,2,3}. (I have personal experience of how well that works, as I had a disk in a RAIDZ array go bad suddenly, and I didn't lose any data.) Any new solution would almost certainly mimic ZFS's approach of arranging the data as a Merkle tree, and using multiple copies or N out of M shares for correction. I'm not sure GEOM's block orientation fits well with Merkle trees, though I'd be happy to be corrected by a GEOM expert.
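To illustrate the Merkle-tree idea: each leaf is the hash of a data block and each interior node hashes its children, so corruption anywhere below propagates up to the root. A minimal sketch in Python (ZFS actually stores checksums in parent block pointers rather than building a tree like this, so treat it as the concept only):

```python
import hashlib

def merkle_root(blocks):
    """Compute the root hash of a Merkle tree over a list of data blocks.

    Each leaf is the SHA-256 of a block; each interior node is the
    SHA-256 of its two children's hashes concatenated. Any change to
    any block changes the root. Conceptual sketch, not ZFS's layout.
    """
    level = [hashlib.sha256(b).digest() for b in blocks]
    while len(level) > 1:
        if len(level) % 2:          # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

Verifying from the root downward is what lets ZFS pinpoint *which* copy of a block is bad, and then repair it from a mirror or RAIDZ reconstruction, rather than merely noticing that something somewhere is wrong.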

--
"Although the wombat is real and the dragon is not, few know what a
wombat looks like, but everyone knows what a dragon looks like."

        -- Avram Davidson, _Adventures in Unhistory_
_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
