> The Validated Execution project is investigating how to utilize ZFS
> snapshots as the basis of a "validated" filesystem.  Given that the
> blocks of the dataset form a Merkle tree of hashes, it seemed
> straightforward to validate the individual objects in the snapshot and
> then sign the hash of the root as a means of indicating that the
> contents of the dataset were validated.

Yep, that would work.
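For the archives, the scheme described above can be sketched in a few lines of ordinary Python. This is a toy model, not ZFS code: SHA-256 stands in for the pool's per-block checksum algorithm, and "signing" is reduced to recording the root hash (in practice you'd sign that value with a real private key).

```python
import hashlib

def h(data: bytes) -> bytes:
    # SHA-256 stands in for the pool's per-block checksum algorithm.
    return hashlib.sha256(data).digest()

def merkle_root(blocks: list[bytes]) -> bytes:
    """Fold leaf hashes pairwise up to a single root hash."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:  # odd count: carry the last hash up a level
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# "Sign" the root at snapshot time (here: just record it).
blocks = [b"block0", b"block1", b"block2"]
signed_root = merkle_root(blocks)

# Validation: recompute the tree from the snapshot and compare.
assert merkle_root(blocks) == signed_root
# Any change to logical content changes the root.
assert merkle_root([b"block0", b"tampered", b"block2"]) != signed_root
```

The point is just that the signature over one root hash covers every block transitively, so validation is a single comparison after recomputing the tree.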

> Unfortunately, the block hashes are used to assure the integrity of the
> physical representation of the dataset.  Those hash values can be
> updated during scrub operations, or even during data error recovery,
> while the logical content of the dataset remains intact.

Actually, that's not true -- at least not today.  Once you've taken a
snapshot, the content will never change.  Scrub, resilver, and self-heal
operations repair damaged copies of data, but they don't alter the
data itself, and therefore don't alter its checksum.
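To make the physical/logical distinction concrete, here's a toy model of self-heal (plain Python, SHA-256 as the checksum; none of this is actual ZFS code): scrub repairs a damaged *copy* from a good one, and the checksum recorded at write time never changes because the logical content never changes.

```python
import hashlib

def checksum(data: bytes) -> bytes:
    # Stand-in for the checksum stored in the block pointer.
    return hashlib.sha256(data).digest()

# Two mirrored copies of one logical block, plus the checksum
# recorded when the block was written.
logical = b"snapshot data"
recorded = checksum(logical)
copies = [bytearray(logical), bytearray(logical)]

# Simulate media damage on one copy.
copies[1][0] ^= 0xFF
assert checksum(bytes(copies[1])) != recorded  # damage is detectable

# Scrub/self-heal: find a copy matching the recorded checksum
# and use it to repair the damaged one.
good = next(c for c in copies if checksum(bytes(c)) == recorded)
copies[1] = bytearray(good)

# Repair restored the copy; the recorded checksum is untouched.
assert all(checksum(bytes(c)) == recorded for c in copies)
```

So a signature over the root hash survives any number of scrubs and resilvers, for exactly this reason.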

This will change when we add support for block rewrite, which will
allow us to do things like migrate data from one device to another,
or to recompress existing data, which *will* affect the checksum.

You may be able to tolerate this by simply precluding it, if you're
targeting a restricted environment.  For example, do you need this
feature for anything other than the root pool?

Jeff
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss