>As soon as you have more than one disk in the equation, it is
>vital that the disks commit their data when requested, since otherwise
>the data on disk will not be in a consistent state.
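To see why an ignored cache flush matters, here is a toy sketch (my own illustration, not ZFS code): a transactional filesystem writes data blocks, flushes, then updates the root pointer ("uberblock"). If the disk acknowledges the flush without actually committing its cache, a crash can leave the root pointing at data that never reached stable storage.

```python
class Disk:
    """Toy disk with a volatile write cache (illustrative only)."""
    def __init__(self, honors_flush=True):
        self.stable = {}      # what survives a power loss
        self.cache = []       # volatile cache, list of (addr, data)
        self.honors_flush = honors_flush

    def write(self, addr, data):
        self.cache.append((addr, data))

    def flush(self):
        # A well-behaved disk commits everything before acknowledging;
        # a lying disk acknowledges immediately and commits at leisure.
        if self.honors_flush:
            self.stable.update(self.cache)
            self.cache = []

    def crash(self):
        # Simulate reordering: the lying disk happened to destage only
        # its most recent cached write before power was cut.
        if not self.honors_flush and self.cache:
            addr, data = self.cache[-1]
            self.stable[addr] = data
        self.cache = []

def commit(disk):
    disk.write("data_blk", "new contents")
    disk.flush()                           # barrier: data first...
    disk.write("uberblock", "-> data_blk")
    disk.flush()                           # ...then the root pointer

good, bad = Disk(True), Disk(False)
for d in (good, bad):
    commit(d)
    d.crash()

print(sorted(good.stable))  # ['data_blk', 'uberblock']: consistent
print(sorted(bad.stable))   # ['uberblock']: root points at missing data
```

The flush-between-writes ordering is the whole guarantee; once the disk lies about it, no amount of on-disk transactional logic can help.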

OK, but doesn't that refer only to the most recent data?
Why can I lose a whole 10 TB pool, including all the snapshots, given the
logging/transactional nature of ZFS?

Isn't the data in the snapshots set to read-only, so that all blocks holding
snapshotted data never change over time (and thus provide a secure "entry" to a
consistent point in time)?
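That intuition about snapshots can be sketched with a toy copy-on-write store (my illustration, not ZFS internals): a snapshot just pins the current block pointers, and later writes always allocate fresh blocks, so a snapshot's blocks are never overwritten.

```python
class CowStore:
    """Toy copy-on-write store with cheap snapshots (illustrative only)."""
    def __init__(self):
        self.blocks = {}      # block id -> data (never overwritten)
        self.next_id = 0
        self.live = {}        # filename -> block id
        self.snapshots = {}   # snapshot name -> {filename: block id}

    def write(self, name, data):
        # Copy-on-write: always allocate a new block for new data.
        bid, self.next_id = self.next_id, self.next_id + 1
        self.blocks[bid] = data
        self.live[name] = bid

    def snapshot(self, snapname):
        # A snapshot is just a copy of the pointer table, not the data.
        self.snapshots[snapname] = dict(self.live)

    def read(self, name, snap=None):
        table = self.snapshots[snap] if snap else self.live
        return self.blocks[table[name]]

fs = CowStore()
fs.write("f", "v1")
fs.snapshot("before")
fs.write("f", "v2")
print(fs.read("f"))            # 'v2'
print(fs.read("f", "before"))  # 'v1': the snapshot's block is untouched
```

Note the caveat this model makes visible: the snapshot only protects you if the pointer tables themselves made it to disk intact, which loops back to the flush-ordering question above.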

OK, these are probably somewhat short-sighted questions, but I'm trying to
understand how things can go wrong with ZFS and how issues like these happen.

On other filesystems, we have fsck as a last resort, or tools to recover data
from unmountable filesystems.
With ZFS I don't know of any of these, so it's that "will Solaris mount my ZFS
after the next crash?" question that frightens me a little.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss