> Host w continuously has a UFS mounted with read/write
> access.
> Host w writes to the file f/ff/fff.
> Host w ceases to touch anything under f.
> Three hours later, host r mounts the file system read-only,
> reads f/ff/fff, and unmounts the file system.

This would probably work for a non-journaled file system, because UFS will 
flush all of the modified data and inodes within the three-hour window 
(actually, much faster than that). I'm not sure it would work if journaling 
were enabled, though; I don't recall any timeout for pushing transactions out 
of the log. You could flush the journal with 'lockfs -f' before the delay, but 
it's still not 100% reliable.
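
For the writer side, here is a minimal sketch (my own, not part of the original 
scenario) of what host w could do to push the file out explicitly instead of 
relying on the periodic flush. The path f/ff/fff is from the scenario; the file 
contents and mode are made up. With logging enabled you would presumably still 
want a 'lockfs -f <mountpoint>' after this, before the three-hour wait.

/* Writer-side sketch for host w: write f/ff/fff and force it to disk.
 * Assumes the UFS file system is mounted read/write on host w.
 * With logging enabled, follow this with `lockfs -f <mountpoint>`
 * to flush the log before host r mounts read-only. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char *path = "f/ff/fff";              /* path from the scenario */
    const char buf[] = "data written by host w\n";

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    if (write(fd, buf, strlen(buf)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }
    /* Don't rely on the background flush: push the data and inode out now. */
    if (fsync(fd) < 0) {
        perror("fsync");
        close(fd);
        return 1;
    }
    close(fd);
    return 0;
}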

> a1) This scenario won't hurt w,
> a2) this scenario won't damage the data on the file system,

True (as long as R never mounts read/write).

> a3) this scenario won't hurt r, and
> a4) the read operation will succeed,

It could conceivably panic R if R sees an inconsistency on the file system. 
This is actually very unlikely in newer releases of UFS; nearly the only panics 
left (IIRC) are in cases where UFS is trying to allocate or deallocate a block 
and discovers an inconsistency in the bitmap. (Corrupt directories and inodes 
will simply log a warning and return EIO.) I'd be very cautious, though.
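
On the reader side, a sketch (again mine, under the same assumptions) of how 
host r's read could treat that EIO case as a detected corruption rather than 
assuming success. The mount and unmount themselves would be done separately 
with the usual read-only mount and umount.

/* Reader-side sketch for host r: read f/ff/fff from the read-only mount
 * and treat EIO as "UFS found a corrupt directory or inode" rather than
 * assuming the read succeeds. */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *path = "f/ff/fff";   /* path from the scenario */
    char buf[8192];
    ssize_t n;

    int fd = open(path, O_RDONLY);
    if (fd < 0) {
        if (errno == EIO)
            fprintf(stderr, "open: UFS returned EIO (likely corrupt directory/inode)\n");
        else
            perror("open");
        return 1;
    }

    while ((n = read(fd, buf, sizeof(buf))) > 0) {
        /* data read successfully; a real reader would process buf here */
    }
    if (n < 0) {
        if (errno == EIO)
            fprintf(stderr, "read: UFS returned EIO (likely corrupt inode or data block)\n");
        else
            perror("read");
        close(fd);
        return 1;
    }

    close(fd);
    return 0;
}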

QFS in multi-reader mode solves this problem very easily: the readers cache 
very little file system metadata (the timing is tunable) and will invalidate 
their data caches if the metadata describing the file changes. I don't know 
whether Sun prices multi-reader QFS lower than shared QFS, but it's worth 
asking your salesperson.
 
 