On Mon, 18 Jan 2010, Tristan Ball wrote:

Is there a way to check the record size of a given file, assuming that the filesystem's recordsize was changed at some point?

This would be problematic, since a file may consist of records of different sizes (at least I think so). If the recordsize was changed after the file was already created, then new/updated parts would use the new record size.
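As a quick sanity check, the block size a filesystem advertises for a file can be read from `stat`. This is only a hedged sketch: on ZFS the reported `st_blksize` typically reflects the file's current block size or the dataset's recordsize, and it cannot reveal a mix of record sizes inside one file; for that you would need a low-level tool such as `zdb`.

```python
import os

def reported_block_size(path):
    """Return the preferred I/O block size the filesystem reports for `path`.

    Note: this is the kernel's stat() st_blksize field. On ZFS it usually
    tracks the file's block size / dataset recordsize, but it is not a
    per-record inspection tool.
    """
    return os.stat(path).st_blksize

# Example: inspect this script itself.
print(reported_block_size(__file__))
```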

Also - am I right in thinking that if a 4K write is made to a filesystem block with a recordsize of 8K, the original block is read (assuming it's not in the ARC) before the new block is written elsewhere (the "copy" in copy-on-write)? That would be one of the reasons aligning application I/O size with the filesystem record size is a good thing: where such I/O is aligned, you remove the need for that initial read.

This is exactly right. There is a very large performance hit if the block to be updated is no longer in the ARC and the update does not align perfectly with the offset and size of the underlying block. Applications which are aware of this (and which expect the total working set to be much larger than the available cache) could choose to read and write more data than strictly required so that zfs does not need to read an existing block in order to update it. This also explains why the L2ARC can be so valuable: the read of the old block is cheap if the working set fits in the combined ARC and L2ARC.
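The read-modify-write cost described above can be sketched with a toy model. This is not ZFS code, just an illustration of a copy-on-write block store with a hypothetical 8K record size: a full, aligned overwrite needs no read, while a partial write must first fetch the old record so the untouched bytes can be carried into the new copy.

```python
RECORDSIZE = 8192  # hypothetical 8K recordsize for illustration

class ToyCowStore:
    """Toy copy-on-write block store; counts whole-record reads from 'disk'."""

    def __init__(self):
        self.blocks = {}  # record number -> bytes (the "disk")
        self.reads = 0    # how many old records had to be read back

    def write(self, offset, data):
        """Write `data` at byte `offset`, one record at a time."""
        while data:
            recno, recoff = divmod(offset, RECORDSIZE)
            take = RECORDSIZE - recoff
            chunk, data = data[:take], data[take:]
            if recoff == 0 and len(chunk) == RECORDSIZE:
                # Full, aligned overwrite: the old contents don't matter.
                old = b"\0" * RECORDSIZE
            else:
                # Partial record: must read the existing record first.
                self.reads += 1
                old = self.blocks.get(recno, b"\0" * RECORDSIZE)
            # Copy-on-write: the record is always rewritten whole.
            self.blocks[recno] = old[:recoff] + chunk + old[recoff + len(chunk):]
            offset += len(chunk)

store = ToyCowStore()
store.write(0, b"x" * RECORDSIZE)  # aligned 8K write: no read needed
store.write(4096, b"y" * 4096)     # 4K write into an 8K record: forces a read
print(store.reads)                 # -> 1
```

The same model shows the mitigation mentioned above: an application that widens its writes to full record boundaries avoids the read entirely.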

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
