On Wed, 20 May 2009, Wong, James (Nagi Long) wrote:

I understand ZFS has high write performance, even with lots of snapshots.
How about read performance?  To me, it will be very fragmented due to
the COW.  And no matter how carefully you allocate the blocks, you can
only put a SHARED block in the proximity of one particular snapshot; to
the other snapshots, the shared block won't be in proximity - a
fragmentation situation.

This is indeed an interesting issue. In practice, ZFS writes essentially unfragmented files for normal sequential writes, which results in unfragmented reads. It uses a "slab" allocator which pre-allocates large runs of blocks at a time. The default ZFS block size is quite large (128K), so even worst-case fragmentation causes much less disk seeking than on filesystems using tiny (4K or 8K) blocks. ZFS also buffers data to be written for a considerable time (up to 30 seconds) in the hope that the data can be laid out more contiguously when it is finally flushed to disk.
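A back-of-the-envelope sketch of the block-size point above (this is an illustrative model, not ZFS code; the 10 MiB file size is an arbitrary assumption, the 128K record size is the ZFS default mentioned above): if worst-case fragmentation costs roughly one seek per block, the seek count scales with file_size / block_size, so larger blocks mean far fewer seeks.

```python
# Hypothetical worst-case model: a fully fragmented file costs one
# seek per block, so seeks = ceil(file_size / block_size).
def worst_case_seeks(file_size, block_size):
    """Seeks needed to read a maximally fragmented file."""
    return -(-file_size // block_size)  # ceiling division

file_size = 10 * 1024 * 1024  # arbitrary example: a 10 MiB file

seeks_128k = worst_case_seeks(file_size, 128 * 1024)  # ZFS default recordsize
seeks_4k = worst_case_seeks(file_size, 4 * 1024)      # typical small-block FS

print(seeks_128k)  # 80
print(seeks_4k)    # 2560
```

So even if every 128K record lands somewhere different on disk, the read does 32x fewer seeks than the same file chopped into 4K blocks.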

Fragmentation does become an issue for applications which update individual ZFS blocks in place (such as a database) rather than re-writing the file, and also when the pool is extremely full. It is likewise an issue when the block size is set very small (e.g. 8K), but such a small block size is usually chosen intentionally for database purposes.
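The database case can be sketched with a toy copy-on-write model (an illustration only, not ZFS's actual allocator; the record count and update count are made up): each overwrite of a record allocates a fresh location instead of updating in place, so random updates scatter an initially contiguous file.

```python
# Toy COW model: record i initially lives at disk address i (contiguous).
# Every overwrite copies the record to a fresh address at the end of the
# pool, leaving a "hole" behind - database-style random updates fragment
# an initially contiguous file.
import random

NRECORDS = 100
layout = list(range(NRECORDS))  # logical record -> disk address, contiguous
next_free = NRECORDS            # next unallocated disk address

random.seed(0)                  # deterministic for the example
for _ in range(50):             # 50 random single-record updates
    i = random.randrange(NRECORDS)
    layout[i] = next_free       # COW: the new copy goes to a fresh address
    next_free += 1

# Count gaps: logically adjacent records whose disk addresses are no
# longer adjacent, i.e. places a sequential read must seek.
gaps = sum(1 for a, b in zip(layout, layout[1:]) if b != a + 1)
print(gaps)
```

A sequential read of the "same" file now seeks at every gap, which is the fragmentation the question asked about; re-writing the whole file sequentially would let the allocator lay it out contiguously again.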

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,    http://www.GraphicsMagick.org/
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss