On 9/26/06, Richard Elling - PAE <[EMAIL PROTECTED]> wrote:
> Chris Csanady wrote:
> > What I have observed with the iosnoop dtrace script is that the
> > first disks aggregate the single block writes, while the last disk(s)
> > are forced to do numerous writes every other sector.  If you would
> > like to reproduce this, simply copy a large file to a recordsize=4k
> > filesystem on a 4 disk RAID-Z.
>
> Why would I want to set recordsize=4k if I'm using large files?
> For that matter, why would I ever want to use a recordsize=4k, is
> there a database which needs 4k record sizes?

Sorry, I wasn't very clear about the reasoning for this.  It is not
something that you would normally do, but it generates just
the right combination of block size and stripe width to make the
problem very apparent.
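
To make the geometry concrete, here is a back-of-the-envelope sketch
in plain Python (not ZFS code; it just encodes my understanding of how
RAID-Z sizes an allocation: 512-byte sectors, one parity sector per row
of up to ndisks-1 data sectors, and the total rounded up to a multiple
of nparity+1 so no unusable single-sector gap is left behind):

    # Back-of-the-envelope RAID-Z1 allocation for one logical block.
    # Assumptions (mine, not lifted from the code): 512-byte sectors,
    # one parity sector per row of up to (ndisks - 1) data sectors,
    # and the total rounded up to a multiple of (nparity + 1) = 2.
    def raidz1_sectors(block_bytes, ndisks, sector=512):
        data = -(-block_bytes // sector)        # ceil: data sectors
        parity = -(-data // (ndisks - 1))       # ceil: one parity per row
        alloc = -(-(data + parity) // 2) * 2    # round up to multiple of 2
        return data, parity, alloc - data - parity, alloc

    data, parity, pad, alloc = raidz1_sectors(4096, ndisks=4)
    print(f"{data} data + {parity} parity + {pad} pad = {alloc} sectors "
          f"({alloc // 4} per disk)")
    # prints: 8 data + 3 parity + 1 pad = 12 sectors (3 per disk)

If I have that right, each 4k block consumes 12 sectors, three per
disk, one of them a pad.  And since 12 is a multiple of 4, consecutive
blocks keep starting on the same column, so as far as I can tell the
pad lands on the same disk every time; that disk's writes are broken
up by the holes while the others stay contiguous and aggregate nicely.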

It is also possible to encounter this on a filesystem with the
default recordsize, and I have observed the effect while extracting
a large archive of sources.  Still, it was never bad enough for my
uses to be anything more than a curiosity.  However, while trying
to rsync 100M ~1k files onto a 4-disk RAID-Z, Gino Ruopolo
seemingly stumbled upon this worst-case performance scenario.
(Though, unlike my example, it is also possible to end up with
holes in the second column.)
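
Just to put a number on Gino's case, using the same assumed
allocation rule as the sketch above:

    # Same assumed rule, applied to a ~1k file on a 4-disk raidz1.
    def raidz1_sectors(block_bytes, ndisks, sector=512):
        data = -(-block_bytes // sector)        # ceil: data sectors
        parity = -(-data // (ndisks - 1))       # ceil: one parity per row
        alloc = -(-(data + parity) // 2) * 2    # round up to multiple of 2
        return data, parity, alloc

    data, parity, alloc = raidz1_sectors(1024, ndisks=4)
    pad = alloc - data - parity
    print(f"~1k file: {data} data + {parity} parity + {pad} pad = {alloc} sectors")
    print(f"pad sectors alone are {pad / alloc:.0%} of the allocation")
    # prints: ~1k file: 2 data + 1 parity + 1 pad = 4 sectors
    #         pad sectors alone are 25% of the allocation

If that rule holds, only half of every allocation is file data, and
every single block leaves one stranded pad sector behind, which is why
I suspect this workload is about as bad as it gets.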

Also, even if the error is small, could these stranded sectors
throw off the space accounting enough to cause problems when
a pool is nearly full?

Chris
