Richard Elling wrote:
Erik Trimble wrote:
If you had known about the drive sizes beforehand, then you could have
done something like this:

Partition the drives as follows:

A:  1 20GB partition
B:  1 20GB & 1 10GB partition
C:  1 40GB partition
D:  1 40GB partition & 2 10GB partitions

then you do:

zpool create tank mirror Ap0 Bp0 mirror Cp0 Dp0 mirror Bp1 Dp1

and you get a total of 70GB of space. However, the performance on this
is going to be bad (as you frequently need to write to both partitions
on B & D, causing head seeks), though you can still lose up to 2 drives
before experiencing data loss.
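Spelled out with hypothetical Solaris device names (the cNtNdN names and slice numbers below are made up for illustration, and the slices would have to be created with format(1M) first), that layout might look roughly like:

  # A = c1t0d0 (20GB), B = c1t1d0 (20GB + 10GB slices),
  # C = c1t2d0 (40GB), D = c1t3d0 (40GB + 10GB slices)
  zpool create tank \
      mirror c1t0d0s0 c1t1d0s0 \
      mirror c1t2d0s0 c1t3d0s0 \
      mirror c1t1d0s1 c1t3d0s1
  zpool list tank    # should report roughly 70GB usable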

It is not clear to me that we can say performance will be bad
for stripes on single disks.  The reason is that ZFS dynamic
striping does not use a fixed interleave.  In other words, if
I write a block of N bytes to an M-way dynamic stripe, it is
not guaranteed that each device will get an I/O of N/M size.
I've only done a few measurements of this, and I've not completed
my analysis, but my data does not show the sort of thrashing one
might expect from a fixed stripe with small interleave.
 -- richard
That is correct, Richard. However, it applies to relatively small reads/writes, which do not exceed the maximum stripe size. That is probably the common case, but there is another issue here: even though not every disk will see an I/O on a given stripe access, there is still a relatively good chance that both partitions on the same disk get an I/O request. On average, I'd expect you don't improve much over a full-stripe I/O, and in either case it would be worse than a zpool that did not have multiple partitions on the same disk. Also, for large-file access - where you guarantee the need for full-stripe access - you are certainly going to thrash the disks.

Numbers would be nice, of course. :-)
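One quick way to get some (just a sketch, assuming a test pool named tank mounted at /tank; the file name and sizes are arbitrary):

  # watch per-vdev activity while streaming a large sequential write
  zpool iostat -v tank 5 &
  dd if=/dev/zero of=/tank/bigfile bs=128k count=80000
  # then compare the ops/bandwidth columns for the two slices that sit
  # on the same physical disk (e.g. Bp0 and Bp1 in the layout above)

That would at least show whether both slices on B (or D) are being hit during the same write bursts.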

--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA

