>>>>> "ca" == Carsten Aulbert <[EMAIL PROTECTED]> writes:

    ca> (a) Why the first vdev does not get an equal share
    ca> of the load

I don't know.  But if you don't add all the vdevs before writing
anything, there's no magic to make them balance themselves out.  Stuff
stays where it's written.  I'm guessing you did add them at the same
time, and they still filled up unevenly?

The 'zpool iostat' output you showed is the place I've found to see how
data is spread among vdevs.
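
If you want the per-vdev breakdown on an ongoing basis, the -v flag
should show it (just a sketch; 'tank' is a stand-in for your pool name,
and the trailing 5 is a sampling interval in seconds):

    # zpool iostat -v tank 5

That prints capacity plus ops/bandwidth per top-level vdev and per
disk, so you can watch whether new writes really are landing lopsidedly.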

    ca>  (b) Why is a large raidz2 so bad? When I use a
    ca> standard Linux box with hardware raid6 over 16 disks I usually
    ca> get more bandwidth and at least about the same small file
    ca> performance

Obviously there are all kinds of things going on, but the standard
answer is: traditional RAID5/6 doesn't have to do full-stripe I/O.
ZFS is more like FreeBSD's RAID3: it gets around the NVRAM-less RAID5
write hole by always writing a full stripe, which means all spindles
seek together and you get the seek performance of one drive (per vdev).
Linux RAID5/6 just gives up and accepts the write hole, AIUI, but
because its stripes are much fatter than a filesystem block, you'll
sometimes get the record you need by seeking only a subset of the
drives rather than all of them, which leaves the drives you didn't
seek free to fetch another record.
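
Back-of-envelope, taking the above at face value and assuming 7200rpm
drives good for roughly 100 random seeks/s each (illustrative numbers,
not measurements):

    16-disk raidz2, one vdev:  every small read seeks all the data
      disks together, so the whole vdev delivers on the order of
      100 random reads/s.

    16-disk RAID6, fat chunks: a small read usually lands on a single
      disk, so with enough reads in flight the array can approach
      14 x 100 = ~1,400 random reads/s.

That order-of-magnitude gap is the usual explanation for why one wide
raidz2 feels so much slower on small files than the same disks under
RAID6.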

If you're saying you get worse performance than a single spindle, I'm
not sure why.
