Frank Cusack wrote:
> On July 14, 2008 9:54:43 PM -0700 Frank Cusack <[EMAIL PROTECTED]> wrote:
>
>> On July 14, 2008 7:49:58 PM -0500 Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
>>
>>>> It sounds like they're talking more about traditional hardware RAID
>>>> but is this also true for ZFS?  Right now I've got four 750GB drives
>>>> that I'm planning to use in a raid-z 3+1 array.  Will I get markedly
>>>> better performance with 5 drives (2^2+1) or 6 drives 2*(2^1+1)
>>>> because the parity calculations are more efficient across N^2
>>>> drives?
>>>>
>>> With ZFS and modern CPUs, the parity calculation is surely in the noise
>>> to the point of being unmeasurable.
>>>
>> I would agree with that.  The parity calculation has *never* been a
>> factor in and of itself.  The problem is having to read the rest of
>> the stripe and then having to wait for a disk revolution before writing.
>>
>
> oh, you know what though?  raid-z had this bug, or maybe we should just
> call it a behavior, where you only want an {even,odd} number of drives
> in the vdev.  I can't remember if it was even or odd.  Or maybe it was
> that you wanted only N^2+1 disks, choose any N.  Otherwise you had
> suboptimal performance in certain cases.  I can't remember the exact
> details but it wasn't because of "more efficient parity calculations".
> Maybe something about block sizes having to be powers of two and the
> wrong number of disks forcing a read?
>
> Anybody know what I'm referring to?  Has it been fixed?  I see the
> zfs best practices guide says to use only odd numbers of disks, but
> it doesn't say why.  (don't you hate that?)
>
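
FWIW, Bob is right that the parity math itself is trivial: raid-z single parity is just an XOR across the data columns, and rebuilding a lost column is the same XOR again. A toy sketch (illustrative only, not the actual ZFS code):

```python
import os

# Toy raid-z-style single parity (illustrative sketch, not ZFS internals):
# parity is the XOR of the data columns, and a lost column is recovered
# by XORing parity with the surviving columns.
def xor_columns(cols):
    out = bytearray(len(cols[0]))
    for col in cols:
        for i, b in enumerate(col):
            out[i] ^= b
    return bytes(out)

data = [os.urandom(4096) for _ in range(3)]          # 3 data disks (a 3+1 vdev)
parity = xor_columns(data)                           # the "+1" disk
recovered = xor_columns([parity, data[0], data[2]])  # pretend disk 1 died
print(recovered == data[1])   # True: parity rebuilds the lost column
```

Even in interpreted Python this runs in milliseconds; on a modern CPU doing it in C, it disappears next to waiting on a disk revolution.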

See the "Metaslab alignment" thread.
http://www.opensolaris.org/jive/thread.jspa?messageID=60241#60241
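
The short version: with a power-of-two recordsize, only a power-of-two number of *data* disks splits a block into equal, aligned chunks; any other width leaves a remainder. A back-of-the-envelope sketch (constants are illustrative; 128K records and 512-byte sectors assumed):

```python
# Why 2^N data disks divide a power-of-two recordsize evenly across a
# raid-z vdev. Illustrative arithmetic only, not ZFS allocation code.
RECORDSIZE = 128 * 1024   # default ZFS recordsize, bytes
SECTOR = 512              # assumed sector size, bytes

def sectors_per_disk(data_disks):
    """How one full record's sectors spread over the data disks."""
    total_sectors = RECORDSIZE // SECTOR        # 256 sectors
    q, r = divmod(total_sectors, data_disks)
    return q, r   # r > 0 means some disks carry an extra, uneven sector

for d in (2, 3, 4, 5, 6, 7, 8):
    q, r = sectors_per_disk(d)
    note = "even split" if r == 0 else f"{r} disk(s) get an extra sector"
    print(f"{d} data disks: {q} sectors/disk, {note}")
```

With 2, 4, or 8 data disks the remainder is zero; with 3, 5, 6, or 7 it isn't, which is where the odd allocation/alignment behavior in that thread comes from. That's the real story behind the N^2+1-ish (really 2^N data disks plus parity) advice, not faster parity math.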
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
