On May 15, 2011, at 10:18 AM, Jim Klimov <jimkli...@cos.ru> wrote:

> Hi, Very interesting suggestions as I'm contemplating a Supermicro-based 
> server for my work as well, but probably on a lower budget, as a backup store 
> for an aging Thumper (not as its superior replacement).
> 
> Still, I have a couple of questions regarding your raidz layout 
> recommendation.
> 
> On one hand, I've read that as current drives get larger (while their random 
> IOPS/MBps don't grow nearly as fast with new generations), it is becoming 
> more and more reasonable to use RAIDZ3 with 3 redundancy drives, at least for 
> vdevs made of many disks - a dozen or so. When a drive fails, you still have 
> two redundant parities, and with a resilver window expected to be in the 
> range of hours if not days, I would want that airbag, to say the least. You 
> know, failures rarely come one by one ;)

Not to worry. If you add another level of redundancy, the data protection
improves by orders of magnitude. If the resilver time increases, the effect
on data protection is reduced only by a relatively small divisor. To get some
sense of the scale, drive MTBF specs are often around 1,000,000 hours and
there are only 24 hours in a day.
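To put rough numbers on that, here is a back-of-envelope MTTDL sketch using the classic Markov-style approximation (data loss requires parity+1 overlapping failures, each landing inside the repair window). The 1,000,000-hour MTBF and the resilver times are illustrative assumptions, not measurements:

```python
def mttdl(n_disks, parity, mtbf_h, mttr_h):
    """Approximate mean time to data loss (hours) for an n-disk vdev
    that tolerates `parity` failures: MTBF^(p+1) / (MTTR^p * n*(n-1)*...)."""
    num = mtbf_h ** (parity + 1)
    den = mttr_h ** parity
    for k in range(parity + 1):
        den *= (n_disks - k)
    return num / den

MTBF = 1_000_000.0            # hours; a typical spec-sheet figure (assumption)
for mttr in (12.0, 48.0):     # resilver window: half a day vs. two days
    z2 = mttdl(12, 2, MTBF, mttr)
    z3 = mttdl(12, 3, MTBF, mttr)
    print(f"MTTR {mttr:>4.0f} h: raidz2 ~{z2:.2e} h, raidz3 ~{z3:.2e} h, "
          f"ratio ~{z3 / z2:.0f}x")
```

Note the ratio: going from raidz2 to raidz3 multiplies MTTDL by roughly MTBF/((n-3)*MTTR), thousands of times even with a two-day resilver, whereas doubling the resilver window only divides MTTDL by a small power of two.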

> On another hand, I've recently seen many recommendations that in a RAIDZ* 
> drive set, the number of data disks should be a power of two - so that ZFS 
> blocks/stripes, and those of its users (like databases) which are inclined 
> to use 2^N-sized blocks, can often be accessed in a single IO burst across 
> all drives, and not in "one and one-quarter IOs" on average, which might 
> delay IOs to other stripes while some of the disks in a vdev are busy 
> processing leftovers of a previous request and others are waiting for their 
> peers.

I've never heard of this and it doesn't pass the sniff test. Can you cite a 
source? 

> In case of RAIDZ2 this recommendation leads to vdevs sized 6 (4+2), 10 (8+2) 
> or 18 (16+2) disks - the latter being mentioned in the original post.
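The arithmetic behind that recommendation can be sketched with a simple even-split model, showing how a default 128 KiB record divides across data disks (this deliberately ignores raidz parity placement and sector padding, so it is only the idealized version of the argument):

```python
# Idealized model: a record of `recordsize` bytes split into `sector`-byte
# columns round-robin across the data disks of a raidz vdev (assumption:
# no parity/padding overhead is modeled).
def columns_per_disk(recordsize, sector, data_disks):
    sectors = recordsize // sector
    full, rem = divmod(sectors, data_disks)
    return full, rem          # sectors per disk, and how many disks get one extra

RECORD = 128 * 1024           # default ZFS recordsize, 128 KiB
SECTOR = 4096                 # 4 KiB sectors (ashift=12)
for d in (4, 5, 8, 9, 16):
    full, rem = columns_per_disk(RECORD, SECTOR, d)
    note = "even split" if rem == 0 else f"{rem} disks carry one extra sector"
    print(f"{d:>2} data disks: {full} sectors each, {note}")
```

With 4, 8, or 16 data disks the 32 sectors divide evenly; with 5 or 9 the stripe is ragged, which is the "one and one-quarter IOs" effect the recommendation worries about.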

A similar theory was disproved back in 2006 or 2007. I'd be very surprised if
there was a reliable way to predict the actual use patterns in advance. Features
like compression and I/O coalescing improve performance, but make the old
"rules of thumb" even more obsolete.

So, protect your data, and if the performance doesn't meet your expectations,
then you can make adjustments.
 -- richard

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss