On Thu, 2010-06-03 at 08:50 -0700, Marty Scholes wrote:

> Maybe I have been unlucky too many times doing storage admin in the 90s, but 
> simple mirroring still scares me.  Even with a hot spare (you do have one, 
> right?) the rebuild window leaves the entire pool exposed to a single failure.
> 
> One of the nice things about zfs is that it allows, "to each his own."  My home 
> server's main pool is 22x 73GB disks in a Sun A5000 configured as RAIDZ3.  
> Even without a hot spare, it takes several failures to get the pool into 
> trouble.

Perhaps you have been unlucky.  Certainly, there is a window with N+1
redundancy where a single failure leaves the system exposed to a 2nd
fault.  This is a statistics game...   Mirrors whose halves are each
made up of multiple drives are of course substantially riskier than
mirrors of simple drive pairs.  I would strongly discourage such
multi-drive mirror halves unless the devices underneath the mirror are
somehow configured in a way that provides additional tolerance, such
as a mirror of raidz devices.  That said, such a configuration would
be a poor choice, since you'd take a big performance penalty.
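
To make the statistics concrete, here's a back-of-the-envelope sketch
in Python.  The failure rate and resilver window are invented round
numbers, purely for illustration; what matters is the ratio, not the
absolute figures:

  # Odds that a mirror dies during the resilver window that follows a
  # single drive failure.  Assumes independent failures, an (invented)
  # 3% annual failure rate per drive, and a 12-hour resilver.
  afr = 0.03
  resilver_hours = 12.0
  p = afr * resilver_hours / (24 * 365)   # P(a given drive dies in the window)

  # Plain 2-drive mirror: only the surviving partner can finish you off.
  p_pair = p

  # Mirror of two 4-drive stripes: one drive is already gone on side A,
  # so any of the 4 drives on side B failing in the window loses it all.
  p_wide = 1 - (1 - p) ** 4

  print("2-drive mirror pair:        %.1e" % p_pair)
  print("mirror of 4-drive stripes:  %.1e" % p_wide)   # roughly 4x worse

The wide mirror is roughly four times as likely to die before the
resilver completes.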

Of course, you can have more than a two-way mirror, at substantially
increased cost.

So you balance your needs.

RAIDZ2 and RAIDZ3 give N+2 and N+3 fault tolerance, and represent a
compromise weighted toward fault tolerance and capacity, at a
significant penalty to performance (and, as noted, to the ability to
grow capacity incrementally).

There certainly are applications where this is appropriate.  I doubt
most home users fall into that category.
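
For the 8-drive pool being discussed, here is a quick sketch of the
capacity side of that trade-off, assuming 1TB drives (raw figures,
before filesystem overhead; the last line is the layout I describe
below):

  drives, size_tb = 8, 1.0

  # Rule of thumb: random IOPS scale with the number of vdevs, so a
  # single raidz vdev behaves roughly like one drive for small random
  # I/O, while each mirror pair adds another drive's worth.
  layouts = {
      "raidz2, one 8-disk vdev":     ((drives - 2) * size_tb, "any 2 drives"),
      "raidz3, one 8-disk vdev":     ((drives - 3) * size_tb, "any 3 drives"),
      "4 x 2-way mirror":            ((drives / 2) * size_tb, "1 drive per pair"),
      "3 x 2-way mirror + 2 spares": (3 * size_tb,            "1 drive per pair"),
  }
  for name, (usable, tolerates) in layouts.items():
      print("%-29s %3.0f TB usable, tolerates %s" % (name, usable, tolerates))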

Given a relatively small number of spindles (the 8 that was quoted), I
prefer RAID 1+0 with hot spares.  If I can invest in 8 drives, then
with 1TB drives I can balance I/O across 3 mirrored pairs, get 3TB of
storage, keep 2 hot spares, and have N+1.x tolerance (N+1, plus the
ability to take up to two further faults as long as no two of them
land in the same mirrored pair).  I can also easily grow to larger
drives (for example the forthcoming 3TB drives) when need and cost
make that move appropriate.
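
To put a number on that ".x": counting simultaneous double failures
for the 3-pair layout above (placeholder drive names, and ignoring the
additional coverage the resilvering spares buy you):

  from itertools import combinations

  mirror_pairs = [("m1a", "m1b"), ("m2a", "m2b"), ("m3a", "m3b")]
  active = [d for pair in mirror_pairs for d in pair]   # 6 data drives

  double_failures = list(combinations(active, 2))
  fatal = [f for f in double_failures if f in mirror_pairs]

  # Only the 3 same-pair combinations out of 15 lose the pool; the
  # other 12 leave every mirror with a surviving side.
  print("%d fatal out of %d double failures" % (len(fatal), len(double_failures)))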

        -- Garrett

