On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote:
> On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
> > Using a stripe of mirrors (RAID0) you can get the benefits of multiple
> > spindle performance, easy expansion support (just add new mirrors to the
> > end of the raid0 stripe), and 100% data redundancy.   If you can afford
> > to pay double for your storage (the cost of mirroring), this is IMO the
> > best solution.
> 
> Referencing "RAID0" here in the context of ZFS is confusing, though.  Are
> you suggesting using underlying RAID hardware to create virtual volumes to
> then present to ZFS, or what?

RAID0 is basically the default configuration of a ZFS pool -- it
dynamically stripes writes across the underlying vdevs.  In this case
the vdevs should themselves be two-drive mirrors.

This of course has to be done in the ZFS layer, and ZFS doesn't call it
RAID0, any more than it calls a mirror RAID1, but effectively that's
what they are.
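
For example, a stripe of two two-drive mirrors looks like this (device
names here are just illustrative; use whatever your system reports):

  # zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

and you can grow it later by appending another mirror vdev to the
stripe:

  # zpool add tank mirror c0t4d0 c0t5d0

ZFS spreads new writes across all of the vdevs automatically.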

> 
> > Note that this solution is not quite as resilient against hardware
> > failure as raidz2 or raidz3.  While the RAID1+0 solution can tolerate
> > multiple drive failures, if both drives in a mirror fail, you lose
> > data.
> 
> In a RAIDZ solution, two or more drive failures lose your data.  In a
> mirrored solution, losing the WRONG two drives will still lose your data,
> but you have some chance of surviving losing a random two drives.  So I
> would describe the mirror solution as more resilient.
> 
> So going to RAIDZ2 or even RAIDZ3 would be better, I agree.

From a data-resiliency standpoint, yes, raidz2 or raidz3 offers better
protection, at a significant performance cost.

Given enough drives, one could imagine using raidz3 for the underlying
vdevs, with RAID0 striping on top to spread I/O across multiple
spindles.  I'm not sure how well this would perform, but I suspect it
would beat straight raidz2/raidz3, at a significant expense (you'd
need a lot of drives).
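
As a sketch of what I mean, with 16 drives (again, device names are
illustrative) you could stripe across two 8-drive raidz3 vdevs:

  # zpool create tank \
      raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
      raidz3 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0

Each vdev tolerates three drive failures, and ZFS stripes I/O across
both vdevs.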

> 
> In an 8-bay chassis, there are other concerns, too.  Do I keep space open
> for a hot spare?  There's no real point in a hot spare if you have only
> one vdev; that is, 8-drive RAIDZ3 is clearly better than 7-drive RAIDZ2
> plus a hot spare.  And putting everything into one vdev means that for any
> upgrade I have to replace all 8 drives at once, a financial problem for a
> home server.

This is one of the reasons I don't advocate using raidz (any version)
for home use, unless you can't afford the cost in space represented by
mirroring and a hot spare or two.  (The other reason ... for my use at
least... is the performance cost.  I want to use my array to host
compilation workspaces, and for that I would prefer to get the most
performance out of my solution.  I suppose I could add some SSDs... but
I still think multiple spindles are a good option when you can do it.)
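
For the record, if I did go that route, SSDs attach to an existing pool
as a separate intent log (to speed synchronous writes) or as an L2ARC
read cache; device names here are illustrative:

  # zpool add tank log c3t0d0
  # zpool add tank cache c3t1d0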

In an 8-drive chassis, without any SSDs involved, I'd configure 6 of the
drives as a stripe of three two-drive mirrors, and I'd leave the
remaining two bays as hot spares.  Btw, using the hot spares this way
means you can potentially use those bays later to upgrade to larger
drives, without offlining anything and without taking too much of a
performance penalty when you do so.
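
Concretely, that layout would be created with something like this (the
bay-to-device mapping is illustrative):

  # zpool create tank mirror c0t0d0 c0t1d0 \
                      mirror c0t2d0 c0t3d0 \
                      mirror c0t4d0 c0t5d0 \
                      spare c0t6d0 c0t7d0

The in-place upgrade then works by releasing a spare, swapping a larger
drive into that bay, and resilvering one side of a mirror onto it:

  # zpool remove tank c0t6d0
  (physically swap the larger drive into that bay)
  # zpool replace tank c0t0d0 c0t6d0

Once both halves of a mirror are on larger drives, that vdev's extra
capacity becomes usable (with the autoexpand property set, on recent
builds).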

> 
> > If you're clever, you'll also try to make sure each side of the mirror
> > is on a different controller, and if you have enough controllers
> > available, you'll also try to balance the controllers across stripes.
> 
> I did manage to split the mirrors across controllers (I have 6 SATA on
> the motherboard and I added an 8-port SAS card with SAS-SATA cabling).
> 
> > One way to help with that is to leave a drive or two available as a hot
> > spare.
> >
> > Btw, the above recommendation mirrors what Jeff Bonwick himself (the
> > creator of ZFS) has advised on his blog.
> 
> I believe that article directly influenced my choice, in fact.

Okay, good. :-)

        - Garrett


