On Mon, Nov 23, 2009 at 10:31 AM, Alan Johnson <a...@datdec.com> wrote:

> On Mon, Nov 23, 2009 at 9:25 AM, Tom Buskey <t...@buskey.name> wrote:
>
>> I think the RAID 5 write hole refers to the slowdown on writes with RAID
>> 5.  In order to lose data, a 2nd drive needs to fail (as opposed to only 1
>> drive on a RAID 0 or JBOD).
>>
>
> According to
> http://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5_performance:
> "In the event of a system failure while there are active writes, the parity
> of a stripe may become inconsistent with the data. If this is not detected
> and repaired before a disk or block fails, data loss may ensue as incorrect
> parity will be used to reconstruct the missing block in that stripe. This
> potential vulnerability is sometimes known as the *write hole*.
> Battery-backed cache and similar techniques are commonly used to reduce the
> window of opportunity for this to occur. The same issue occurs for RAID-6."
>

Thanks for clarifying that for me.
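
To make the failure mode concrete, here's a toy sketch of a two-data-disk
stripe using bash XOR arithmetic (made-up numbers, not real block contents):

# a stripe: two data blocks and their parity
d1=5; d2=9
p=$(( d1 ^ d2 ))      # parity = 12

d1=7                  # a new write lands on the data disk, then the box
                      # crashes before the parity disk is updated

# later the d2 disk dies; rebuild it from d1 and the (stale) parity
echo $(( d1 ^ p ))    # prints 11, not the 9 that was really on d2

That's the hole: nothing looks wrong until a rebuild quietly hands back bad
data.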


>
>
>> I think most software RAID only does mirrors for boot.  RAID 1, not 5.
>>
>
> I have an Ubuntu 9.10 box that boots from a RAID6 with GRUB2.  I expect that is
> very new, eh?
>

So your Ubuntu does software RAID6 on the boot disks with / and /boot?

Or do you have a hardware RAID card doing RAID6?


>
>> RAID5 will have faster read performance than RAID 1 or a single disk.  It
>> might be faster for reads than RAID-0 (striping) also.
>>
>
> If the disks are a severe bottleneck, RAID5 can match RAID0 read speeds in
> theory.  However, I've never seen this in practice.  RAID5 cannot be faster
> than RAID0 unless something outside those definitions is at play.
>
>
I stand corrected.  RAID 5 always has to do parity and RAID 0 does not.
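
Rough numbers, just to put the difference in perspective (made-up per-disk
speed, ignoring caches and controllers):

# 4 disks at ~100 MB/s each, big sequential reads
disks=4; per_disk=100
echo "RAID0 ~ $(( disks * per_disk )) MB/s"         # every spindle carries data
echo "RAID5 ~ $(( (disks - 1) * per_disk )) MB/s"   # one disk's worth of each stripe is parity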



> ZFS's RAIDZ ...RAIDZ2 ... RAIDZ3 which has 3 parity disks.
>>
>
> I know what you mean, but I'm just nit-picking here for clarification so as
> not to confuse the uninitiated: parity disks are a thing of RAID3.  RAID5/6/Z
> all use distributed parity, so no one disk is dedicated to parity.  This
> is a big part of what makes rebuilds so slow on RAID5/6.  The process is not
> as linear as a mirror or a RAID3 with a dedicated parity drive.  How does
> RAIDZ do on a rebuild?
>

RAIDZ is a modified RAID5: 1 set of distributed parity.  Lose 1 disk w/o
failing.
RAIDZ2 is a modified RAID6: 2 sets of distributed parity.  Lose 2 disks w/o
failing.
RAIDZ3 is, ummm, more of the same: 3 sets of distributed parity.  Lose 3 disks
w/o failing.
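
The only difference when you build them is the vdev keyword (pick one; device
names made up):

zpool create tank raidz  c0t0d0 c0t1d0 c0t2d0                 # single parity
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0          # double parity
zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0   # triple parity (newer pools only)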

I once replaced my 120 GB drives with 500 GB drives to increase the pool.
It didn't seem slow to me, but...  You'll have to google :-/ to get real
numbers.  I suspect the speed is similar to RAID 5/6 rebuilds.
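
FWIW that drive swap was just one zpool replace per disk, with zpool status
showing the resilver as it goes (device names made up):

zpool replace mypool c0t1d0 c1t1d0   # swap the old 120 GB disk for a 500 GB one
zpool status mypool                  # watch the resilver; repeat for each disk
# once every disk in the set has been replaced, the pool can grow into the new space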


>> ... ZFS ... ZFS ... ZFS fanboy and I'm very disappointed it won't be
>> adopted in Linux due to its license.  It's in FreeBSD (and FreeNAS). btrfs
>> looks like it has some nice improvements so I'm hoping to see it succeed
>> alongside ZFS.
>>
>
> Weeeee!  From all the theory I've read and watched, ZFS is the end game.
> I'm still trying to figure out how to work it into cloud storage.  Does
> FreeNAS somehow enable ZFS over iSCSI?  I can't wrap my mind around that,
> but the benefits of ZFS on the minimal overhead of iSCSI (vs. NFS) would be
> ideal, if impossible.
>

ZFS will work on top of iSCSI SAN drives.  Or you can share out a partition
from a ZFS pool as an iSCSI target.


zpool create mypool raidz c0t0d0 c0t1d0 c0t2d0   # create a pool with one RAIDZ of 3 disks

# Create a home filesystem with a 10 GB quota, share it over NFS, and
# compress the data as it comes in
zfs create mypool/home
zfs set quota=10G mypool/home
zfs set compression=on mypool/home
zfs set sharenfs=on mypool/home

# Another one, but share it over iSCSI
zfs create mypool/iSC
zfs set quota=10G mypool/iSC
zfs set compression=on mypool/iSC
zfs set shareiscsi=on mypool/iSC
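
Going the other direction, once the iSCSI initiator on the ZFS box has logged
in to the SAN and the LUNs show up as ordinary cXtYdZ devices, they're just
disks as far as zpool is concerned (made-up device names again):

# mirror two SAN LUNs and carve a filesystem out of the pool
zpool create sanpool mirror c2t0d0 c2t1d0
zfs create sanpool/db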

They really got the CLI stuff right!


>
> I'm tempted to try FUSE+ZFS for our database servers, or even just go right
> to FreeBSD, but


I wouldn't touch *anything* FUSE for production work.  Well, I've used
NTFS-3G because I had to.


> that would be a hard sell in my company and I don't even want to try it
> without some lab work to back it up, which is not in the cards in the near
> future.
>


Get VirtualBox and play with FreeBSD/FreeNAS/Solaris/OpenSolaris inside it.
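
Something like this gets you a sandbox (rough sketch; exact VBoxManage flags
vary a bit between VirtualBox versions):

VBoxManage createvm --name opensolaris --register
VBoxManage createhd --filename disk1.vdi --size 8192   # repeat for a few more small disks
# attach the install ISO and the disks in the GUI, install the OS, then build
# a raidz out of the little virtual disks and practice yanking them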