Re: [zfs-discuss] zfs and raid 51

2009-02-21 Thread Fajar A. Nugraha
2009/2/18 Ragnar Sundblad:
> For our file- and mail servers we have been using mirrored raid-5
> chassis, with disksuite and ufs.

> For some reason that I haven't figured out
> yet, zfs doesn't allow you to put "raids" upon each other, like
> mirrors/stripes/parity raids on mirrors/stripes/parity raids, in a
> single pool.

Is there any reason why you don't want to use striped mirrors (i.e.
stripes of mirrored vdevs, a.k.a. RAID 10) with hot spares? This
should provide a high level of availability while greatly reducing the
downtime needed for resilvering in the event of a disk failure. And if
you're REALLY paranoid you could go with a 3-way (or wider) mirror for
each vdev.
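For illustration, a pool along those lines might be created like this
(device names are placeholders; substitute your own):

```shell
# Three 2-way mirrors striped together (RAID 10), plus a hot spare
# that ZFS can resilver onto automatically when a device faults.
zpool create tank \
  mirror c1t0d0 c2t0d0 \
  mirror c1t1d0 c2t1d0 \
  mirror c1t2d0 c2t2d0 \
  spare  c1t3d0

# The paranoid variant: make each vdev a 3-way mirror instead, e.g.
#   mirror c1t0d0 c2t0d0 c3t0d0
```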

Regards,

Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs and raid 51

2009-02-20 Thread Peter Tribble
On Wed, Feb 18, 2009 at 2:47 PM, Ragnar Sundblad wrote:
>
> I would very much appreciate some advice on this;
>
> For our file- and mail servers we have been using mirrored raid-5
> chassis, with disksuite and ufs. This has served us well, and the
> el-cheapo raid-5 chassis have failed several times without any
> downtime for our services.
>
> We are now looking for a modern but simple-to-handle-and-not-too-expensive
> replacement for these servers. We still like the idea of having
> raid-51 or similar protection. For some reason that I haven't figured out
> yet, zfs doesn't allow you to put "raids" upon each other, like
> mirrors/stripes/parity raids on mirrors/stripes/parity raids, in a
> single pool. Maybe it is just not necessary, since you can make pools
> out of pools.
>
> Now we could choose from:
> 1) mirroring all disks with disksuite and building a raidz (or possibly
>   raidz2) on top of those
> 2) creating two raidz[2] pools, creating a volume each on those, and
>   creating a third pool which is a zfs mirror of the volumes.

Why not just use zfs to mirror your hardware raid-5 chassis? You get the
hardware to manage disk failures, but data redundancy is provided by
zfs, which is where you want it.
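Assuming each chassis presents itself as a single LUN (device names here
are hypothetical), the ZFS side of that setup is just a two-way mirror:

```shell
# Each side of the mirror is one hardware RAID-5 LUN exported by a
# chassis. ZFS adds checksummed, self-healing redundancy on top of
# the hardware RAID.
zpool create tank mirror c3t0d0 c4t0d0

# Verify the layout and health:
zpool status tank
```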

If you want random I/O performance, raidz isn't a good choice. For most
things, hardware raid ought to give you more IOPS. You mentioned mail
and file serving, which isn't an obvious match for raidz (which works better
for capacity and throughput).

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/


[zfs-discuss] zfs and raid 51

2009-02-18 Thread Ragnar Sundblad


I would very much appreciate some advice on this;

For our file- and mail servers we have been using mirrored raid-5
chassis, with disksuite and ufs. This has served us well, and the
el-cheapo raid-5 chassis have failed several times without any
downtime for our services.

We are now looking for a modern but simple-to-handle-and-not-too-expensive
replacement for these servers. We still like the idea of having
raid-51 or similar protection. For some reason that I haven't figured out
yet, zfs doesn't allow you to put "raids" upon each other, like
mirrors/stripes/parity raids on mirrors/stripes/parity raids, in a
single pool. Maybe it is just not necessary, since you can make pools
out of pools.

Now we could choose from:
1) mirroring all disks with disksuite and building a raidz (or possibly
   raidz2) on top of those
2) creating two raidz[2] pools, creating a volume each on those, and
   creating a third pool which is a zfs mirror of the volumes.

Alternative 2 is obviously nicer, since only one mechanism needs to be
supervised and handled instead of two, and disksuite's track record for
keeping track of disks that move, and other such handling, could have
been better. It would probably also give us a little more security,
since hot spares and data disks aren't paired.
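Alternative 2 could be sketched roughly like this (pool names, device
names, and the volume size are all placeholders):

```shell
# Two raidz2 pools, one per chassis (hypothetical device names).
zpool create tanka raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
zpool create tankb raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0

# One zvol on each pool, sized to taste.
zfs create -V 500g tanka/vol
zfs create -V 500g tankb/vol

# A third pool mirroring the two zvols.
zpool create tank mirror \
  /dev/zvol/dsk/tanka/vol \
  /dev/zvol/dsk/tankb/vol
```

Note that this is only an illustration of the layout being described;
layering a pool on zvols from local pools on the same host is generally
not a supported or recommended configuration.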

But which setup would give us better performance?

Are there other issues we should consider when choosing?

Thank you in advance for advice and hints!

Ragnar Sundblad

Systems Specialist
Department of Computer Science and Communication
Royal Institute of Technology, Stockholm, Sweden
