See inline near the end...

Tomas Ögren wrote:
On 14 May, 2007 - Dale Sears sent me these 0,9K bytes:

I was wondering if this was a good setup for a 3320 single-bus,
single-host attached JBOD.  There are 12 146G disks in this array:

I used:

zpool create pool1 \
raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t8d0 c2t9d0 c2t10d0 \
spare c2t11d0 c2t12d0

(or something very similar)

This yields roughly a 1TB file system with dual parity and two hot spares.
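
(Quick check: a 10-disk raidz2 leaves 8 data disks, so 8 x 146G =
1168G, about 1.1TB of usable space before ZFS overhead and vendor-GB
rounding.)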

So at first any two disks can fail at the same time; after the spares
have resilvered in, two more disks can fail, and so on, as long as you
keep replacing the failed disks..
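
For example, if c2t3d0 dies and spare c2t11d0 kicks in, the usual
sequence after swapping the physical disk would be something like:

 zpool replace pool1 c2t3d0

and once the resilver completes, c2t11d0 returns to the spare list
automatically; or "zpool detach pool1 c2t3d0" if you'd rather keep the
spare in place permanently.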

The customer is happy, but I wonder if there are any other suggestions
for making this array faster, more reliable, or just "better" in your
opinion. I know that "better" has different meanings under different
application conditions, so I'm just looking for folks to recommend a
setup and perhaps explain why they would do it that way.

That raid set will give you roughly the same random IO performance as a
single disk, because raidz stripes every block across all the disks in
the group, so each random read has to touch most of them. Sequential IO
will be better than a single disk.
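
One way to see this in practice is to watch per-vdev statistics while
the pool is busy, for example:

 zpool iostat -v pool1 5

which prints operations per second and bandwidth for the pool and for
each vdev every 5 seconds.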

For instance, splitting it into two raidz2 groups without spares can
survive any two disk failures within each group (so, depending on which
disks fail, 2 to 4 disks can fail without data loss). Random IO
performance will be roughly twice that of the single raidz2 group,
i.e. twice a single disk, since the pool stripes across two vdevs.

What would that command look like? Is this what you're saying?:

 zpool create pool1 \
 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0  c2t4d0  c2t5d0  \
 raidz2 c2t6d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0 c2t12d0
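
If I'm reading this right, "zpool status pool1" would then show two
six-disk raidz2 vdevs, and usable capacity stays about the same (2 x 4
= 8 data disks), just with no hot spares left over.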

Thanks!

Dale

/Tomas
