About the best I can see:

zpool create dirtypool raidz 250a 250b 320a raidz 320b 400a 400b \
    raidz 500a 500b 750a

And you have to group them in that order: ZFS sizes each raidz vdev to its
smallest device. This gets you about 2140GB of usable space (500 + 640 +
1000). Your desired method gets you 2880GB (720 * 4), only ~740GB more, and
is WAY harder to set up and maintain, especially if you get into the SDS
configuration.

I, for one, welcome our convoluted configuration overlords. I'd also like to
see what the zpool looks like if it works. This is, obviously, untested.
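
If it comes up, I'd expect zpool status to show something roughly like this
(again, untested, so the exact layout may differ):

  pool: dirtypool
 state: ONLINE
 scrub: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        dirtypool    ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            250a     ONLINE       0     0     0
            250b     ONLINE       0     0     0
            320a     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            320b     ONLINE       0     0     0
            400a     ONLINE       0     0     0
            400b     ONLINE       0     0     0
          raidz1     ONLINE       0     0     0
            500a     ONLINE       0     0     0
            500b     ONLINE       0     0     0
            750a     ONLINE       0     0     0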

chris


On Fri, Aug 22, 2008 at 11:03 AM, Nils Goroll <[EMAIL PROTECTED]> wrote:

> Hi,
>
> John wrote:
> > I'm setting up a ZFS fileserver using a bunch of spare drives. I'd like
> > some redundancy and to maximize disk usage, so my plan was to use raid-z.
> > The problem is that the drives are considerably mismatched, and I haven't
> > found documentation (though I don't see why it shouldn't be possible) on
> > striping smaller drives together to match bigger ones. The drives are:
> > 1x750, 2x500, 2x400, 2x320, 2x250. Is it possible to accomplish the
> > following with those drives:
> >
> > raid-z
> >    750
> >    500+250=750
> >    500+250=750
> >    400+320=720
> >    400+320=720
>
>
> Though I've never used this in production, it seems possible to layer ZFS
> on good old SDS (aka SVM, disksuite).
>
> At least I managed to create a trivial pool on
> what-10-mins-ago-was-my-swap-slice:
>
> haggis:/var/tmp# metadb -f -a -c 3 /dev/dsk/c5t0d0s7
> haggis:/var/tmp# metainit d10 1 1 /dev/dsk/c5t0d0s1
> d10: Concat/Stripe is setup
> haggis:/var/tmp# zpool create test /dev/md/dsk/d10
> haggis:/var/tmp# zpool status test
>  pool: test
>  state: ONLINE
>  scrub: none requested
> config:
>
>        NAME               STATE     READ WRITE CKSUM
>        test               ONLINE       0     0     0
>          /dev/md/dsk/d10  ONLINE       0     0     0
>
> So it looks like you could do the following:
>
> * Put a small slice (10-20MB should suffice; by convention it's slice 7 on
> the first cylinders) on each of your disks and make them the metadb, if you
> are not using SDS already (concrete example below):
>
>  metadb -f -a -c 3 <all your slices_7>
>
>  Make slice 0 the remainder of each disk.
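>
> For example, with made-up device names for your nine disks (yours will
> differ):
>
>  metadb -f -a -c 3 /dev/dsk/c1t0d0s7 /dev/dsk/c1t1d0s7 /dev/dsk/c1t2d0s7 \
>      /dev/dsk/c1t3d0s7 /dev/dsk/c1t4d0s7 /dev/dsk/c1t5d0s7 \
>      /dev/dsk/c1t6d0s7 /dev/dsk/c2t0d0s7 /dev/dsk/c2t1d0s7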
>
> * For your 500/250G drives, create a concat for each pair (a concat, not a
> stripe, since the components differ in size). For clarity, I'd recommend
> including the 750G disk as well (syntax from memory, apologies if I'm wrong
> with details):
>
> metainit d11 1 1 <750G disk>s0
> metainit d12 2 1 <500G disk>s0 1 <250G disk>s0
> metainit d13 2 1 <500G disk>s0 1 <250G disk>s0
> metainit d14 2 1 <400G disk>s0 1 <320G disk>s0
> metainit d15 2 1 <400G disk>s0 1 <320G disk>s0
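>
> Before building the pool, you can sanity-check the metadevices with
> metastat; each should show up as a Concat/Stripe of roughly the expected
> size (750G, 750G, 750G, 720G, 720G):
>
>  metastat d11 d12 d13 d14 d15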
>
> * create a raidz pool on your metadevices
>
> zpool create <name> raidz /dev/md/dsk/d11 /dev/md/dsk/d12 \
>     /dev/md/dsk/d13 /dev/md/dsk/d14 /dev/md/dsk/d15
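>
> If it works, zpool list should show a pool of roughly 5 x 720G (zpool list
> counts parity space; usable capacity will be closer to the 2880G figure),
> and zpool status should list the five metadevices under a single raidz
> vdev:
>
>  zpool list <name>
>  zpool status <name>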
>
> Again: I have never tried this, so please don't blame me if this doesn't
> work.
>
> Nils



-- 
chris -at- microcozm -dot- net
=== Si Hoc Legere Scis Nimium Eruditionis Habes
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
