On Fri, Mar 26, 2010 at 4:29 PM, Slack-Moehrle <mailingli...@mailnewsrss.com
> wrote:

> OK, so I made progress today. FreeBSD sees all of my drives, and ZFS is
> acting correctly.
>
> Now for my confusion.
>
> RAIDz3
>
> # zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 da6 da7
> Gives: 'raidz3': no such GEOM provider
>
>
FreeBSD 7.3 includes ZFSv13.
FreeBSD 8.0 includes ZFSv13.
FreeBSD 8-STABLE currently includes ZFSv14, with work ongoing to get ZFSv15
in.
FreeBSD 8.1 (due out this summer) will, hopefully, include ZFSv15.

raidz3 support is not available in any of the above versions of ZFS; hence
the error message.

You are limited to mirror, raidz1, and raidz2 vdevs in FreeBSD (for data
storage; there are also the log, cache, and spare vdev types available).
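
For example (da8 and da9 below are just placeholder device names), log and
cache devices can each be added to an existing pool with a single command:

  zpool add datastore log da8
  zpool add datastore cache da9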

Hopefully, ZFSv20-something will be included when FreeBSD 9.0 is released.
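
If you want to verify which version a particular system supports, "zpool
upgrade -v" lists the pool version the code runs, along with the features
each version adds:

  zpool upgrade -v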

> I am looking at the best practices guide and I am confused about adding a
> hot spare. Won't that happen with the above command, or do I really just
> zpool create datastore raidz3 da0 da1 da2 da3 da4 da5 and then issue the
> hot spare command twice for da6 and da7?
>

All in one command:
  zpool create datastore raidz2 da0 da1 da2 da3 da4 da5 da6 spare da7

Or, as two separate commands:
  zpool create datastore raidz2 da0 da1 da2 da3 da4 da5 da6
  zpool add datastore spare da7
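
Either way, "zpool status datastore" will show da7 under its own "spares"
heading, so you can confirm the spare was picked up:

  zpool status datastore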

One thing you may want to do is label your disks using glabel(8).  That
way, if you re-arrange the drives, swap controllers, boot with a missing
drive, or add new drives, everything will continue to work correctly.
While ZFS does its own labelling of the drives, I've found it to be quite
fragile, in the sense that moving disks around requires a "zpool export"
and "zpool import", usually with a -f on the import.  (At least on
FreeBSD.)  In comparison, glabel eliminates all those issues: it works
below the ZFS layer, presenting ZFS with an always-consistent view of the
hardware.

  glabel label disk01 da0
  glabel label disk02 da1
  glabel label disk03 da2
  glabel label disk04 da3
  glabel label disk05 da4
  glabel label disk06 da5
  glabel label disk07 da6
  glabel label disk08 da7
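
To double-check the labels afterwards, "glabel status" lists each label
alongside the device node it currently maps to:

  glabel status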

  zpool create datastore raidz2 label/disk01 label/disk02 label/disk03 \
    label/disk04 label/disk05 label/disk06 label/disk07
  zpool add datastore spare label/disk08

Thus, no matter what the underlying device node is (da0 could become ada6
tomorrow if you switch to an AHCI controller, for example), the kernel will
map the drives correctly, and ZFS only has to worry about using
"label/disk01".

-- 
Freddie Cash
fjwc...@gmail.com
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
