Hi Reshekel,

You might review these resources for information on using ZFS without
having to hack code:

ZFS Administration Guide:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/docs

ZFS Troubleshooting Guide:
http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

I will add a section on migrating from FreeBSD because this problem
comes up often enough. You might search the list archive for this
problem to see how others have resolved the partition issues.

Moving ZFS storage pools from a FreeBSD system to a Solaris system is
difficult because FreeBSD appears to build pools on the disk's p0
partition, while in Solaris releases, ZFS storage pools are created
either on whole disks (the d0 identifier) or, for root pools, on a disk
slice (the s0 identifier). The slice requirement for root pools is an
existing boot limitation.

For example, see the difference in the two pools:

# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

  pool: dozer
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        dozer       ONLINE       0     0     0
          c2t5d0    ONLINE       0     0     0
          c2t6d0    ONLINE       0     0     0

errors: No known data errors
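
As a general approach, the cleanest way to move a pool between systems
is to export it on the old system and import it on the new one. A
minimal sketch, assuming the pool is named tank and its devices are
visible under /dev/dsk on the Solaris side:

(on the FreeBSD system)
# zpool export tank

(on the Solaris system)
# zpool import            # lists pools available for import
# zpool import tank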


If you want to boot from a ZFS storage pool, then you must create the
pool with disk slices. This is also why you see the message about EFI
labels: pools that are created with whole disks use an EFI label, and
Solaris can't boot from an EFI-labeled disk.
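
If a disk that you want to use in a bootable pool already carries an
EFI label, you can inspect it and relabel it with an SMI (VTOC) label.
This is a rough sketch only, assuming the disk is c1t2d0 and that your
release's format(1M) offers the SMI choice under the label command in
expert mode; relabeling destroys the existing partitioning:

# prtvtoc /dev/rdsk/c1t2d0s2      # inspect the current label and slices
# format -e c1t2d0                # label > select SMI, then set up slice 0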

You can add a cache device to a pool that is used for booting, but you
must create a disk slice first and then add the cache device like this:

# zpool add rpool cache c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
        cache
          c1t2d0s0    ONLINE       0     0     0
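
If the disk you want to use as a cache device doesn't already have a
suitable slice, one common way to set it up is to copy the label from
one of the existing root pool disks. A sketch only, assuming both disks
are the same size and the source disk carries an SMI label (s2 covers
the whole disk on an SMI-labeled disk):

# prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t2d0s2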


I suggest creating two pools, one small pool for booting and one larger
pool for data storage.
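
For example, using the pool names from above and placeholder device
names (a mirrored slice for the boot pool, whole disks for the data
pool), the commands would look something like this:

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# zpool create dozer raidz2 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0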

Thanks,

Cindy
On 05/25/10 02:58, Reshekel Shedwitz wrote:
Greetings -

I am migrating a pool from FreeBSD 8.0 to OpenSolaris (Nexenta 3.0 RC1). I am 
in what seems to be a weird situation regarding this pool. Maybe someone can 
help.

I used to boot off of this pool in FreeBSD, so the bootfs property got set:

root@nexenta:~# zpool get bootfs tank
NAME  PROPERTY  VALUE   SOURCE
tank  bootfs    tank    local

The presence of this property seems to be causing me all sorts of headaches. I
cannot replace a disk or add an L2ARC, because this flag is how the ZFS code
(libzfs_pool.c: zpool_vdev_attach and zpool_label_disk) determines whether a
pool is allegedly a root pool.

root@nexenta:~# zpool add tank cache c1d0
cannot label 'c1d0': EFI labeled devices are not supported on root pools.

To replace disks, I was able to hack up libzfs_pool.c and build a custom
version of the zpool command. That works, but it is a poor solution going
forward because I have to be sure to use my customized version every time I
replace a bad disk.

Ultimately, I would like to just set the bootfs property back to default, but this seems to be beyond my ability. There are some checks in libzfs_pool.c that I can bypass in order to set the value back to its default of "-", but ultimately I am stopped because there is code in zfs_ioctl.c, which I believe is kernel code, that checks to see if the bootfs value supplied is actually an existing dataset.
I'd compile my own kernel but hey, this is only my first day using OpenSolaris 
- it was a big enough feat just learning how to compile stuff in the ON source 
tree :D

What should I do here? Is there some obvious solution I'm missing? I'd like to
be able to get my pool back to a state where I can use the *stock* zpool
command to maintain it. I don't boot off of this pool anymore, so if I could
somehow set the bootfs property back to its default, that would solve my
problem.

BTW, for reference, here is the output of zpool status (after I hacked up
zpool to let me add an L2ARC):

  pool: tank
 state: ONLINE
status: The pool is formatted using an older on-disk format.  The pool can
        still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'.  Once this is done, the
        pool will no longer be accessible on older software versions.
 scan: resilvered 351G in 2h44m with 0 errors on Tue May 25 23:33:38 2010
config:

        NAME          STATE     READ WRITE CKSUM
        tank          ONLINE       0     0     0
          raidz2-0    ONLINE       0     0     0
            c2t5d0p0  ONLINE       0     0     0
            c2t4d0p0  ONLINE       0     0     0
            c2t3d0p0  ONLINE       0     0     0
            c2t2d0p0  ONLINE       0     0     0
            c2t1d0p0  ONLINE       0     0     0
        cache
          c1d0        ONLINE       0     0     0

errors: No known data errors


Thanks,
Darren