I beg to differ.

# cat /etc/release
                      Solaris 10 10/08 s10s_u6wos_07b SPARC
           Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                        Use is subject to license terms.
                            Assembled 27 October 2008
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
beA                        yes      no     no        yes    -      old ufs boot
beB                        yes      no     no        yes    -      old ufs boot
beC                        yes      yes    yes       no     -      new zfs root

# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point </.alt.tmp.b-QY.mnt/home> device <pool00/zones/global/home>
ERROR: failed to mount file system <pool00/zones/global/home> on </.alt.tmp.b-QY.mnt/home>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <beA>.
Unable to delete boot environment.
# ludelete beB
Determining the devices to be marked free.
Updating boot environment configuration database.
Updating compare databases on boot environment <beA>.
INFORMATION: Skipping update of boot environment <beA>: not configured properly.
Updating boot environment description database on all BEs.
Updating all boot environment configuration databases.
Boot environment <beB> deleted.
# ludelete beA
ERROR: cannot open 'pool00/zones/global/home': dataset does not exist
ERROR: cannot mount mount point </.alt.tmp.b-QY.mnt/home> device <pool00/zones/global/home>
ERROR: failed to mount file system <pool00/zones/global/home> on </.alt.tmp.b-QY.mnt/home>
ERROR: unmounting partially mounted boot environment file systems
ERROR: cannot mount boot environment by icf file </etc/lu/ICF.1>
ERROR: Cannot mount BE <beA>.
Unable to delete boot environment.

On this dev/test lab machine I've been bouncing between two UFS BEs (beA and beB) located in different slices on c1t0d0. My other three disks were in one zpool (pool00).

Big mistake... For ZFS boot I need space for a separate ZFS root pool. So whilst booted under beB I backed up my pool00 data, destroyed pool00, and re-created pool00 (a little differently, hence the error, it would seem), but held out one of the drives and used it to create an rpool01 root pool. Then I ran:
# lucreate -n beC -p rpool01
# luactivate beC
# init 6

and rebooted into the ZFS boot/root. Voila! But now I cannot delete beA. Does anybody have any ideas on how I might ludelete beA?
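
My guess, going by the errors above, is that beA's ICF file (/etc/lu/ICF.1) still records the pool00/zones/global/home dataset from before the rebuild, so ludelete trips over it while trying to mount the BE. Something like this should at least show whether that stale entry is there (a diagnostic idea only, not a verified fix):

# grep pool00 /etc/lu/ICF.1

If that stale line is the only thing tying beA to the old pool layout, the only workaround I can think of is hand-editing it out of the ICF file and retrying ludelete, though I don't know how safe that is.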
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
