Hi all,
I had such high hopes that this would solve my problem. But it doesn't,
and I think it's telling me that my rpool is just too small to continue.
I've managed to promote opensolaris-5/opt, and destroyed all
the ...-4/[EMAIL PROTECTED] snapshots, but:
[EMAIL PROTECTED]:~# zfs list
NAME
Hi,
a little bit off-topic:
you can sometimes get into a situation where your ABE is a clone of
some formerly used BE. To be able to destroy the BE from which
your currently booted BE (the ABE) was snapshotted/cloned, use `zfs
promote ABE`.
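For example, assuming the booted BE is opensolaris-2 and it was cloned
from a snapshot of opensolaris-1 (names here are hypothetical; check
`beadm list` for yours):

# zfs promote rpool/ROOT/opensolaris-2
# zfs destroy -r rpool/ROOT/opensolaris-1

The promote hands the origin snapshot over to opensolaris-2, so the old
BE no longer has dependent clones and can be destroyed.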
/j.
After image-updates from SNV_89 -> SNV_90 -> SNV_91, quite a few ZFS
snapshots had been automatically created, taking up space on my HD.
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool
Well, I lost more than 1G of HD space after 2 system updates!
I tried to destroy one of them... but:
cannot destroy 'rpool/ROOT/[EMAIL PROTECTED]:-:2008-06-24-14:42:33': snapshot
has dependent clones
use '-R' to destroy the following datasets:
rpool/ROOT/opensolaris-1/opt
rpool/ROOT/opensolaris-1
Just want to know if it'd be safe to destroy the dependent datasets.
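I.e., would the suggested command be safe to run? Something like this,
with the redacted snapshot name abbreviated as <BE>@<snap>:

# zfs destroy -R 'rpool/ROOT/<BE>@<snap>'

which, per the message above, would also take out
rpool/ROOT/opensolaris-1/opt and rpool/ROOT/opensolaris-1.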
# beadm list
will tell you which BE is active. If the current one works properly,
it's safe to destroy the other one with `beadm destroy`.
After removing the BE, you can use `zfs destroy -r` to remove the
snapshots which you don't want.
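For example, assuming the BE to remove is opensolaris-1 and the
snapshot is the one from your error message (substitute whatever
`beadm list` and `zfs list -t snapshot` actually show):

# beadm list
# beadm destroy opensolaris-1
# zfs destroy -r 'rpool/ROOT/<BE>@<snap>'

`zfs destroy -r` on a snapshot also removes same-named snapshots in any
descendant datasets (e.g. the /opt child).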
Thanks,
-Aubrey
On Wed, Jun 25, 2008 at 10:17 AM, wilson [EMAIL PROTECTED] wrote:
Thanks a lot, Aubrey.
Just followed your advice and I managed to reclaim my once-lost 1G of HD
space by destroying opensolaris-1's snapshot. As a bonus, my GRUB menu
now looks neater and cleaner as well :-)
-- Just to list out the steps I've taken:
# beadm list
BE          Active  Active on