> what causes a dataset to get into this state?

While I'm not exactly sure, I do have the steps leading up to the point where I saw it, while trying to create a snapshot, i.e.:
10 % zfs snapshot z/b80nd/[EMAIL PROTECTED]
cannot create snapshot 'z/b80nd/[EMAIL PROTECTED]': dataset is busy
13 % mount -F zfs z/b80nd/var /z/b80nd/var
mount: Mount point /z/b80nd/var does not exist.
14 % mount -F zfs z/b80nd/var /mnt
15 % zfs snapshot -r z/[EMAIL PROTECTED]
16 % zfs list | grep 0107
root/0107nd                455M   107G  6.03G  legacy
root/[EMAIL PROTECTED]    50.5M      -  6.02G  -
z/[EMAIL PROTECTED]           0      -   243M  -
z/b80nd/[EMAIL PROTECTED]     0      -  1.18G  -
z/b80nd/[EMAIL PROTECTED]     0      -  2.25G  -
z/b80nd/[EMAIL PROTECTED]     0      -  56.3M  -

This is running 64-bit opensol-20080107 on Intel. To get there I was walking through this cookbook:

zfs snapshot root/[EMAIL PROTECTED]
zfs clone root/[EMAIL PROTECTED] root/0107nd
cat /etc/vfstab | sed s/^root/#root/ | sed s/^z/#z/ > /root/0107nd/etc/vfstab
echo "root/0107nd - / zfs - no -" >> /root/0107nd/etc/vfstab
cat /root/0107nd/etc/vfstab
zfs snapshot -r z/[EMAIL PROTECTED]
rsync -a --del --verbose /usr/.zfs/snapshot/dump/ /root/0107nd/usr
rsync -a --del --verbose /opt/.zfs/snapshot/dump/ /root/0107nd/opt
rsync -a --del --verbose /var/.zfs/snapshot/dump/ /root/0107nd/var
zfs set mountpoint=legacy root/0107nd
zpool set bootfs=root/0107nd root
reboot
mkdir -p /z/tmp/bfu ; cd /z/tmp/bfu
wget http://dlc.sun.com/osol/on/downloads/20080107/SUNWonbld.i386.tar.bz2
bzip2 -d -c SUNWonbld.i386.tar.bz2 | tar -xvf -
pkgadd -d onbld
wget http://dlc.sun.com/osol/on/downloads/20080107/on-bfu-nightly-osol-nd.i386.tar.bz2
bzip2 -d -c on-bfu-nightly-osol-nd.i386.tar.bz2 | tar -xvf -
setenv FASTFS /opt/onbld/bin/i386/fastfs
setenv BFULD /opt/onbld/bin/i386/bfuld
setenv GZIPBIN /usr/bin/gzip
/opt/onbld/bin/bfu /z/tmp/bfu/archives-nightly-osol-nd/i386
/opt/onbld/bin/acr
echo etc/zfs/zpool.cache >> /boot/solaris/filelist.ramdisk ; echo bug in bfu
reboot
rm -rf /bfu* /.make* /.bfu*
zfs snapshot root/[EMAIL PROTECTED]
mount -F zfs z/b80nd/var /mnt ; echo bug in zfs
zfs snapshot -r z/[EMAIL PROTECTED]
zfs clone z/[EMAIL PROTECTED] z/0107nd
zfs set compression=lzjb z/0107nd
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/usr
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/var
zfs clone z/b80nd/[EMAIL PROTECTED] z/0107nd/opt
rsync -a --del --verbose /.zfs/snapshot/dump/ /z/0107nd
zfs set mountpoint=legacy z/0107nd/usr
zfs set mountpoint=legacy z/0107nd/opt
zfs set mountpoint=legacy z/0107nd/var
echo "z/0107nd/usr - /usr zfs - yes -" >> /etc/vfstab
echo "z/0107nd/var - /var zfs - yes -" >> /etc/vfstab
echo "z/0107nd/opt - /opt zfs - yes -" >> /etc/vfstab
reboot

Heh heh, booting from a clone of a clone... Some space is wasted under root/`uname -v`/usr for a few libraries needed at boot, but having /usr, /var, and /opt on the compressed pool with two raidz vdevs boots to login in 45 seconds rather than 52 seconds on the single-vdev root pool.

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
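For what it's worth, the vfstab rewrite step in the cookbook (comment out the old root and z pool entries, then append the clone as the new root filesystem) can be tried safely on a scratch copy first. This is just a sketch of that one step: the /tmp paths and the two sample vfstab lines are illustrative, only the appended root/0107nd entry comes from the actual procedure.

```shell
#!/bin/sh
# Scratch copy standing in for /etc/vfstab (sample entries, not a real system's).
VFSTAB=/tmp/vfstab.demo
cat > "$VFSTAB" <<'EOF'
root - / zfs - no -
z/b80nd/usr - /usr zfs - yes -
EOF

# Comment out the old root and z pool entries, as the cookbook's two seds do...
sed -e 's/^root/#root/' -e 's/^z/#z/' "$VFSTAB" > "$VFSTAB.new"

# ...then append the clone as the new root filesystem.
echo "root/0107nd - / zfs - no -" >> "$VFSTAB.new"

cat "$VFSTAB.new"
```

In the real procedure the result is written into the clone's own /root/0107nd/etc/vfstab rather than over the live file, so a bad edit still leaves the original boot environment bootable.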