Hi,
I am using ZFS under Solaris 10u3.
After the failure of a 3510 RAID controller, I have several storage pools
with damaged objects. "zpool status -xv" prints a long list:
DATASET  OBJECT  RANGE
4c0c     5dd     lvl=0 blkid=2
28       b346    lvl=0 blkid=9
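For what it's worth, the hex DATASET and OBJECT ids in that list can usually be mapped back to a dataset name and a file with zdb. A rough sketch, assuming a placeholder pool named "tank" (the pool and dataset names below are not from your output):

```shell
# "zpool status -xv" prints DATASET/OBJECT ids in hex, while zdb takes
# decimal object numbers (assumption for this Solaris release), so
# convert OBJECT 5dd first:
obj=$(printf '%d' 0x5dd)
echo "$obj"    # 1501
# Then, with the placeholder pool "tank":
#   zdb -d tank                      # list datasets with their ids (find 4c0c)
#   zdb -dddd tank/<dataset> $obj    # dump object 1501, including its path
```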
Hi,
has anybody successfully tried the option sharenfs=on for a ZFS filesystem
with 1 users? On my system (sol10u2), it is not only awfully slow but
also does not work smoothly. I ran the following commands:
zpool create -R /test test c2t600C0FF00988193CD00CE701d0s0
zfs create
Thank you, setting SHARE_NOINUSE_CHECK does indeed speed things up substantially.
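For anyone else reading along: as far as I can tell, SHARE_NOINUSE_CHECK is picked up from the environment by the share machinery, so it can be set for just the import (this is an assumption about the mechanism, not documented behavior):

```shell
# Assumption: SHARE_NOINUSE_CHECK is read via the environment, so setting
# it only for this command skips the per-share in-use check during import.
SHARE_NOINUSE_CHECK=1 zpool import test
```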
However, there seems to be a bug in the NFS part of Solaris 10u2 when so many
filesystems are shared. When I run "showmount -e" after the pool has been
(successfully) imported, I get an error:
$ showmount -e
showmount