If it's simply a question of making your home directory accessible
inside your local zone, is there any particular reason you're mounting
the filesystem using the legacy method?
Here is what the fs portion of my lx zone zonecfg looks like, for the
standard OpenSolaris rpool/export/home zfs filesystem:
fs:
dir: /home
special: /export/home
raw not specified
type: lofs
options: [nodevices]
Everything works fine.
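
For comparison, a sketch of the zonecfg session that would produce the
fs block above (the zone name "myzone" is a placeholder, not taken from
your setup):

```
# Sketch only: "myzone" is a placeholder zone name.
# Loopback-mount the global zone's /export/home at /home in the zone.
zonecfg -z myzone
zonecfg:myzone> add fs
zonecfg:myzone:fs> set dir=/home
zonecfg:myzone:fs> set special=/export/home
zonecfg:myzone:fs> set type=lofs
zonecfg:myzone:fs> add options nodevices
zonecfg:myzone:fs> end
zonecfg:myzone> commit
zonecfg:myzone> exit
```

With type=lofs the underlying zfs filesystem stays mounted in the
global zone, so zoneadm never has to run the zfs mount helper itself
at zone boot.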
On Tue, 2010-05-04 at 07:43 -0700, Harshal Marne wrote:
> Use case - In global zone, create a zfs filesystem and mount it using legacy
> method (mount -F zfs) and then export it to local zone through zonecfg.
>
> Issue - After this not able to reboot the zone.
>
> bash-3.00# mount | grep export
> /export/home on testpool4/home
> read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=401001d on Tue May 4
> 18:39:37 2010
> /export/home/hmarne on testpool4/home/hmarne
> read/write/setuid/devices/nonbmand/exec/xattr/atime/dev=401001f on Tue May 4
> 18:42:18 2010
>
> bash-3.00# zfs list | grep home
> testpool4/home 37K 460M 19K legacy
> testpool4/home/hmarne 18K 100M 18K legacy
>
> zonecfg:sv52-zone> add fs
> zonecfg:sv52-zone:fs> set dir=/export/home
> zonecfg:sv52-zone:fs> set special=testpool4/home/hmarne
> zonecfg:sv52-zone:fs> set type=zfs
> zonecfg:sv52-zone:fs> end
> zonecfg:sv52-zone> commit
> zonecfg:sv52-zone> exit
>
> bash-3.00# zoneadm -z sv52-zone reboot
> zoneadm: zone 'sv52-zone': "/usr/lib/fs/zfs/mount testpool4/home/hmarne
> /space/zones/sv52-zone/root/export/home" failed with exit code 1
_______________________________________________
brandz-discuss mailing list
[email protected]