Hi,

If this is not a ZFS question, please direct me to the correct place
for it.

I have a server with a Solaris 10 u6 ZFS root file system, running
both Solaris 9 and Solaris 10 zones.

What is the best way to configure the root file system of a Solaris 9
container with respect to ZFS file system location and options?

The Live Upgrade guide for ZFS file systems and zones says to put the
zone root in a sub-file system of the boot environment (BE) and set
canmount=noauto.
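
In other words, something like this (the pool, BE and zone names here
are just placeholders for my real ones):

    # zfs create rpool/ROOT/s10u6_be/zones
    # zfs create rpool/ROOT/s10u6_be/zones/s9zone
    # zfs set mountpoint=/zones/s9zone rpool/ROOT/s10u6_be/zones/s9zone
    # zfs set canmount=noauto rpool/ROOT/s10u6_be/zones/s9zone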

The svc:/system/filesystem/minimal:default service
(/lib/svc/method/fs-minimal) mounts all the file systems in the
current BE, and life is good :)

The problem comes when you clone the BE with lucreate to patch the
global zone and the (Solaris 10) non-global zones.

lucreate doesn't clone the root file system of the Solaris 9 zone
(because it can't patch it from the global zone?).

When the new BE is activated, the root of the Solaris 9 zone is not
part of the new BE and is not mounted, because it is set to
canmount=noauto.
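
To illustrate, this is the sort of check I do after activating the new
BE (names made up again); the Solaris 9 zone root still hangs off the
old BE's dataset tree and shows up as mounted=no:

    # zfs list -r -o name,canmount,mountpoint,mounted rpool/ROOT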

So I tried moving the root file system out of the BE and setting it to canmount=on.

This causes svc:/system/filesystem/local:default to fail at boot and
go into maintenance.
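
(To see what is going on, I just look at the service and its log:

    # svcs -xv svc:/system/filesystem/local:default
    # tail /var/svc/log/system-filesystem-local:default.log
)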

Currently I have the file system outside of the BE with canmount=noauto.

I'm thinking of writing a small SMF service to mount all of the file
systems under the location where I keep the roots of the non-native
zones in the global zone.
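
Something along these lines (just an untested sketch -- the parent
dataset rpool/zoneroots and the service itself are made up; it would
also need a manifest that runs it after svc:/system/filesystem/local
and before svc:/system/zones):

    #!/sbin/sh
    #
    # Method script sketch: mount every child dataset of the container
    # dataset where the non-native zone roots live, since they sit
    # outside the BE and have canmount=noauto.
    #
    . /lib/svc/share/smf_include.sh

    PARENT=rpool/zoneroots

    case "$1" in
    start)
            for ds in `zfs list -H -o name -r $PARENT`; do
                    # Skip the container itself, only mount the children.
                    [ "$ds" = "$PARENT" ] && continue
                    if [ "`zfs get -H -o value mounted $ds`" != "yes" ]; then
                            zfs mount $ds || exit $SMF_EXIT_ERR_FATAL
                    fi
            done
            ;;
    stop)
            # Nothing to do; normal shutdown unmounts them.
            ;;
    *)
            echo "Usage: $0 {start|stop}"
            exit $SMF_EXIT_ERR_FATAL
            ;;
    esac

    exit $SMF_EXIT_OK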

Is there a better way of solving this issue?

How is Sun likely to resolve this, so that I can keep doing more or
less the same thing and have it just work when the new patches come out?

Thanks

Peter
