In general you should not allow a Solaris system to be both the NFS server and
an NFS client for the same filesystem, irrespective of whether zones are
involved. Among other problems, you can run into kernel deadlocks in some
(rare) circumstances. This is documented in the NFS administration docs.
James C. McPherson wrote:
The ws command hates it - hmm, the underlying device for
/scratch is /scratch; maybe if I loop around stat()ing
it, it'll turn into a pumpkin
:-)
As does dmake, which is a real PITA for a developer!
Ian
___
James C. McPherson wrote:
You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require Qfs to solve - since you're operating
within just one kernel instance.
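For readers unfamiliar with how that is set up, a minimal sketch of loopback
(lofs) mounting one global-zone directory into more than one zone might look
like the following. The zone name (zone1) and the backing path
(/export/scratch) are illustrative assumptions, not taken from this thread.

```shell
# Add a lofs mount of /export/scratch (global zone) at /scratch in zone1.
# Repeat the same block for each additional zone that should see it;
# all zones then share the one filesystem through a single kernel instance.
zonecfg -z zone1 <<EOF
add fs
set dir=/scratch
set special=/export/scratch
set type=lofs
end
commit
EOF
```

The mount appears in the zone the next time it boots (or after a reboot of a
running zone).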
Is there any significant performance
Bob Scheifler wrote:
James C. McPherson wrote:
You can definitely loopback mount the same fs into multiple
zones, and as far as I can see you don't have the multiple-writer
issues that otherwise require Qfs to solve - since you're operating
within just one kernel instance.
Is there any
You probably want to share pool/home as an NFS share, then mount it in
the zones. The ZFS file system itself can't actually be mounted at
multiple mountpoints; it's not a shared filesystem like NFS or QFS.
zfs set sharenfs=on pool/home
then in the zones
mount globalzonehost:/home /home
Where
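Put together, the NFS approach described above amounts to something like the
sketch below. The hostname "globalzonehost" and dataset "pool/home" come from
the message; adding -F nfs just makes the filesystem type explicit on Solaris.

```shell
# In the global zone: export the dataset over NFS.
zfs set sharenfs=on pool/home

# Inside each zone: mount the exported filesystem.
mount -F nfs globalzonehost:/home /home
```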
Well, ignore my post, a kernel engineer would know. I had no idea you
could loopback mount the same filesystem into multiple zones, or am I
missing something? This would certainly be more efficient than using NFS.
Lou
James C. McPherson wrote:
Bo Granlund wrote:
Hi,
[Sorry for
Lou Springer wrote:
Well, ignore my post, a kernel engineer would know. I had no idea you
could loopback mount the same filesystem into multiple zones, or am I
missing something? This would certainly be more efficient than using NFS.
Hi Lou,
no need to disparage yourself (at least in