On 08/02/09 18:25, Trevor Pretty wrote:
Ketan
Don't you need to do this?
Adding *ZFS* Volumes to a *Non*-*Global* *Zone*
http://docs.sun.com/app/docs/doc/819-5461/gbebi?l=en&a=view&q=ZFS+non+global+zone
If you've just mounted /oradb and it has other file systems mounted on
directories within that tree, then those file systems will also have to
be made available in the non-global zone.
Example:
You have /oradb mounted on c0t0d0s0
You have /oradb/fred/foo on c1t1d0s3
Then, when you cd to /oradb/fred/foo, you will not be able to get to it
unless c1t1d0s3 is also "accessible" in the zone.
There are various ways of doing this; one is "loopback mounts":
http://docs.sun.com/app/docs/doc/817-1592/gbnyo?l=en&a=view&q=loopback+mounts+global+zone
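For example (a sketch only: it assumes the zone is named zone1 with
zonepath /zones/zone1, neither of which appears in the original mail),
the nested file system could be loopback-mounted into a running zone
from the global zone:

```
global# mount -F lofs /oradb/fred/foo /zones/zone1/root/oradb/fred/foo
```

The target directory must already exist under the zone's root, and a
mount made this way does not survive a zone reboot; for a permanent
mount, add an fs resource with zonecfg instead.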
The original email showed five (5) separate file systems.
#df -h | grep oradb
/dev/dsk/emcpower174c 17G 5.1G 11G 31% /oradb/archa
/dev/dsk/emcpower177c 58G 3.3G 54G 6% /oradb/index1
/dev/dsk/emcpower172c 9.9G 610M 9.2G 7% /oradb/redob
/dev/dsk/emcpower176c 58G 30G 27G 53% /oradb/index2
/dev/dsk/emcpower180c 58G 35G 23G 61% /oradb/data1
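Each of the five mounts above could be made permanent with a lofs fs
entry in zonecfg. A sketch, assuming the zone is named zone1 (substitute
your own zone name), shown here for /oradb/archa:

```
global# zonecfg -z zone1
zonecfg:zone1> add fs
zonecfg:zone1:fs> set dir=/oradb/archa
zonecfg:zone1:fs> set special=/oradb/archa
zonecfg:zone1:fs> set type=lofs
zonecfg:zone1:fs> end
zonecfg:zone1> exit
```

Repeat the "add fs" block for each of the remaining file systems
(/oradb/index1, /oradb/redob, /oradb/index2, /oradb/data1), then reboot
the zone so the mounts take effect.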
On Solaris 10 5/09:
global# zonecfg -z zone1 info fs
fs:
dir: /a
special: /mydata
raw not specified
type: lofs
options: []
global# df -k |grep mydata
/dev/dsk/c1t0d0s7 466295962 65553 461567450 1% /mydata/a
/dev/dsk/c1t0d0s0 20177203 18301885 1673546 92% /mydata/os
global# mount |grep zone1 |grep mydata
/zones/zone1/root/a on /mydata read/write/setuid/devices/dev=800005 on
Mon Aug 3 08:13:14 2009
zone1# ls -l /a
total 4
drwxr-xr-x 3 root root 512 Aug 3 08:16 a
drwxr-xr-x 34 root root 1024 Jun 17 09:12 os
zone1# df -k /a
Filesystem kbytes used avail capacity Mounted on
/a 20177203 5140680 14834751 26% /a
zone1# df -k /a/a
df: Could not find mount point for /a/a
I imagine this is because both 'a' and 'os' are mounted in the global
zone but not mounted in the non-global zone. File operations on the
files work; however, file-system-level operations such as df fail,
because the mounts aren't actually made on behalf of the non-global
zone.
So if I do a separate mount for each, it works as you expected.
global# zonecfg -z zone1 info fs
fs:
dir: /a/a
special: /mydata/a
raw not specified
type: lofs
options: []
fs:
dir: /a/os
special: /mydata/os
raw not specified
type: lofs
options: []
global# mount |grep zone1 |grep mydata
/zones/zone1/root/a/a on /mydata/a read/write/setuid/devices/dev=800007
on Tue Aug 4 17:40:44 2009
/zones/zone1/root/a/os on /mydata/os
read/write/setuid/devices/dev=800000 on Tue Aug 4 17:40:44 2009
zone1# df -k /a/a
Filesystem kbytes used avail capacity Mounted on
/a/a 466295962 65553 461567450 1% /a/a
zone1# df -k /a/os
Filesystem kbytes used avail capacity Mounted on
/a/os 20177203 18301885 1673546 92% /a/os
I guess this is a subtle difference: while most other things work pretty
much the same between the two configurations, df does not. df works on
file systems, and from zone1's perspective there is only one file system
in the first case, but two in the second.
Steffen
Ketan wrote:
Yes, you are correct. I'm able to access them from the local zone, but the only problem is that if I cd into any of the /oradb/* file systems I get "unable to get mount point".
Is there any way I can make the df -h output from the global zone visible in the local zone via some kind of shared file system?
--
cid:[email protected]*Trevor Pretty *| Technical Account
Manager | +64 9 639 0652 | +64 21 666 161
*Eagle Technology Group Ltd.
*Gate D, Alexandra Park, Greenlane West, Epsom
Private Bag 93211, Parnell, Auckland
*/
*//*
*//*///*
www.eagle.co.nz <http://www.eagle.co.nz/>
This email is confidential and may be legally privileged. If received in
error please destroy and immediately notify us.
------------------------------------------------------------------------
_______________________________________________
zones-discuss mailing list
[email protected]