Re: [zfs-discuss] mounting during boot

2006-09-17 Thread James C. McPherson

Krzys wrote:
...

So my system is booting up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode...
somehow the system could not mount these two items from vfstab:

/d/d2/downloads     -  /d/d2/web/htdocs/downloads  lofs  2  yes  -
/d/d1/home/cw/pics  -  /d/d2/web/htdocs/pics       lofs  2  yes  -

I could not log in and do anything; I had to log in through the console,
clear the maintenance state on svc:/system/filesystem/local:default, and
then all my services started coming up and the system was no longer in
single-user mode...


That sucks a bit, since how can I mount both UFS drives, then mount ZFS,
and then get the lofs mountpoints after that?



...

To resolve your lofs mount issues I think you need to set the
mountpoint property for the various filesystems in the pool. To do
that, use the zfs command, e.g.:


#  zfs set mountpoint=/d/d2 nameofpool

(you didn't actually mention your pool's name).

On my system, I have a zfs called inout/kshtest and its
mountpoint is


$ zfs get mountpoint inout/kshtest
NAME           PROPERTY    VALUE           SOURCE
inout/kshtest  mountpoint  /inout/kshtest  default




You should also have a look at the legacy mountpoint option in the zfs
manpage, which provides more details on how to get zpools and zfs
filesystems integrated into your system.
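
For example, one way to guarantee that /d/d2 is mounted before your
lofs filesystems would be something like this (I'm using "nameofpool"
as a placeholder again, since you didn't mention your pool's name):

#  zfs set mountpoint=legacy nameofpool

and then add an entry for it to /etc/vfstab above the lofs lines, so
that mountall brings it up before the loopback mounts:

nameofpool  -  /d/d2  zfs  -  yes  -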






James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/pub/2/1ab/967

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] mounting during boot

2006-09-16 Thread Krzys
Hello everyone, I just wanted to play with ZFS a bit before I start using
it at my workplace on servers, so I set it up on my Solaris 10 U2 box.
I used to have all my disks mounted as UFS and everything was fine. My
/etc/vfstab looked like this:

#
fd                  -                   /dev/fd                     fd     -  no   -
/proc               -                   /proc                       proc   -  no   -
/dev/dsk/c1t0d0s1   -                   -                           swap   -  no   -
/dev/dsk/c1t0d0s0   /dev/rdsk/c1t0d0s0  /                           ufs    1  no   logging
/dev/dsk/c1t0d0s6   /dev/rdsk/c1t0d0s6  /usr                        ufs    1  no   logging
/dev/dsk/c1t0d0s5   /dev/rdsk/c1t0d0s5  /var                        ufs    1  no   logging
/dev/dsk/c1t0d0s7   /dev/rdsk/c1t0d0s7  /d/d1                       ufs    2  yes  logging
/devices            -                   /devices                    devfs  -  no   -
ctfs                -                   /system/contract            ctfs   -  no   -
objfs               -                   /system/object              objfs  -  no   -
swap                -                   /tmp                        tmpfs  -  yes  -
/dev/dsk/c1t1d0s7   /dev/rdsk/c1t1d0s7  /d/d2                       ufs    2  yes  logging
/d/d2/downloads     -                   /d/d2/web/htdocs/downloads  lofs   2  yes  -
/d/d1/home/cw/pics  -                   /d/d2/web/htdocs/pics       lofs   2  yes  -

So I decided to put the /d/d2 drive on ZFS: I created my pool, then created
a ZFS filesystem and mounted it under /d/d2 while I copied the content of
/d/d2 over to the new ZFS, and then removed the old entry from the vfstab file.
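
From memory, the steps were roughly along these lines (the pool name and
disk are only placeholders here, not my exact commands):

#  zpool create mypool c2t0d0
#  zfs set mountpoint=/d/d2 mypool
(then copied the old /d/d2 content into the new filesystem and removed the
/d/d2 line from /etc/vfstab)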


OK, so now the line that says:
/dev/dsk/c1t1d0s7  /dev/rdsk/c1t1d0s7  /d/d2  ufs  2  yes  logging
is commented out of my vfstab file. I rebooted my system just to get
everything started the way I wanted (I had brought all the webservers and
everything else down for the duration of the copy so that nothing was
accessing the /d/d2 drive).


So my system is booting up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode... somehow
the system could not mount these two items from vfstab:

/d/d2/downloads     -  /d/d2/web/htdocs/downloads  lofs  2  yes  -
/d/d1/home/cw/pics  -  /d/d2/web/htdocs/pics       lofs  2  yes  -

I could not log in and do anything; I had to log in through the console,
clear the maintenance state on svc:/system/filesystem/local:default, and
then all my services started coming up and the system was no longer in
single-user mode...
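
(For the record, clearing the maintenance state was something like the
following; I'm quoting the command from memory:)

#  svcadm clear svc:/system/filesystem/local:default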


That sucks a bit, since how can I mount both UFS drives, then mount ZFS,
and then get the lofs mountpoints after that?


Also, if certain disks did not mount, I used to be able to go to /etc/vfstab
and see what was going on. Now, since ZFS does not use vfstab, how can I know
what was or was not mounted before the system went down? Sometimes drives go
bad, and sometimes certain disks, such as backup disks, are commented out in
vfstab; with ZFS it is all controlled through the command line. What if I do
not want to mount something at boot time? How can I distinguish what is
supposed to be mounted at boot and what is not, using zfs list? Is there a
config file where I can just comment out a few lines and then mount those
filesystems at some time other than boot?


Thanks for any suggestions... and sorry if this is the wrong group to post
such a question, since it is not about OpenSolaris but about ZFS on
Solaris 10 Update 2.


Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss