Re: [zfs-discuss] mounting during boot

2006-09-17 Thread James C. McPherson

Krzys wrote:
...

So my system is booting up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode...
Somehow the system could not mount these two entries from vfstab:
/d/d2/downloads     -   /d/d2/web/htdocs/downloads  lofs  2  yes  -
/d/d1/home/cw/pics  -   /d/d2/web/htdocs/pics       lofs  2  yes  -
I could not log in and do anything; I had to log in through the console, take
my service svc:/system/filesystem/local:default out of maintenance mode,
clear the maintenance state, and then all my services started up and the
system was no longer in single-user mode...


That sucks a bit, since how can I mount both UFS drives, then mount ZFS,
and only then get the lofs mount points mounted afterwards?



...

To resolve your lofs mount issues I think you need to set the
mountpoint property for the various filesystems in the pool. To do
that, use the zfs command, e.g.:


#  zfs set mountpoint=/d/d2 nameofpool

(you didn't actually mention your pool's name).

On my system, I have a ZFS filesystem called inout/kshtest, and its
mountpoint is


$ zfs get mountpoint inout/kshtest
NAME           PROPERTY    VALUE           SOURCE
inout/kshtest  mountpoint  /inout/kshtest  default




You should also have a look at the legacy mountpoint option in the zfs
manpage, which provides more details on how to get zpools and zfs
filesystems integrated into your system.
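
For example, a minimal sketch of the legacy approach for one of your
filesystems (the pool and dataset names below are guesses, since you
didn't post them):

#  zfs set mountpoint=legacy mypool/d2
#  grep /d/d2 /etc/vfstab
mypool/d2   -   /d/d2   zfs   -   yes   -

With mountpoint=legacy the filesystem is mounted via /etc/vfstab by
svc:/system/filesystem/local:default, so you can put your lofs entries
after it in vfstab and they should come up once /d/d2 is mounted.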






James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/pub/2/1ab/967

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: create ZFS pool(s)/volume(s) during jumpstart

2006-09-17 Thread Simon-Bernard Drolet
Hello,

From a couple of tests I've done (not totally finished yet!), you should look
at the 'zpool create -R /a' option to create the ZFS pool under /a from a
finish script. The mountpoint property will then be relative to /a.

I've done something like this when in the mini-root (launch a ksh from a
finish script to interactively try things):

zpool create -f -R /a myzfspool mirror c0t0d0s5 c0t2d0s5

Then I did:

zfs create myzfspool/opt
zfs set mountpoint=/opt myzfspool/opt

Then, I did

zpool export myzfspool

Afterwards, I wrote a script, /a/etc/rcS.d/S10zfs, which is simply:
#!/usr/bin/ksh
#
# Import the ZFS pool that the finish script created and exported.
case "$1" in
  start)
        /sbin/zpool import -f myzfspool
        ;;
esac

I rebooted and got this:

Filesystem            kbytes    used   avail capacity  Mounted on
/dev/md/dsk/d0        493527   75243  368932    17%    /
/devices                   0       0       0     0%    /devices
ctfs                       0       0       0     0%    /system/contract
proc                       0       0       0     0%    /proc
mnttab                     0       0       0     0%    /etc/mnttab
swap                 1338496     368 1338128     1%    /etc/svc/volatile
objfs                      0       0       0     0%    /system/object
/dev/md/dsk/d60      4130982  125515 3964158     4%    /usr
fd                         0       0       0     0%    /dev/fd
/dev/md/dsk/d30      1017831    5440  951322     1%    /var
swap                 1338128       0 1338128     0%    /tmp
swap                 1338144      16 1338128     1%    /var/run
myzfspool           70189056      24 70188938    1%    /myzfspool
myzfspool/opt       70189056      24 70188938    1%    /opt
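
Putting it all together, a rough finish-script sketch of the above (same
pool name and slices as in my test; I haven't run it as one script, so
treat it as a starting point rather than a finished solution):

#!/usr/bin/ksh
# JumpStart finish script: create a mirrored pool under /a, set up /opt,
# then export the pool so the S10zfs script can import it at first boot.
zpool create -f -R /a myzfspool mirror c0t0d0s5 c0t2d0s5
zfs create myzfspool/opt
zfs set mountpoint=/opt myzfspool/opt
zpool export myzfspool

# Install the boot-time import script on the newly installed system.
cat > /a/etc/rcS.d/S10zfs <<'EOF'
#!/usr/bin/ksh
#
# Import the ZFS pool that the finish script created and exported.
case "$1" in
  start)
        /sbin/zpool import -f myzfspool
        ;;
esac
EOF
chmod 744 /a/etc/rcS.d/S10zfs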


Hope this helps...

Simon.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Re: Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-17 Thread Robert Milkowski
Hello Darren,

Thursday, September 14, 2006, 5:42:20 PM, you wrote:

  If you *never* want to import a pool automatically on reboot you just have 
  to delete the
  /etc/zfs/zpool.cache file before the zfs module is loaded.
  This could be integrated into SMF.
 
 Or you could always use import -R / create -R for your pool management.  Of 
 course, there's no way to set a global default for these, so you have to 
 remember it each time, making the SMF solution more attractive.

DD Perfect.  (although I have to try it).  In a cluster framework, the
DD cluster can remember to do it each time, so that shouldn't be an issue.



And that's exactly what SC32 does.
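
For anyone who wants to try this by hand, a minimal sketch of the -R
approach (the pool, device and alternate-root names below are made up):

# create the pool with an alternate root
zpool create -R /mnt mypool mirror c1t0d0 c2t0d0

# later, import it on whichever node should currently own it
zpool import -R /mnt mypool

Because the pool is created/imported with an alternate root, it is not
recorded in /etc/zfs/zpool.cache, so it will not be auto-imported at the
next reboot.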


-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-17 Thread Robert Milkowski
Hello James,


I believe that storing the hostid, etc. in the label and checking whether it
matches on auto-import is the right solution.
Until it's implemented you can use -R right now with home-grown clusters
and not worry about auto-import.

However, maybe an (optional) SCSI reservation on a pool would be a
good idea? Of course as an extra switch during/after import.
I know not all devices support it, but still. ZFS would either
reserve all disks in a pool or none.


-- 
Best regards,
 Robert                          mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: low disk performance

2006-09-17 Thread Gino Ruopolo
Another test, same setup.


SOLARIS10:

zpool/a   filesystem containing over 10 million subdirs, each containing 10
files of about 1k
zpool/b   empty filesystem

rsync -avx  /zpool/a/* /zpool/b

time:  14 hours   (iostat showing %b = 100 for each LUN in the zpool)

FreeBSD:
/vol1/a   dir containing over 10 million subdirs, each containing 10 files
of about 1k
/vol1/b   empty dir

rsync -avx /vol1/a/* /vol1/b

time: 1h 40m !!

Also, a zone running on zpool/zone1 was almost completely unusable because of
the I/O load.
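
For anyone who wants to reproduce a (much smaller) version of this test,
a rough ksh sketch (the paths and the scaled-down counts are made up):

#!/usr/bin/ksh
# Populate /zpool/a with many directories of ~1k files, then time an
# rsync into /zpool/b while watching the disks with 'iostat -xn 5'.
i=0
while [ $i -lt 10000 ]; do              # 10,000 dirs instead of 10 million
    d=/zpool/a/dir$i
    mkdir -p $d
    j=0
    while [ $j -lt 10 ]; do
        dd if=/dev/urandom of=$d/file$j bs=1k count=1 2>/dev/null
        j=$((j + 1))
    done
    i=$((i + 1))
done

time rsync -avx /zpool/a/* /zpool/b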
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool df mypool

2006-09-17 Thread Krzys
The man page does say that 'zpool df home' works, but when I type it, it does
not work and I get the following error:

zpool df mypool
unrecognized command 'df'
usage: zpool command args ...
where 'command' is one of the following:

create  [-fn] [-R root] [-m mountpoint] pool vdev ...
destroy [-f] pool

add [-fn] pool vdev ...

list [-H] [-o field[,field]*] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...

online pool device ...
offline [-t] pool device ...
clear pool [device]

attach [-f] pool device new_device
detach pool device
replace [-f] pool device [new_device]

scrub [-s] pool ...

import [-d dir] [-D]
import [-d dir] [-D] [-f] [-o opts] [-R root] -a
import [-d dir] [-D] [-f] [-o opts] [-R root ] pool | id [newpool]
export [-f] pool ...
upgrade
upgrade -v
upgrade -a | pool

I am running Solaris 10 Update 2. Is my man page out of date or is my zfs not up 
to date?
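
(For what it's worth, the kind of space information a 'zpool df' would
report is available through other commands on this release, e.g.:

zpool list mypool
zfs list -r mypool
df -h /mypool

so this looks more like a stale manpage reference than something missing
from the install.)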


Thanks.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: low disk performance

2006-09-17 Thread Krzys
That is bad, such a big time difference... 14 hrs vs less than 2 hrs... Did you
have the same hardware setup? I did not follow the thread...


Chris


On Sun, 17 Sep 2006, Gino Ruopolo wrote:


Another test, same setup.


SOLARIS10:

zpool/a   filesystem containing over 10 million subdirs, each containing 10
files of about 1k
zpool/b   empty filesystem

rsync -avx  /zpool/a/* /zpool/b

time:  14 hours   (iostat showing %b = 100 for each LUN in the zpool)

FreeBSD:
/vol1/a   dir containing over 10 million subdirs, each containing 10 files
of about 1k
/vol1/b   empty dir

rsync -avx /vol1/a/* /vol1/b

time: 1h 40m !!

Also, a zone running on zpool/zone1 was almost completely unusable because of
the I/O load.




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss