Re: [zfs-discuss] Re: low disk performance

2006-09-17 Thread Krzys
That is bad, such a big time difference... 14 hrs vs. less than 2 hrs. Did you
have the same hardware setup? I did not follow the thread closely...


Chris


On Sun, 17 Sep 2006, Gino Ruopolo wrote:


Other test, same setup.


SOLARIS10:

zpool/a   filesystem containing over 10 million subdirs, each containing 10
files of about 1k
zpool/b   empty filesystem

rsync -avx  /zpool/a/* /zpool/b

time:  14 hours   (iostat showing %b = 100 for each LUN in the zpool)

FreeBSD:
/vol1/a   dir containing over 10 million subdirs, each containing 10 files
of about 1k
/vol1/b   empty dir

rsync -avx /vol1/a/* /vol1/b

time: 1h 40m !!

Also, a zone running on zpool/zone1 was almost completely unusable because of
the I/O load.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool df mypool

2006-09-17 Thread Krzys
The man page does say that "zpool df home" works, but when I type it, it does not
work and I get the following error:

zpool df mypool
unrecognized command 'df'
usage: zpool command args ...
where 'command' is one of the following:

create  [-fn] [-R root] [-m mountpoint] <pool> <vdev> ...
destroy [-f] <pool>

add [-fn] <pool> <vdev> ...

list [-H] [-o field[,field]*] [pool] ...
iostat [-v] [pool] ... [interval [count]]
status [-vx] [pool] ...

online <pool> <device> ...
offline [-t] <pool> <device> ...
clear <pool> [device]

attach [-f] <pool> <device> <new_device>
detach <pool> <device>
replace [-f] <pool> <device> [new_device]

scrub [-s] <pool> ...

import [-d dir] [-D]
import [-d dir] [-D] [-f] [-o opts] [-R root] -a
import [-d dir] [-D] [-f] [-o opts] [-R root] <pool | id> [newpool]
export [-f] <pool> ...
upgrade
upgrade -v
upgrade <-a | pool>

I am running Solaris 10 Update 2. Is my man page out of date, or is my ZFS not up
to date?
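
For what it's worth, the space information a "zpool df" would have reported can be
pulled from subcommands that do appear in the usage output above; a quick sketch,
using the pool name from the error message:

# pool-level size/used/available summary
zpool list mypool

# per-dataset usage, closer to what df shows
zfs list -r mypool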


Thanks.

Chris

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS layout on hardware RAID-5?

2006-09-17 Thread Marion Hakanson
Greetings,

I followed closely the thread "ZFS and Storage", and other discussions
about using ZFS on hardware RAID arrays, since we are deploying ZFS in
a similar situation here.  I'm sure I'm oversimplifying, but the consensus
for general filesystem-type storage needs, as I've read it, tends toward
doing ZFS RAID-Z (or RAID-Z2) on LUNs consisting of hardware RAID-0
stripes.  This gives good performance, and allows ZFS self-healing
properties, with reasonable space utilization, while taking advantage
of the NV cache in the array for write acceleration.  Well, that's the
approach that seems to match our needs, anyway.

However, the disk array we have (not a new purchase) is a Hitachi (HDS)
9520V, consisting mostly of SATA drives.  This array does not support
RAID-0 for some reason (one can guess that HDS does not want to provide
the ammunition for self-foot-shooting involving SATA drives).  Our pressing
question is how to configure a shelf of 15 400GB SATA drives, with the idea
that we may add another such shelf within a year.

Prior to ZFS, we likely would've set up two 6D+1P RAID-5 groups on that
shelf, leaving a single hot-spare, and applied UFS or SAM-QFS filesystems
onto hardware LUNs sliced out of those groups.  The use of two smaller
RAID groups seems advisable given the likely long reconstruct time on
these 400GB 7200RPM drives.

Some options we're considering with ZFS are:

(0) One 13D+1P h/w RAID-5 group, one hot-spare, configured as 5-9 LUNs.
Set up a ZFS pool of one RAID-Z group from all those LUNs.  With a
6-LUN RAID-Z group we should have ~4333GB available space (a 9-LUN group
gives ~4622GB).  Some block-level recovery is available, but an extra
helping of RAID-Z space overhead is lost.

(1) Two 6D+1P h/w RAID-5 groups, configured as 1 LUN each.  Run a simple
striped ZFS pool consisting of those two LUNs.  The "con" here is that
there is no ZFS self-healing capability, though we do gain the other
ZFS features.  We rely on tape backups for any block-level corruption
recovery necessary.  The "pro" is that there is no RAID-Z space overhead;
~4800GB available space.

(2) Same two h/w RAID-5 groups as (1), but configured as some larger
number of LUNs, say 5-9 LUNs each.  Set up a ZFS pool of two RAID-Z
groups consisting of those 5-9 LUNs each.  We gain some ZFS self-healing
here for block-level issues, but sacrifice some space (again, double
the single-layer RAID-5 space overhead).  With two 6-LUN RAID-Z groups,
there should be ~4000GB available space.  With 9-LUN RAID-Z groups, ~4266GB.

(3) Three 4D+1P h/w RAID-5 groups, no hot spare, mapped to one LUN each.
Set up a ZFS pool of one RAID-Z group consisting of those three LUNs.
Only ~3200GB available space, but what looks like very good resiliency
in the face of multiple disk failures.

(4) Same three h/w RAID groups as (3) above, but configured as 5-9 LUNs each.
ZFS pool of RAID-Z groups made from those LUNs.  With 9-LUN RAID-Z groups,
this looks like the same ~4266GB as (2) above.
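
To make option (2) concrete, here is roughly what the pool creation would look
like.  This is only a sketch: the pool name and the c#t#d# device names are
placeholders for the 2 x 6 LUNs carved out of the two RAID-5 groups, not our
actual devices.

# option (2) sketch: two RAID-Z groups, one per h/w RAID-5 group, 6 LUNs each
zpool create tank \
    raidz c2t0d0 c2t0d1 c2t0d2 c2t0d3 c2t0d4 c2t0d5 \
    raidz c2t1d0 c2t1d1 c2t1d2 c2t1d3 c2t1d4 c2t1d5

ZFS then stripes across the two RAID-Z groups, so the layout roughly mirrors the
two-RAID-group scheme we would have used before.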


One of the unknowns I have, which hopefully the more experienced folks
can help with, is related to (0), (2) and (4) above.  I'm unsure of what
happens should a h/w RAID-5 group suffer a catastrophic problem, e.g. a
dual-drive failure.  Would all 5-9 LUNs on the failed RAID-5 group go away?
Or would just the affected blocks go away (two drives' worth), allowing
_some_ ZFS recovery to occur?  This makes the robustness of (0), (2) and
(4) uncertain to me.

Note that this particular pool of storage is intended to be served up
to clients via NFS and/or Samba, from a single T2000 file server.  We
hope to be able to scale this solution up to 100-200TB by adding arrays
or JBODs to the current storage.

Suggestions, discussion, advice are welcome.

Thanks and regards,

Marion



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: low disk performance

2006-09-17 Thread Gino Ruopolo
Other test, same setup.


SOLARIS10:

zpool/a   filesystem containing over 10 million subdirs, each containing 10
files of about 1k
zpool/b   empty filesystem

rsync -avx  /zpool/a/* /zpool/b

time:  14 hours   (iostat showing %b = 100 for each LUN in the zpool)

FreeBSD:
/vol1/a   dir containing over 10 million subdirs, each containing 10 files
of about 1k
/vol1/b   empty dir

rsync -avx /vol1/a/* /vol1/b

time: 1h 40m !!

Also, a zone running on zpool/zone1 was almost completely unusable because of
the I/O load.
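
For anyone trying to reproduce the measurement: the per-LUN %b figure came from
watching iostat while the rsync ran.  A sketch of the sort of invocation meant
here (the exact flags and interval are an example, not a transcript):

# extended per-device statistics every 5 seconds; %b is the busy column
iostat -xn 5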
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] no automatic clearing of "zoned" eh?

2006-09-17 Thread Robert Milkowski
Hello ozan,

Friday, September 15, 2006, 9:45:08 PM, you wrote:

osy> s10u2, once zoned, always zoned? i see that zoned property is not
osy> cleared after removing the dataset from a zone cfg or even
osy> uninstalling the entire zone... [right, i know how to clear it by
osy> hand, but maybe i am missing a bit of magic otherwise anodyne
osy> zonecfg et al.]

It's done that way on purpose: once a dataset has been delegated to a zone, the
global zone can no longer trust its contents, so the property is left set for the
administrator to clear deliberately.
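
For completeness, the clearing-by-hand that ozan mentions is just resetting the
property from the global zone once the dataset has been inspected (the dataset
name below is made up):

# only after checking the contents, since the zone may have planted
# setuid binaries or odd device files in there
zfs set zoned=off mypool/zonedata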

-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-17 Thread Robert Milkowski
Hello James,


I believe that storing the hostid, etc. in the label and checking whether it
matches on auto-import is the right solution.
Until that's implemented, you can use -R right now with home-grown clusters
and not worry about auto-import.

However, maybe doing an (optional) SCSI reservation on a pool would be a
good idea? Of course, as an extra switch during/after import.
I know not all devices support it, but still. ZFS would either reserve
all disks in a pool or none.
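
A minimal sketch of the -R approach mentioned above (the pool and device names
are invented): a pool created or imported with an alternate root is not recorded
in /etc/zfs/zpool.cache, so it will not be auto-imported on the next boot.

# on the node creating the pool
zpool create -R / clusterpool mirror c1t0d0 c1t1d0

# later, on whichever node should take the pool over
zpool import -R / clusterpool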


-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re[2]: [zfs-discuss] Re: Re: Comments on a ZFS multiple use of a pool,

2006-09-17 Thread Robert Milkowski
Hello Darren,

Thursday, September 14, 2006, 5:42:20 PM, you wrote:

>> > If you *never* want to import a pool automatically on reboot you just have 
>> > to delete the
>> > /etc/zfs/zpool.cache file before the zfs module is being loaded.
>> > This could be integrated into SMF.
>> 
>> Or you could always use import -R / create -R for your pool management.  Of 
>> course, there's no way to set a global default for these, so you have to 
>> remember it each time, making the SMF solution more attractive

DD> Perfect.  (although I have to try it).  In a cluster framework, the
DD> cluster can remember to do it each time, so that shouldn't be an issue.



And that's exactly what Sun Cluster 3.2 (SC32) does.
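
For reference, the manual step the quoted suggestion would wrap in boot-time
plumbing is simply removing the cache file; arranging for it to run before the
zfs module loads is the part that needs the SMF (or cluster framework) work:

# no zpool.cache at module load time means no pools are auto-imported
rm /etc/zfs/zpool.cache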


-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: create ZFS pool(s)/volume(s) during jumpstart

2006-09-17 Thread Simon-Bernard Drolet
Hello,

From a couple of tests I've done (not totally finished yet!), you should look
at the zpool create -R /a option to create the ZFS pool under /a from a finish
script. The mountpoint attribute will be relative to /a.

I've done something like this while in the mini-root (launching a ksh from a
finish script to try things interactively):

zpool create -f -R /a myzfspool mirror c0t0d0s5 c0t2d0s5

Then I did a

zfs create myzfspool/opt
zfs set mountpoint=/opt myzfspool/opt

Then, I did

zpool export myzfspool

Afterwards, I wrote a script in /a/etc/rcS.d/S10zfs that is simply:
#!/usr/bin/ksh
#
# Import the pool that the finish script created and then exported,
# so its filesystems get mounted at boot.
case $1 in
  start)
    /sbin/zpool import -f myzfspool
    ;;
esac

I rebooted and got this:

Filesystem            kbytes    used    avail capacity  Mounted on
/dev/md/dsk/d0        493527   75243   368932    17%    /
/devices                   0       0        0     0%    /devices
ctfs                       0       0        0     0%    /system/contract
proc                       0       0        0     0%    /proc
mnttab                     0       0        0     0%    /etc/mnttab
swap                 1338496     368  1338128     1%    /etc/svc/volatile
objfs                      0       0        0     0%    /system/object
/dev/md/dsk/d60      4130982  125515  3964158     4%    /usr
fd                         0       0        0     0%    /dev/fd
/dev/md/dsk/d30      1017831    5440   951322     1%    /var
swap                 1338128       0  1338128     0%    /tmp
swap                 1338144      16  1338128     1%    /var/run
myzfspool           70189056      24 70188938     1%    /myzfspool
myzfspool/opt       70189056      24 70188938     1%    /opt


Hope this helps...

Simon.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] mounting during boot

2006-09-17 Thread James C. McPherson

Krzys wrote:
...

So my system is booting up and I cannot log in. Apparently my service
svc:/system/filesystem/local:default went into maintenance mode...
somehow the system could not mount these two items from vfstab:

/d/d2/downloads     -  /d/d2/web/htdocs/downloads  lofs  2  yes  -
/d/d1/home/cw/pics  -  /d/d2/web/htdocs/pics       lofs  2  yes  -

I could not log in and do anything; I had to log in through the console, take
the service svc:/system/filesystem/local:default out of maintenance, clear the
maintenance state, and then all my services started up and the system was
no longer in single-user mode...


That sucks a bit, since how can I mount both UFS drives, then mount ZFS,
and then get the lofs mountpoints after that?



...

To resolve your lofs mount issues, I think you need to set the
mountpoint property for the various filesystems in the pool. To do that,
use the zfs command, e.g.:


#  zfs set mountpoint=/d/d2 nameofpool

(you didn't actually mention your pool's name).

On my system, I have a ZFS filesystem called "inout/kshtest", and its
mountpoint is:


$ zfs get mountpoint inout/kshtest
NAME PROPERTY   VALUE  SOURCE
inout/kshtestmountpoint /inout/kshtest default




You should also have a look at the "legacy" option in the zfs
manpage, which provides more details on how to get zpools and
zfs integrated into your system.
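
For the boot-ordering question in particular, here is a sketch of the legacy
route (the pool and filesystem names are invented, since the real pool name
wasn't mentioned).  With mountpoint=legacy the ZFS filesystem is mounted from
/etc/vfstab just like the UFS ones, so the lofs entries that depend on it can
simply be listed after it:

# mount this filesystem via vfstab instead of automatically
zfs set mountpoint=legacy mypool/d2

# then in /etc/vfstab, before the lofs entries that need it:
# mypool/d2  -  /d/d2  zfs  -  yes  -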






James C. McPherson
--
Solaris kernel software engineer, system admin and troubleshooter
  http://www.jmcp.homeunix.com/blog
Find me on LinkedIn @ http://www.linkedin.com/pub/2/1ab/967

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss