Re: [zfs-discuss] True in U4? "Tar and cpio...save and restore ZFS File attributes and ACLs"

2009-10-01 Thread Dennis Clarke

> Ray Clark  wrote:
>
>> Joerg, Thanks.  As you (of all people) know, this area is quite a
>> quagmire.


> Be careful! Sun tar creates non standard and thus non portable archives
> with -E. Only star can read them.
>
>> My next problem is that I want to do an exhaustive file compare
>> afterwards, and diff is not large-file aware.
>
> This is what star implements
>
>> I always wonder if or how these applications that run across every OS
>> known to man such as star can possibly be able to have the right code to
>> work around the idiosyncrasies and exploit the capabilities of all of
>> those OS's.  Should I consider star for the compare?  For the copy?
>> (Recognizing that it cannot do the ACLs, but I don't have those).
>
> Star of course supports ACLs. Star does not yet support ZFS ACLs, and this
> is just a result of the fact that Sun repeated the same sort of design bugs
> from its first attempt at ACL support when it added ACLs to ZFS. Future star
> versions will support ZFS ACLs as well.
>
> Jörg

I use star a great deal, daily in fact. I have two versions that I am
using because one of them seems to mysteriously create ACL's when I
perform a copy from one directory to another.

The two versions that I have are :

# /opt/csw/bin/star --version
star: star 1.5 (i386-pc-solaris2.8)

Options: acl find remote

Copyright (C) 1985, 88-90, 92-96, 98, 99, 2000-2009 Jörg Schilling
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

# /opt/schily/bin/star --version
star: star 1.5a89 (i386-pc-solaris2.8)

Options: acl find remote

Copyright (C) 1985, 88-90, 92-96, 98, 99, 2000-2008 Jörg Schilling
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.


One of them is built by you ( 1.5 ) and the other by me ( 1.5a89 ) with
smake and Studio 11.

Recently I had to copy a large collection of files from one dir to another
where the source was on ZFS and the destination was an NFSv4 share. The
end result had ACLs in it that should not be there :

# ls -lap
total 94
drwxr-xr-x   6 16411  csw    6 Sep 29 00:51 ./
drwxr-xr-x  22 16411  csw   38 Sep 25 10:51 ../
drwxr-xr-x   5 16411  csw   12 Sep 29 02:32 gcc-4.3.4-SunOS_5.10-corert/
drwxr-xr-x   5 16411  csw   12 Sep 29 01:20 gcc-4.3.4-SunOS_5.10-g++rt/
drwxr-xr-x+  5 16411  csw   19 Sep 29 03:50 gcc-4.3.4_SunOS_5.10-pkg/
drwxr-xr-x+ 22 root   root  31 Sep 24 20:58 gcc-4.3.4_SunOS_5.10-release/

This seems impossible but this is what I see :

# getfacl gcc-4.3.4_SunOS_5.10-pkg
File system doesn't support aclent_t style ACL's.
See acl(5) for more information on Solaris ACL support.

The source dir looked like this :

$ ls -ladE gcc-4.3.4_SunOS_5.10-pkg
drwxr-xr-x   3 root root   5 2009-09-25 10:49:46.081206911
+ gcc-4.3.4_SunOS_5.10-pkg

The output at the other end of a copy with star looks like this :

# ls -lVdE gcc-4.3.4_SunOS_5.10-pkg
drwxr-xr-x+  5 16411csw   19 2009-09-29 03:50:03.038056700
+ gcc-4.3.4_SunOS_5.10-pkg
owner@:-DaA--cC-s:--:allow
owner@:--:--:deny
group@:--a---c--s:--:allow
group@:-D-A---C--:--:deny
 everyone@:--a---c--s:--:allow
 everyone@:-D-A---C--:--:deny
owner@:--:--:deny
owner@:rwxp---A-W-Co-:--:allow
group@:-w-p--:--:deny
group@:r-x---:--:allow
 everyone@:-w-p---A-W-Co-:--:deny
 everyone@:r-x---a-R-c--s:--:allow

Both the source and destination were on ZFS but the destination was an NFS
mount on the host that performed the copy.

The command was very typical :

/opt/csw/bin/star -copy -p -acl -sparse -dump -C src_dir1 . dest_dir

I am still baffled by the apparent ACL on a dir entry within a ZFS
filesystem when the source never had ACLs at all.

What I see is this :

$ $HOME/bin/star_1.5a89 --version
star: star 1.5a89 (i386-pc-solaris2.8)

Options: acl find remote

Copyright (C) 1985, 88-90, 92-96, 98, 99, 2000-2008 Jörg Schilling
This is free software; see the source for copying conditions.  There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
$

$ mkdir /home/dclarke/test/destination

$ pwd
/build/dclarke

$ ls -ladin gcc-4.3.4_SunOS_5.10-release
509336 drwxr-xr-x  22 16411101   31 Sep 25 11:40
gcc-4.3.4_SunOS_5.10-release

$ $HOME/bin/star_1.5a89 -copy -p -acl -sparse -dump -time -fs=96m
-fifostats -C /build/dclarke gcc-4.3.4_SunOS_5.10-release
/home/dclarke/test/destination
gcc-4.3.4_SunOS_5.10-release/configure.lineno is sparse
gcc-4.3.4_SunOS_5.10-release/libiberty/hashtab.o is sparse
gcc-4.3.4_SunOS_5.10-release/libiberty/strverscmp.o is sparse
gcc-4.3.4_SunOS_5.10-release/libiberty/obj

Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Darren J Moffat

Ray Clark wrote:

Dynamite!

I don't feel comfortable leaving things implicit.  That is how misunderstandings happen.  


It isn't implicit; it is explicitly inherited. That is how ZFS is designed 
to (and does) work.



Would you please acknowledge that zfs send | zfs receive uses the checksum 
setting on the receiving pool instead of preserving the checksum algorithm used 
on the sending side?


For now it depends on whether or not you pass -R to 'zfs send'. 
Without the -R argument the send stream does not have any properties in 
it, so the received dataset will (by design) use those that would be used if 
it had been created by 'zfs create'.
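
As a rough sketch of the -R behaviour (untested here; the second snapshot and
the target dataset name are made up for illustration, using the dummy pool from
the transcript below), a locally set property should travel inside a
replication stream:

# zfs set checksum=fletcher4 dummy/home        (make checksum a local property on the sender)
# zfs snapshot dummy/home@2
# zfs send -R dummy/home@2 | zfs recv -F dummy/home.replica
# zfs get checksum dummy/home.replica          (expected: fletcher4, not the pool's sha256)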


In the future there will be a distinction between the local and the 
received values; see the recently (yesterday) approved case PSARC/2009/510:


http://arc.opensolaris.org/caselog/PSARC/2009/510/20090924_tom.erickson

Let's look at how it works just now:

portellen:pts/2# zpool create dummy c7t3d0
portellen:pts/2# zfs create dummy/home
portellen:pts/2# cp /etc/profile /dummy/home
portellen:pts/2# zfs get checksum dummy/home
NAMEPROPERTY  VALUE  SOURCE
dummy/home  checksum  on default
portellen:pts/2# zfs snapshot dummy/h...@1
portellen:pts/2# zfs set checksum=sha256 dummy
portellen:pts/2# zfs send dummy/h...@1 | zfs recv -F dummy/home.sha256
portellen:pts/2# zfs get checksum dummy/home.sha256
NAME   PROPERTY  VALUE  SOURCE
dummy/home.sha256  checksum  sha256 inherited from dummy

Now let's verify using zdb; we should have two plain-file blocks 
(/etc/profile fits in a single ZFS block): one from the original 
dummy/home and one from the newly received home.sha256.


portellen:pts/2# zdb -vvv -S user:all dummy
0	2048	1	ZFS plain file	fletcher4	uncompressed 
8040e8f120:a2c635bc0556:73b5ba539e9699:3b4d66984ac9d6b4
0	2048	1	ZFS plain file	SHA256	uncompressed 
57f1e8168c58e8cf:3b20be148f57852e:f72ee8e3358f:1bfae4ae0599577c




--
Darren J Moffat


Re: [zfs-discuss] True in U4? "Tar and cpio...save and restore ZFS File attributes and ACLs"

2009-10-01 Thread Joerg Schilling
Dennis Clarke  wrote:

> I use star a great deal, daily in fact. I have two versions that I am
> using because one of them seems to mysteriously create ACL's when I
> perform a copy from one directory to another.
>
> The two versions that I have are :
>
> # /opt/csw/bin/star --version
> star: star 1.5 (i386-pc-solaris2.8)

> # /opt/schily/bin/star --version
> star: star 1.5a89 (i386-pc-solaris2.8)

> One of them is built by you ( 1.5 ) and the other by me ( 1.5a89 ) with
> smake and Studio 11.

In star-1.5a89 ACLs don't work if you use -find.

> Recently I had to copy a large collection of files from one dir to another
> where the source was on ZFS and the destination was an NFSv4 share. The
> end result had ACLs in it that should not be there :
>
> # ls -lap
> total 94
> drwxr-xr-x   6 16411  csw    6 Sep 29 00:51 ./
> drwxr-xr-x  22 16411  csw   38 Sep 25 10:51 ../
> drwxr-xr-x   5 16411  csw   12 Sep 29 02:32 gcc-4.3.4-SunOS_5.10-corert/
> drwxr-xr-x   5 16411  csw   12 Sep 29 01:20 gcc-4.3.4-SunOS_5.10-g++rt/
> drwxr-xr-x+  5 16411  csw   19 Sep 29 03:50 gcc-4.3.4_SunOS_5.10-pkg/
> drwxr-xr-x+ 22 root   root  31 Sep 24 20:58 gcc-4.3.4_SunOS_5.10-release/

If you would like to investigate this problem, you would need to
keep the source directory tree and send the command line you used
for copying. As a next step, we would need to examine the tar
archive that star created.
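
As an illustration only (the archive path is made up, and whether the listing
prints the stored ACL entries may depend on the star version), one way to get
something inspectable is to write a real archive instead of using -copy, then
list it verbosely:

# /opt/csw/bin/star -c -acl -sparse f=/tmp/probe.tar -C src_dir1 .
# /opt/csw/bin/star -tv -acl f=/tmp/probe.tar | less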

> The source dir looked like this :
>
> $ ls -ladE gcc-4.3.4_SunOS_5.10-pkg
> drwxr-xr-x   3 root root   5 2009-09-25 10:49:46.081206911
> + gcc-4.3.4_SunOS_5.10-pkg
>
> The output at the other end of a copy with star looks like this :
>
> # ls -lVdE gcc-4.3.4_SunOS_5.10-pkg
> drwxr-xr-x+  5 16411csw   19 2009-09-29 03:50:03.038056700
> + gcc-4.3.4_SunOS_5.10-pkg
> owner@:-DaA--cC-s:--:allow
> owner@:--:--:deny
> group@:--a---c--s:--:allow
> group@:-D-A---C--:--:deny
>  everyone@:--a---c--s:--:allow
>  everyone@:-D-A---C--:--:deny
> owner@:--:--:deny
> owner@:rwxp---A-W-Co-:--:allow
> group@:-w-p--:--:deny
> group@:r-x---:--:allow
>  everyone@:-w-p---A-W-Co-:--:deny
>  everyone@:r-x---a-R-c--s:--:allow
>
> Both the source and destination were on ZFS but the destination was an NFS
> mount on the host that performed the copy.

It may be that you have hit a bug in Sun's NFS implementation.

Sun's tar implementation definitely had several critical bugs in 2005 related 
to ACLs. Given the fact that many critical bug reports I filed for Solaris did 
not end in a fix, it may be that the related bug still exists. The bug I am
talking about here is the one that causes unpacked files to get ACLs although 
the source file has no ACLs (this is independent of the underlying file 
system). The reason is that Sun tar does not _clear_ ACLs on files if the
source file has no ACLs. The destination file, however, may have inherited ACLs
from the directory. This is a serious security problem in Sun tar.

It could be that Sun's NFS implementation _creates_ ACLs when star sends a 
request to _clear_ the ACLs by establishing "base ACLs" that just contain
the UNIX file permissions. According to the Sun documentation, this should remove
existing ACLs, but it may do something else if there is a bug in the NFS 
implementation.

> The command was very typical :
>
> /opt/csw/bin/star -copy -p -acl -sparse -dump -C src_dir1 . dest_dir
>
> I am still baffled by the apparent ACL on a dir entry within a ZFS
> filesystem when the source never had ACLs at all.
>
> What I see is this :
>
> $ $HOME/bin/star_1.5a89 --version
> star: star 1.5a89 (i386-pc-solaris2.8)

Instead of using an old version with known bugs, you could just omit -acl 
with a recent version that has no known bugs.
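
In other words (untested sketch, using the same directories as the command
quoted above), the copy would simply drop the -acl flag:

/opt/csw/bin/star -copy -p -sparse -dump -C src_dir1 . dest_dir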

> /home/dclarke/bin/star_1.5a89: 0 blocks + 1593968128 bytes (total of
> 1593968128 bytes = 1556609.50k).
> /home/dclarke/bin/star_1.5a89: Total time 1735.341sec (897 kBytes/sec)
> $
>
> I should mention really poor performance also.

If you don't care about guaranteed integrity of the copy, you could try 
adding -no-fsync.
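
That would make the command from above look something like this (untested
sketch; -no-fsync trades the per-file fsync guarantee for speed):

/opt/csw/bin/star -copy -p -sparse -dump -no-fsync -C src_dir1 . dest_dir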

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] True in U4? "Tar and cpio...save and restore ZFS File attributes and ACLs"

2009-10-01 Thread Joerg Schilling
joerg.schill...@fokus.fraunhofer.de (Joerg Schilling) wrote:

> Dennis Clarke  wrote:

> It could be that Sun's NFS implementation _creates_ ACLs when star sends a 
> request to _clear_ the ACLs by establishing "base ACLs" that just contain
> the UNIX file permissins. From the Sun documentation, this needs to remove
> existing ACLs but it may do something else if there is a bug in the NFS 
> implementation.

I forgot to mention why this may be related to Sun tar:

If you test with buggy tools, you may not see that the OS kernel does
not behave correctly.

Jörg

-- 
 EMail:jo...@schily.isdn.cs.tu-berlin.de (home) Jörg Schilling D-13353 Berlin
   j...@cs.tu-berlin.de(uni)  
   joerg.schill...@fokus.fraunhofer.de (work) Blog: 
http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] Comments on home OpenSolaris/ZFS server

2009-10-01 Thread Eugen Leitl
On Wed, Sep 30, 2009 at 05:03:21PM -0700, Brandon High wrote:

> Supermicro has a 3 x 5.25" bay rack that holds 5 x 3.5" drives. This
> doesn't leave space for an optical drive, but I used a USB drive to
> install the OS and don't need it anymore.

I've had such a bay rack for years; it survived one big tower and is now
dwelling in a cheap Sharkoon case. The fan is a bit noisy, but then, the
server sits behind a couple of doors and serves the house LAN. It's
currently running Linux, but already has FreeNAS preinstalled on an IDE DOM.

-- 
Eugen* Leitl  http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


[zfs-discuss] Unable to import pool: invalid vdev configuration

2009-10-01 Thread Osvald Ivarsson
I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to my 
motherboard. The raid, a raidz called "rescamp", worked well until a power 
failure yesterday. I'm now unable to import the pool. I can't export the raid, 
since it isn't imported.

# zpool import rescamp
cannot import 'rescamp': invalid vdev configuration

# zpool import
  pool: rescamp
id: 12297694211509104163
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

rescamp UNAVAIL  insufficient replicas
  raidz1UNAVAIL  corrupted data
c15d0   ONLINE
c14d0   ONLINE
c14d1   ONLINE

I've tried using zdb -l on all three disks, but in all cases it fails to 
unpack the labels.

# zdb -l /dev/dsk/c14d0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3

If I run # zdb -l /dev/dsk/c14d0s0 I do find 4 labels, but c14d0, c14d1 and 
c15d0 are what I created the raid with. I do find labels this way for all three 
disks. Is this of any help?

# zdb -l /dev/dsk/c14d1s0

LABEL 0

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=9479723326726871122
nparity=1
metaslab_array=23
metaslab_shift=34
ashift=9
asize=3000574672896
is_log=0
children[0]
type='disk'
id=0
guid=9020535344824299914
path='/dev/dsk/c15d0s0'
devid='id1,c...@ast31000333as=9te0dglf/a'
phys_path='/p...@0,0/pci-...@11/i...@1/c...@0,0:a'
whole_disk=1
DTL=102
children[1]
type='disk'
id=1
guid=14384361563876398475
path='/dev/dsk/c14d0s0'
devid='id1,c...@asamsung_hd103uj=s13pjdws690618/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@0,0:a'
whole_disk=1
DTL=216
children[2]
type='disk'
id=2
guid=17774184411399278071
path='/dev/dsk/c14d1s0'
devid='id1,c...@ast31000333as=9te0de8w/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@1,0:a'
whole_disk=1
DTL=100

LABEL 1

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=9479723326726871122
nparity=1
metaslab_array=23
metaslab_shift=34
ashift=9
asize=3000574672896
is_log=0
children[0]
type='disk'
id=0
guid=9020535344824299914
path='/dev/dsk/c15d0s0'
devid='id1,c...@ast31000333as=9te0dglf/a'
phys_path='/p...@0,0/pci-...@11/i...@1/c...@0,0:a'
whole_disk=1
DTL=102
children[1]
type='disk'
id=1
guid=14384361563876398475
path='/dev/dsk/c14d0s0'
devid='id1,c...@asamsung_hd103uj=s13pjdws690618/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@0,0:a'
whole_disk=1
DTL=216
children[2]
type='disk'
id=2
guid=17774184411399278071
path='/dev/dsk/c14d1s0'
devid='id1,c...@ast31000333as=9te0de8w/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@1,0:a'
whole_disk=1
DTL=100

LABEL 2

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=9479723326726871122
nparity=1
metasl

Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Frank Middleton

On 10/01/09 05:08 AM, Darren J Moffat wrote:


In the future there will be a distinction between the local and the
received values see the recently (yesterday) approved case PSARC/2009/510:

http://arc.opensolaris.org/caselog/PSARC/2009/510/20090924_tom.erickson


Currently non-recursive incremental streams send properties and full
streams don't. Will the "p" flag reverse its meaning for incremental
streams? For my purposes the current behavior is the exact opposite
of what I need and it isn't obvious that the case addresses this
peculiar inconsistency without going through a lot of hoops. I suppose
the new properties can be sent initially so that subsequent incremental
streams won't override the possibly changed local properties, but that
seems so complicated :-). If I understand the case correctly, we can
now set a flag that says "ignore properties sent by any future incremental
non-recursive stream". This instead of having a flag for incremental
streams that says "don't send properties". What happens if sometimes
we do and sometimes we don't? Sounds like a static property when a
dynamic flag is really what is wanted and this is a complicated way of
working around a design inconsistency. But maybe I missed something :-)

So what would the semantics of the new "p" flag be for non-recursive
incremental streams?

Thanks -- Frank


Re: [zfs-discuss] "Hot Space" vs. hot spares

2009-10-01 Thread paul

> Yes, this is something that should be possible once we have bp rewrite
> (the
> ability to move blocks around).
[snip]
> FYI, I am currently working on bprewrite for device removal.
>
> --matt

That's very cool. I don't code (much/enough to help), but I'd like to help
if I can. If nothing else, my wife makes a mean chocolate chip cookie!
Think a batch of those would help?

Paul Archer



Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Ray Clark
Darren, thank you very much!  Not only have you answered my question, you have 
made me aware of a tool to verify with, and probably do a lot more (zdb).

Can you comment on my concern regarding what checksum is used in the base zpool 
before anything is created in it?  (No doubt my terminology is wrong, but you 
get the idea I am sure).  

The single critical feature of ZFS is debatably that every block on ZFS is 
checksummed to enable detection of corruption, but it appears that the user 
does not have the ability to choose the checksum for the highest levels of the 
pool itself.  Given the issue with fletcher2, this is of concern!  Since this 
"activity" was kicked off by a "Corrupt Metadata" ZFS-8000-CS, I am trying to 
move away from fletcher2.  Don't know if that was the cause, but my goal is to 
restore the "safety" that we went to ZFS for.

Is my understanding correct?
Are there ways to control the checksum algorithm on the empty zpool?

Thanks, again.

--Ray


[zfs-discuss] mounting rootpool

2009-10-01 Thread camps support
I have a system that is having issues with the pam.conf.

I have booted to cd but am stuck at how to mount the rootpool in single-user.  
I need to make some changes to the pam.conf but am not sure how to do this. 

Thanks in advance.


Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster

On 01.10.09 07:20, camps support wrote:

I have a system that is having issues with the pam.conf.

I have booted to cd but am stuck at how to mount the rootpool in single-user.  I need to make some changes to the pam.conf but am not sure how to do this. 


I think "zpool import" should be the first step for you.

HTH
--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread camps support
I did zpool import -R /tmp/z rootpool

It only mounted /export and /rootpool only had /boot and /platform.

I need to be able to get /etc and /var?


Re: [zfs-discuss] "Hot Space" vs. hot spares

2009-10-01 Thread Bob Friesenhahn

On Wed, 30 Sep 2009, Richard Elling wrote:

a big impact. With 2+ TB drives, the resilver time is becoming dominant.
As disks becoming larger and not faster, there will be a day when the
logistical response time will become insignificant. In other words, you
won't need a spare to improve logistical response, but you can consider
using spares to extend logistical response time to months. To take this
argument to its limit, it is possible that in our lifetime RAID boxes will
be disposable... the razor industry will be proud of us ;-)


Unless there is a dramatic increase in disk bandwidth, there is a 
point where disk storage size becomes unmanageable.  This is the point 
where we should transition from 3-1/2" disks to 2-1/2" disks with 
smaller storage sizes.  I see that 2-1/2" disks are already up to 
500GB.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Richard Elling

On Oct 1, 2009, at 7:10 AM, Ray Clark wrote:

Darren, thank you very much!  Not only have you answered my  
question, you have made me aware of a tool to verify, and probably  
do alot more (zdb).


Can you comment on my concern regarding what checksum is used in the  
base zpool before anything is created in it?  (No doubt my  
terminology is wrong, but you get the idea I am sure).


The single critical feature of ZFS is debatably that every block on  
ZFS is checksummed to enable detection of corruption, but it appears  
that the user does not have the ability to choose the checksum for  
the highest levels of the pool itself.  Given the issue with  
fletcher2, this is of concern!  Since this "activity" was kicked off  
by a "Corrupt Metadata" ZFS-8000-CS, I am trying to move away from  
fletcher2.  Don't know if that was the cause, but my goal is to  
restore the "safety" that we went to ZFS for.


Is my understanding correct?
Are there ways to control the checksum algorithm on the empty zpool?


You can set both zpool (-o option) and zfs (-O option) options when you
create the zpool. See zpool(1m)
 -- richard




Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Lori Alt

On 10/01/09 09:25, camps support wrote:

I did zpool import -R /tmp/z rootpool

It only mounted /export and /rootpool only had /boot and /platform.

I need to be able to get /etc and /var?
  
You need to explicitly mount the root file system (its canmount 
property is set to "noauto", which means it isn't mounted automatically 
when the pool is imported).


do:

# zfs mount rootpool/ROOT/<BE-name>

for the appropriate value of <BE-name>, the boot environment name.
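
Putting the whole sequence together (a rough sketch only; "s10be" is just a
placeholder boot environment name and /a an arbitrary alternate root):

# zpool import -R /a rootpool
# zfs mount rootpool/ROOT/s10be
# vi /a/etc/pam.conf
# cd /
# zpool export rootpool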




Re: [zfs-discuss] Unable to import pool: invalid vdev configuration

2009-10-01 Thread Victor Latushkin

On 01.10.09 17:54, Osvald Ivarsson wrote:

I'm running OpenSolaris build snv_101b. I have 3 SATA disks connected to my motherboard. 
The raid, a raidz called "rescamp", worked well until a power failure yesterday. 
I'm now unable to import the pool. I can't export the raid, since it isn't imported.

# zpool import rescamp
cannot import 'rescamp': invalid vdev configuration

# zpool import
  pool: rescamp
id: 12297694211509104163
 state: UNAVAIL
action: The pool cannot be imported due to damaged devices or data.
config:

rescamp UNAVAIL  insufficient replicas
  raidz1UNAVAIL  corrupted data
c15d0   ONLINE
c14d0   ONLINE
c14d1   ONLINE

I've tried using zdb -l on all three disks, but in all cases it fails to 
unpack the labels.

# zdb -l /dev/dsk/c14d0

LABEL 0

failed to unpack label 0

LABEL 1

failed to unpack label 1

LABEL 2

failed to unpack label 2

LABEL 3

failed to unpack label 3

If I run # zdb -l /dev/dsk/c14d0s0 I do find 4 labels, but c14d0, c14d1 and 
c15d0 are what I created the raid with. I do find labels this way for all three 
disks. Is this of any help?

# zdb -l /dev/dsk/c14d1s0

LABEL 0

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=9479723326726871122
nparity=1
metaslab_array=23
metaslab_shift=34
ashift=9
asize=3000574672896
is_log=0
children[0]
type='disk'
id=0
guid=9020535344824299914
path='/dev/dsk/c15d0s0'
devid='id1,c...@ast31000333as=9te0dglf/a'
phys_path='/p...@0,0/pci-...@11/i...@1/c...@0,0:a'
whole_disk=1
DTL=102
children[1]
type='disk'
id=1
guid=14384361563876398475
path='/dev/dsk/c14d0s0'
devid='id1,c...@asamsung_hd103uj=s13pjdws690618/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@0,0:a'
whole_disk=1
DTL=216
children[2]
type='disk'
id=2
guid=17774184411399278071
path='/dev/dsk/c14d1s0'
devid='id1,c...@ast31000333as=9te0de8w/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@1,0:a'
whole_disk=1
DTL=100

LABEL 1

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=9479723326726871122
nparity=1
metaslab_array=23
metaslab_shift=34
ashift=9
asize=3000574672896
is_log=0
children[0]
type='disk'
id=0
guid=9020535344824299914
path='/dev/dsk/c15d0s0'
devid='id1,c...@ast31000333as=9te0dglf/a'
phys_path='/p...@0,0/pci-...@11/i...@1/c...@0,0:a'
whole_disk=1
DTL=102
children[1]
type='disk'
id=1
guid=14384361563876398475
path='/dev/dsk/c14d0s0'
devid='id1,c...@asamsung_hd103uj=s13pjdws690618/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@0,0:a'
whole_disk=1
DTL=216
children[2]
type='disk'
id=2
guid=17774184411399278071
path='/dev/dsk/c14d1s0'
devid='id1,c...@ast31000333as=9te0de8w/a'
phys_path='/p...@0,0/pci-...@11/i...@0/c...@1,0:a'
whole_disk=1
DTL=100

LABEL 2

version=13
name='rescamp'
state=0
txg=218097573
pool_guid=12297694211509104163
hostid=4925114
hostname='slaskvald'
top_guid=9479723326726871122
guid=17774184411399278071
vdev_tree
type='raidz'
id=0
guid=94797233

Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Ross
Ray, if you don't mind me asking, what was the original problem you had on your 
system that makes you think the checksum type is the problem?


[zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools.  The 
sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively.  The man page 
for RAIDZ states that "The raidz vdev type is an alias for raidz1."  So why was 
there a difference between the sizes for RAIDZ and RAIDZ1?  Shouldn't the size 
be the same for "zpool create raidz ..." and "zpool create raidz1 ..." if I am 
using the exact same drives?

David


Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen

Hi David,

Which Solaris release is this?

Are you sure you are using the same ZFS command to review the sizes
of the raidz1 and raidz pools? The zpool list and zfs list commands
will display different values.

See the output below of my tank pool created with raidz or raidz1
redundancy. The pool sizes created are identical on Nevada build 124.

Cindy

# zpool create tank raidz c0t5d0 c0t6d0 c0t7d0
# zpool list tank
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
tank   408G   144K   408G 0%  ONLINE  -
# zpool destroy tank
# zpool create tank raidz1 c0t5d0 c0t6d0 c0t7d0
# zpool list tank
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
tank   408G   144K   408G 0%  ONLINE  -
# cat /etc/release
 Solaris Express Community Edition snv_124 SPARC
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 21 September 2009


On 10/01/09 11:54, David Stewart wrote:

So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools.  The sizes for the pools were 5.3TB, 
4.0TB, and 2.67TB respectively.  The man page for RAIDZ states that "The raidz vdev type is an alias for 
raidz1."  So why was there a difference between the sizes for RAIDZ and RAIDZ1?  Shouldn't the size be 
the same for "zpool create raidz ..." and "zpool create raidz1 ..." if I am using the 
exact same drives?

David



Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-01 Thread Rudolf Potucek
Hmm ... I understand this is a bug, but only in the sense that the message is 
not sufficiently descriptive. Removing the file from the source filesystem will 
not necessarily free any space because the blocks have to be retained in the 
snapshots. The same problem exists for zeroing the file with >file as suggested 
earlier.

It seems like the appropriate solution would be to have a tool that allows 
removing a file from one or more snapshots at the same time as removing the 
source ... 

  Rudolf


Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Ray Clark
U4 zpool does not appear to support the -o option...  A current zpool man page 
online lists the valid properties for zpool -o, and checksum is not one of them.
Are you mistaken, or am I missing something?

Another thought is that *perhaps* all of the blocks that comprise an empty 
zpool are re-written sooner or later, and once the checksum is changed with 
"zfs set checksum=sha256 zfs01" (The pool name) they will be re-written with 
the new checksum very soon anyway.  Is this true?  This would require an 
understanding of the on-disk structure and when what is rewritten.

--Ray


Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-01 Thread Nicolas Williams
On Thu, Oct 01, 2009 at 11:03:06AM -0700, Rudolf Potucek wrote:
> Hmm ... I understand this is a bug, but only in the sense that the
> message is not sufficiently descriptive. Removing the file from the
> source filesystem will not necessarily free any space because the
> blocks have to be retained in the snapshots. The same problem exists
> for zeroing the file with >file as suggested earlier.
> 
> It seems like the appropriate solution would be to have a tool that
> allows removing a file from one or more snapshots at the same time as
> removing the source ... 

That would make them not really snapshots.  And such a tool would have
to "fix" clones too.

Snapshot and clones are great.  They are also great ways to consume too
much space.  One must do some spring cleaning once in a while.

Nico


Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-01 Thread Andrew Gabriel

Rudolf Potucek wrote:

Hmm ... I understand this is a bug, but only in the sense that the message is 
not sufficiently descriptive. Removing the file from the source filesystem will 
not necessarily free any space because the blocks have to be retained in the 
snapshots.


and if it's in a snapshot, it might need more blocks, because you now 
need a copy of the parent directory with that file removed, while the 
snapshot's version of the parent directory still contains it.



 The same problem exists for zeroing the file with >file as suggested earlier.
  


Pick a file which isn't in a snapshot (either because it's been created 
since the most recent snapshot, or because it's been rewritten since the 
most recent snapshot so it's no longer sharing blocks with the snapshot 
version).
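
A crude way to hunt for candidates (sketch only; the dataset path /tank/fs and
the snapshot name "latest" are made up) is to look for files modified after the
newest snapshot, since anything created or rewritten since then no longer
shares its data blocks with it:

# ls /tank/fs/.zfs/snapshot/
# find /tank/fs -type f -newer /tank/fs/.zfs/snapshot/latest -print

(This is only an approximation: -newer compares against the snapshot
directory's own timestamp, so double-check any candidate before removing or
truncating it.)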


It seems like the appropriate solution would be to have a tool that allows removing a file from one or more snapshots at the same time as removing the source ... 


  Rudolf
  


--
Andrew


Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Ross
Ray, if you use -o it sets properties for the pool.  If you use -O (capital), 
it sets the filesystem properties for the default filesystem created with the 
pool.

zpool create -O can take any valid ZFS file system property.

But I agree, it's not very clearly documented.
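
A minimal illustration of the split, on a release that has the create-time -O
option (the device name is made up):

# zpool create -o failmode=continue -O checksum=sha256 tank c1t1d0

Here -o sets a property of the pool itself, while -O sets a property of the
pool's top-level file system.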


Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Cindy Swearingen
You are correct. The zpool create -O option isn't available in a Solaris 
10 release but will be soon. This will allow you to set the file system 
checksum property when the pool is created:

# zpool create -O checksum=sha256 pool c1t1d0
# zfs get checksum pool
NAME  PROPERTY  VALUE  SOURCE
pool  checksum  sha256 local

Otherwise, you would have to set it like this:

# zpool create pool c1t1d0
# zfs set checksum=sha256 pool
# zfs get checksum pool
NAME  PROPERTY  VALUE  SOURCE
pool  checksum  sha256 local

I'm not sure I understand the second part of your comments but will add:

If *you* rewrite your data then the new data will contain the new
checksum. I believe an upcoming project will provide the ability to
revise file system properties on the fly.


On 10/01/09 12:21, Ray Clark wrote:

U4 zpool does not appear to support the -o option...   Reading a current zpool 
manpage online lists the valid properties for the current zpool -o, and 
checksum is not one of them.  Are you mistaken or am I missing something?

Another thought is that *perhaps* all of the blocks that comprise an empty zpool are 
re-written sooner or later, and once the checksum is changed with "zfs set 
checksum=sha256 zfs01" (The pool name) they will be re-written with the new checksum 
very soon anyway.  Is this true?  This would require an understanding of the on-disk 
structure and when what is rewritten.

--Ray



Re: [zfs-discuss] Can't rm file when "No space left on device"...

2009-10-01 Thread Chris Ridd


On 1 Oct 2009, at 19:34, Andrew Gabriel wrote:

Pick a file which isn't in a snapshot (either because it's been  
created since the most recent snapshot, or because it's been  
rewritten since the most recent snapshot so it's no longer sharing  
blocks with the snapshot version).


Out of curiosity, is there an easy way to find such a file?

Cheers,

Chris


Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread David Stewart
Cindy:

I am not at the machine right now, but I installed from the OpenSolaris 2009.06 
LiveCD and have all of the updates installed.  I have solely been using "zfs 
list" to look at the size of the pools.

from a saved file on my laptop:

me...@opensolarisnas:~$ zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mediapool3.58T   432G  29.9K  /mediapool

I destroyed the zpool and created another one, this time using "raidz" instead 
of "raidz1" in the zpool create command, and showed 0 used and 5.3T available.

I am happy to have the extra TB of space, but just wanted to make sure that I 
had performed the create correctly each time.  When I created a RAIDZ pool in 
VMWare Fusion and typed "raidz" instead of "raidz1" I came up with equal sized 
pools, but that was a virtual machine and only 2GB disks were used.

David


Re: [zfs-discuss] Best way to convert checksums

2009-10-01 Thread Richard Elling
Also, when a pool is created, there is only metadata, which uses fletcher4[*].
So it is not a crime if you set the checksum after the pool is created and
before data is written :-)

* note: the uberblock uses SHA-256
 -- richard


On Oct 1, 2009, at 12:34 PM, Cindy Swearingen wrote:

You are correct. The zpool create -O option isn't available in a
Solaris 10 release but will be soon. This will allow you to set the
file system checksum property when the pool is created:

# zpool create -O checksum=sha256 pool c1t1d0
# zfs get checksum pool
NAME  PROPERTY  VALUE  SOURCE
pool  checksum  sha256 local

Otherwise, you would have to set it like this:

# zpool create pool c1t1d0
# zfs set checksum=sha256 pool
# zfs get checksum pool
NAME  PROPERTY  VALUE  SOURCE
pool  checksum  sha256 local

I'm not sure I understand the second part of your comments but will  
add:


If *you* rewrite your data then the new data will contain the new
checksum. I believe an upcoming project will provide the ability to
revise file system properties on the fly.


On 10/01/09 12:21, Ray Clark wrote:
U4 zpool does not appear to support the -o option...   Reading a  
current zpool manpage online lists the valid properties for the  
current zpool -o, and checksum is not one of them.  Are you  
mistaken or am I missing something?
Another thought is that *perhaps* all of the blocks that comprise  
an empty zpool are re-written sooner or later, and once the  
checksum is changed with "zfs set checksum=sha256 zfs01" (The pool  
name) they will be re-written with the new checksum very soon  
anyway.  Is this true?  This would require an understanding of the  
on-disk structure and when what is rewritten.

--Ray



Re: [zfs-discuss] RAIDZ v. RAIDZ1

2009-10-01 Thread Cindy Swearingen

David,

When you get back to the original system, it would be helpful if
you could provide a side-by-side comparison of the zpool create
syntax and the zfs list output of both pools.

Thanks,

Cindy

On 10/01/09 13:48, David Stewart wrote:

Cindy:

I am not at the machine right now, but I installed from the OpenSolaris 2009.06 LiveCD 
and have all of the updates installed.  I have solely been using "zfs list" to 
look at the size of the pools.

from a saved file on my laptop:

me...@opensolarisnas:~$ zfs list
NAME  USED  AVAIL  REFER  MOUNTPOINT
mediapool3.58T   432G  29.9K  /mediapool

I destroyed the zpool and created another one, this time using "raidz" instead of 
"raidz1" in the zpool create command, and showed 0 used and 5.3T available.

I am happy to have the extra TB of space, but just wanted to make sure that I had performed the 
create correctly each time.  When I created a RAIDZ pool in VMWare Fusion and typed 
"raidz" instead of "raidz1" I came up with equal sized pools, but that was a 
virtual machine and only 2GB disks were used.

David



Re: [zfs-discuss] Help importing pool with "offline" disk

2009-10-01 Thread Carson Gaspar

Carson Gaspar wrote:

I'm booted back into snv118 (booting with the damaged pool disks 
disconnected so the host would come up without throwing up). After hot 
plugging the disks, I get:


bash-3.2# /usr/sbin/zdb -eud media
zdb: can't open media: File exists


OK, things are now different (possibly better?):

bash-3.2# /usr/sbin/zpool status media
  pool: media
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
media   FAULTED  0 0 1  corrupted data
  raidz1DEGRADED 0 0 6
c7t5d0  UNAVAIL  0 0 0  cannot open
c7t2d0  ONLINE   0 0 0
c7t4d0  ONLINE   0 0 0
c7t3d0  ONLINE   0 0 0
c7t0d0  ONLINE   0 0 0
c7t7d0  ONLINE   0 0 0
c7t1d0  ONLINE   0 0 0
c7t6d0  ONLINE   0 0 0

I suspect that an uberblock rollback might help me - googling all the 
references now, but if someone has any advice, I'd be grateful.


And I'm afraid I just did something foolish. zdb wasn't working, so I tried 
exporting the pool. Now I'm back to:


bash-3.2# /usr/sbin/zpool import
  pool: media
id: 4928877878517118807
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

media   UNAVAIL  insufficient replicas
  raidz1UNAVAIL  insufficient replicas
c7t5d0  UNAVAIL  cannot open
c7t2d0  ONLINE
c7t4d0  ONLINE
c7t3d0  ONLINE
c7t0d0  OFFLINE
c7t7d0  ONLINE
c7t1d0  ONLINE
c7t6d0  ONLINE

Can anyone help me get c7t0d0 "ONLINE" and roll back the uberblocks so I can 
import the pool and save my data?


--
Carson




Re: [zfs-discuss] Help importing pool with "offline" disk

2009-10-01 Thread Carson Gaspar
Also can someone tell me if I'm too late for an uberblock rollback to help me? 
Diffing "zdb -l" output between c7t0 and c7t1 I see:


-txg=12968048
+txg=12968082

Is that too large a txg gap to roll back, or is it still possible?

Carson Gaspar wrote:

Carson Gaspar wrote:

I'm booted back into snv118 (booting with the damaged pool disks 
disconnected so the host would come up without throwing up). After 
hot plugging the disks, I get:


bash-3.2# /usr/sbin/zdb -eud media
zdb: can't open media: File exists


OK, things are now different (possibly better?):

bash-3.2# /usr/sbin/zpool status media
  pool: media
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
media   FAULTED  0 0 1  corrupted data
  raidz1DEGRADED 0 0 6
c7t5d0  UNAVAIL  0 0 0  cannot open
c7t2d0  ONLINE   0 0 0
c7t4d0  ONLINE   0 0 0
c7t3d0  ONLINE   0 0 0
c7t0d0  ONLINE   0 0 0
c7t7d0  ONLINE   0 0 0
c7t1d0  ONLINE   0 0 0
c7t6d0  ONLINE   0 0 0

I suspect that an uberblock rollback might help me - googling all the 
references now, but if someone has any advice, I'd be grateful.


And I'm afraid I just did something foolish. zdb wasn't working, so I 
tried exporting the pool. Now I'm back to:


bash-3.2# /usr/sbin/zpool import
  pool: media
id: 4928877878517118807
 state: UNAVAIL
status: One or more devices are missing from the system.
action: The pool cannot be imported. Attach the missing
devices and try again.
   see: http://www.sun.com/msg/ZFS-8000-3C
config:

media   UNAVAIL  insufficient replicas
  raidz1UNAVAIL  insufficient replicas
c7t5d0  UNAVAIL  cannot open
c7t2d0  ONLINE
c7t4d0  ONLINE
c7t3d0  ONLINE
c7t0d0  OFFLINE
c7t7d0  ONLINE
c7t1d0  ONLINE
c7t6d0  ONLINE

Can anyone help me get c7t0d0 "ONLINE" and roll back the uberblocks so I 
can import the pool and save my data?






Re: [zfs-discuss] mounting rootpool

2009-10-01 Thread Michael Schuster

On 01.10.09 08:25, camps support wrote:

I did zpool import -R /tmp/z rootpool

It only mounted /export and /rootpool only had /boot and /platform.

I need to be able to get /etc and /var?


zfs set mountpoint ...
zfs mount

--
Michael Schuster        http://blogs.sun.com/recursion
Recursion, n.: see 'Recursion'


[zfs-discuss] cachefile for snail zpool import mystery?

2009-10-01 Thread Max Holm
Hi,

We are seeing more long delays in zpool import, say 4-5 or even
25-30 minutes, especially when backup jobs are running in the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the
same array, some pools take a few seconds to import but others take minutes;
the pattern seems random to me so far. It was first noticed soon after the
upgrade to Solaris 10 U6 (10/08, on SPARC, M4000/Vx90, using some IBM and Sun
arrays). I'd appreciate it if someone could comment on this. Thanks.

We have a few VCS clusters; each has a set of service groups that 
import/export some zpools at the proper events on the proper node 
(with the '-R /' option). To fix the long delays, it seems I can use
'zpool set cachefile=/x/... ...' for each pool, deploy
all the cache files to every node of a cluster in a persistent 
location /y/, and then have the agent online script do 
'zpool import -c /y/...' if /y/... exists. Any better fix?
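
For what it's worth, the sketch I have in mind looks roughly like this (the
/shared/zpool-cache path is made up and would have to exist on every node):

# zpool set cachefile=/shared/zpool-cache/mypool.cache mypool
  (run while the pool is imported on the active node)
# zpool import -c /shared/zpool-cache/mypool.cache -R / mypool
  (run by the agent online script on whichever node takes over)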

1. Why would it ever take so long (20-30 minutes!) to import a pool?
I/O on the FC SAN seemed just fine, and there were no error messages either.
Is it a problem in other layers of the stack, or because I deleted some LUNs
on the array without taking them out of the device trees?

2. We now have the burden of maintaining these cache files whenever
we change a zpool, say adding or dropping a LUN. Any advice?
It'd be nice if ZFS kept a cache file (other than /etc/zfs/zpool.cache)
for pools imported under an altroot, made it persistent, and
verified/updated its entries at the proper events. At the least, I wish ZFS
allowed us to create the cache files while the pools are not currently
imported, so that I could just have a simple daily job maintain the cache
files on every node of a cluster automatically.
 
Thanks.
Max


[zfs-discuss] ZFS caching of compressed data

2009-10-01 Thread Stuart Anderson
I am wondering if the following idea makes any sense as a way to get  
ZFS to cache compressed data in DRAM?


In particular, given a 2-way zvol mirror of highly compressible data  
on persistent storage devices, what would go wrong if I dynamically  
added a ramdisk as a 3rd mirror device at boot time?
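
For concreteness, the kind of thing I mean (sketch only; the pool name, the
existing device, and the 4g size are all made up):

# ramdiskadm -a zmeta 4g                     (creates /dev/ramdisk/zmeta)
# zpool attach metapool c0t1d0s0 /dev/ramdisk/zmeta
# zpool status metapool                      (wait for the resilver to complete)

with a matching zpool detach metapool /dev/ramdisk/zmeta before any planned
shutdown.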


Would ZFS route most (or all) of the reads to the lower latency DRAM  
device?


In the case of an un-clean shutdown where there was no opportunity to  
actively remove the ramdisk from the pool before shutdown, would there  
be any problem at boot time when the ramdisk is still registered but  
unavailable?


Note, this Gedanken experiment is for highly compressible (~9x)  
metadata for a non-ZFS filesystem.


Thanks.


--
Stuart Anderson  ander...@ligo.caltech.edu
http://www.ligo.caltech.edu/~anderson


