Re: [zfs-discuss] 6410 expansion shelf

2007-04-03 Thread Wee Yeh Tan
On 4/3/07, Frank Cusack <[EMAIL PROTECTED]> wrote: > As promised. I got my 6140 SATA delivered yesterday and I hooked it > up to a T2000 on S10u3. The T2000 saw the disks straight away and has > been "working" for the last hour. I'll be running some benchmarks on it. > I'll probably have a week w

Re: [zfs-discuss] ZFS panics with dmu_buf_hold_array

2007-04-03 Thread Matthew Ahrens
Bertrand Sirodot wrote: I am trying to back up the pool, but when I tar some of the filesystems, the kernel panics with the following message: This error is occurring because a critical piece of metadata can't be read while we are trying to write out changes. Try ensuring that you aren't mak

Re: [zfs-discuss] Re: today panic ...

2007-04-03 Thread Ernie Dipko
Gino, I just had a similar experience and was able to import the pool when I added the readonly option (zpool import -f -o ro ) Ernie Gino Ruopolo wrote: Hi Matt, trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in /etc/system give us: Apr 3 20:35:56 SER
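The read-only import workaround Ernie describes can be sketched as a short transcript (the pool name `data` is a placeholder, not taken from the thread; `-o ro` asks ZFS not to write anything, which can sidestep panics triggered by writes to damaged metadata):

```
# Force-import a damaged pool read-only so nothing is written to it.
# "data" is a hypothetical pool name for illustration.
zpool import -f -o ro data

# Salvage what you can while the pool is read-only, then export it.
tar cf /backup/data.tar /data
zpool export data
```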

[zfs-discuss] ZFS panics with dmu_buf_hold_array

2007-04-03 Thread Bertrand Sirodot
Hi, I have been wrestling with ZFS issues since yesterday when one of my disks sort of died. After much wrestling with "zpool replace" I managed to get the new disk in and got the pool to resilver, but since then I have one error left that I can't clear: pool: data state: ONLINE status: One

Re[2]: [zfs-discuss] Size taken by a zfs symlink

2007-04-03 Thread Robert Milkowski
Hello Neil, Tuesday, April 3, 2007, 2:43:55 PM, you wrote: NP> Hi Robert, NP> Robert Milkowski wrote On 04/02/07 17:48,: >> Right now a symlink should consume one dnode (320 bytes) NP> dnode_phys_t are actually 512 bytes: Yep, right - I mistook it for the bonus buffer size, which is 320B. >> ::

[zfs-discuss] Re: today panic ...

2007-04-03 Thread Gino Ruopolo
Hi Matt, trying to import our corrupted zpool with snv_60 and 'set zfs:zfs_recover=1' in /etc/system give us: Apr 3 20:35:56 SERVER141 ^Mpanic[cpu3]/thread=fffec3860f20: Apr 3 20:35:56 SERVER141 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x67b800 <= 0x67

Re: [zfs-discuss] Zones on large ZFS filesystems

2007-04-03 Thread Niclas Sodergard
On 4/3/07, Matthew Ahrens <[EMAIL PROTECTED]> wrote: You can work around this by setting the quota on an ancestor of the to-be-created clone. Also, implementing RFE 6364688 "method to preserve properties when making a clone" would make workaround #1 (set a quota on the first fs) work for the cl

Re: [zfs-discuss] zfs boot/root bits in bfu?

2007-04-03 Thread Lori Alt
I assume this is the case. These changes will get rolled up with all the others. Lori oliver soell wrote: So I can expect that the zfs root bits will be in the weekly ON Consolidation bfu archives (http://dlc.sun.com/osol/on/downloads/current/) tomorrow or so? I've been a solaris admin for

Re: [zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Mark Shellenbaum
Robert Thurlow wrote: Richard Elling wrote: Peter Eriksson wrote: ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same with rsync. ufsrestore obviously won't work on ZFS. ufsrestore works fine; it only reads from a 'ufsdump' format medium and writes through generic files

Re: [zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Robert Thurlow
Richard Elling wrote: Peter Eriksson wrote: ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, same with rsync. ufsrestore obviously won't work on ZFS. ufsrestore works fine; it only reads from a 'ufsdump' format medium and writes through generic filesystem APIs. I did some

Re: [zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Darren Dunham
> > I currently use Solaris tar like this: > >cd $DIR && tar [EMAIL PROTECTED] - . | rsh $HOST "cd $NEWDIR && tar > > [EMAIL PROTECTED] -" > > seems simple enough :-) > > > ufsdump/ufsrestore doesn't restore the ACLs so that doesn't work, > > same with rsync. > > ufsrestore obviously won't w

Re: [zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Richard Elling
Peter Eriksson wrote: I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9 server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now... What's the "best" way to move all these files? Should one use Solaris tar, Solaris cpio, ufsdump/ufsrestore, rsync

[zfs-discuss] Best way to migrate filesystems to ZFS?

2007-04-03 Thread Peter Eriksson
I'm about to start migrating a lot of files on UFS filesystems from a Solaris 9 server to a new server running Solaris 10 (u3) with ZFS (a Thumper). Now... What's the "best" way to move all these files? Should one use Solaris tar, Solaris cpio, ufsdump/ufsrestore, rsync or what? I currently us
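A minimal sketch of the tar-pipe copy being discussed in this thread. The exact Solaris tar flags are scrubbed to [EMAIL PROTECTED] in the archive above, so the Solaris-specific flags mentioned in the comments are hypothetical; the portable core below just streams an archive from one directory into another, with `-p` preserving permissions on extract:

```shell
#!/bin/sh
# Stand-in source and destination directories for the sketch.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo hello > "$SRC/file.txt"
mkdir "$SRC/sub" && echo world > "$SRC/sub/nested.txt"

# Portable core of the pipeline: tar SRC to stdout, untar into DST.
# On Solaris you would add the extended-attribute/ACL flags (hypothetical,
# since the archive scrubbed them) and pipe through rsh/ssh to the new host:
#   cd $DIR && tar c<flags>f - . | rsh $HOST "cd $NEWDIR && tar x<flags>f -"
(cd "$SRC" && tar cf - .) | (cd "$DST" && tar xpf -)

ls -R "$DST"
```

Note that stock rsync of that era did not carry NFSv4 ACLs either, which is why the thread keeps coming back to Solaris tar and cpio.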

Re: [zfs-discuss] zfs snapshot issues.

2007-04-03 Thread Matthew Ahrens
Joseph Barbey wrote: Also, all 3 pools are still 'formatted' as v2. I'll try upgrading all 3 before Sunday, and see if that helps as well. That won't change any performance; upgrading to v3 just enables new features (hot spares and double parity raidz). --matt __

Re: [zfs-discuss] zfs snapshot issues.

2007-04-03 Thread Joseph Barbey
Matthew Ahrens wrote: Joseph Barbey wrote: Robert Milkowski wrote: JB> So, normally, when the script runs, all snapshots finish in maybe a minute JB> total. However, on Sundays, it continues to take longer and longer. On JB> 2/25 it took 30 minutes, and this last Sunday, it took 2:11. The

Re: [zfs-discuss] Convert raidz

2007-04-03 Thread Tim Foster
On Tue, 2007-04-03 at 10:54 -0400, Luke Scharf wrote: > Tim Foster wrote: > > You can add a disk to a raidz configuration, but then that makes a pool > > containing 1 raidz + 1 additional disk in a dynamic stripe configuration > > (which ZFS will warn you about, since you have different fault tole

Re: [zfs-discuss] Convert raidz

2007-04-03 Thread Luke Scharf
Tim Foster wrote: And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch? You can add a disk to a raidz configuration, but then that makes a pool containing 1 raidz + 1 additional disk in a dynamic stripe configuration (which ZFS will w
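What that warning looks like in practice can be sketched as a transcript (device and pool names are placeholders):

```
# Hypothetical names: a 3-disk raidz pool, then one extra bare disk.
zpool create tank raidz c1t1d0 c1t2d0 c1t3d0

# Adding the bare disk would create a dynamic stripe of raidz + single
# disk; zpool refuses the mismatched replication level unless forced.
zpool add tank c1t4d0        # refused: mismatched replication level
zpool add -f tank c1t4d0     # forced; data striped here has no redundancy
```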

Re: [zfs-discuss] Re: delete acl not working on zfs.v3?

2007-04-03 Thread Mark Shellenbaum
Carson Gaspar wrote: Mark Shellenbaum wrote: Can you post the full ACL on the directory and on the file you are being allowed to delete. Simple test: carson:gandalf 2 $ uname -a SunOS gandalf.taltos.org 5.10 Generic_125101-02 i86pc i386 i86pc carson:gandalf 0 $ mkdir foo carson:gandalf 0 $

Re: [zfs-discuss] Re: delete acl not working on zfs.v3?

2007-04-03 Thread Carson Gaspar
Mark Shellenbaum wrote: Can you post the full ACL on the directory and on the file you are being allowed to delete. Simple test: carson:gandalf 2 $ uname -a SunOS gandalf.taltos.org 5.10 Generic_125101-02 i86pc i386 i86pc carson:gandalf 0 $ mkdir foo carson:gandalf 0 $ ls -dv foo drwxr-xr-x

Re: [zfs-discuss] Re: delete acl not working on zfs.v3?

2007-04-03 Thread Mark Shellenbaum
Carson Gaspar wrote: we give user foo the right to add folders (this user cannot delete anything by default). After that we give the right to create files, and then user foo gains the ability to delete everything. How is that possible? Even though we add another rule like "0:user:foo:delete_child/delete:deny".

Re: [zfs-discuss] ZFS overhead killed my ZVOL

2007-04-03 Thread Brian H. Nelson
Can anyone comment? -Brian Brian H. Nelson wrote: Adam Leventhal wrote: On Tue, Mar 20, 2007 at 06:01:28PM -0400, Brian H. Nelson wrote: Why does this happen? Is it a bug? I know there is a recommendation of 20% free space for good performance, but that thought never occurred to me when

[zfs-discuss] Re: delete acl not working on zfs.v3?

2007-04-03 Thread Carson Gaspar
> we give user foo the right to add folders (this > user cannot delete anything by default). After that > we give the right to create files, and then user foo > gains the ability to delete everything. How is that possible? > Even though we add another rule like > "0:user:foo:delete_child/delete:deny". Again it d
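The surprise being described (granting create rights appearing to grant delete) comes down to how NFSv4-style ACLs are evaluated: ACEs are scanned in order and the first entry that speaks to the requested permission wins, and deleting a file can be satisfied by `delete_child` on the directory as well as `delete` on the file. A toy model of that first-match evaluation, not the actual ZFS code:

```python
# Toy model of NFSv4-style ACE evaluation: walk the ACL in order and let
# the first entry mentioning the requested permission decide. This is an
# illustrative sketch, not the actual ZFS implementation.

def check(acl, who, perm):
    """acl: list of (ace_type, principal, permissions) in on-disk order."""
    for ace_type, principal, perms in acl:
        if principal == who and perm in perms:
            return ace_type == "allow"
    return False  # no matching ACE: deny by default in this toy model

# An explicit deny placed *first* does block the permission...
acl1 = [("deny", "foo", {"delete_child", "delete"}),
        ("allow", "foo", {"add_file", "delete_child"})]

# ...but the same deny placed *after* a broad allow is never consulted.
acl2 = [("allow", "foo", {"add_file", "delete_child"}),
        ("deny", "foo", {"delete_child", "delete"})]
```

In `acl2` the allow entry matches `delete_child` first, so the later deny is never reached: ordering, not the presence of the deny, decides the outcome.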

Re: [zfs-discuss] Size taken by a zfs symlink

2007-04-03 Thread Neil Perrin
Hi Robert, Robert Milkowski wrote On 04/02/07 17:48,: Right now a symlink should consume one dnode (320 bytes) dnode_phys_t are actually 512 bytes: > ::sizeof dnode_phys_t sizeof (dnode_phys_t) = 0x200 > if the name it point to is less than 67 bytes, otherwise a data block is allocated add
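Putting the corrected numbers from this exchange together: a dnode_phys_t is 512 bytes (320 bytes is the bonus buffer size Robert had in mind), and a symlink whose target is shorter than 67 bytes is stored inline, while a longer target costs an extra data block. A back-of-the-envelope sketch; the 512-byte block charged for a spilled target is an assumption for illustration, since the real allocation depends on the dataset's block size and compression:

```python
# Rough on-disk cost of a ZFS symlink, using the figures from the thread:
# 512-byte dnode_phys_t, targets under 67 bytes stored inline.
# The block size charged for a spilled target is an assumed placeholder.

DNODE_SIZE = 512        # bytes, sizeof(dnode_phys_t) per the mdb output
INLINE_LIMIT = 67       # targets shorter than this fit in the dnode
ASSUMED_BLOCK = 512     # hypothetical data block size for a spilled target

def symlink_cost(target: str) -> int:
    """Approximate on-disk bytes for a symlink with the given target path."""
    if len(target) < INLINE_LIMIT:
        return DNODE_SIZE                  # target stored inline in the dnode
    return DNODE_SIZE + ASSUMED_BLOCK      # dnode plus a separate data block

short = symlink_cost("/var/tmp/short-target")
long = symlink_cost("/very/long/" + "x" * 100)
```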