Re[2]: [zfs-discuss] zpool status and CKSUM errors

2006-06-12 Thread Robert Milkowski
Hello Eric, Friday, June 9, 2006, 5:16:29 PM, you wrote: ES On Fri, Jun 09, 2006 at 06:16:53AM -0700, Robert Milkowski wrote: bash-3.00# zpool status -v nfs-s5-p1 pool: nfs-s5-p1 state: ONLINE status: One or more devices has experienced an unrecoverable error. An attempt was
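For reference, a minimal sketch of the commands under discussion, using the pool name from the thread (exact output varies by build):

  zpool status -v nfs-s5-p1   # per-vdev READ/WRITE/CKSUM counters plus the status/action hint
  zpool status -x             # short summary listing only the pools with problems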

Re[2]: [zfs-discuss] zpool status and CKSUM errors

2006-06-12 Thread Robert Milkowski
Hello Jeff, Saturday, June 10, 2006, 2:32:49 AM, you wrote: btw: I'm really surprised at how unreliable SATA disks are. I put a dozen TBs of data on ZFS recently, and after just a few days I got a few hundred checksum errors (raid-z was used there). And these disks are 500GB in a 3511 array. Well that

[zfs-discuss] zfs destroy - destroying a snapshot

2006-06-12 Thread Robert Milkowski
Hello zfs-discuss, I'm writing a script to automatically take snapshots and destroy old ones. I think it would be great to add another option to zfs destroy so that only snapshots can be destroyed. Something like: zfs destroy -s SNAPSHOT, so if something other than a snapshot is provided
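No such -s option exists yet (it is being proposed above), so as a stopgap a wrapper can refuse anything that is not a snapshot. A hedged sketch; the script name is hypothetical, and it relies only on the fact that snapshot names always contain an '@':

  #!/bin/sh
  # destroy-snap.sh -- destroy the argument only if it names a snapshot
  name="$1"
  case "$name" in
      *@*) zfs destroy "$name" ;;
      *)   echo "refusing to destroy non-snapshot: $name" >&2; exit 1 ;;
  esac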

[zfs-discuss] ZFS hang

2006-06-12 Thread Robert Milkowski
Hi. snv_39, SPARC - nfs server with local ZFS filesystems. Under heavy load, traffic to all filesystems in one pool ceased - it was ok for the other pools. By ceased I mean that 'zpool iostat 1' showed no traffic to that pool (nfs-s5-p0). Commands like 'df' or 'zfs list' hang. I issued 'reboot
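A small sketch of the observation described, with the pool name from the thread:

  zpool iostat nfs-s5-p0 1      # per-second I/O statistics for just this pool
  zpool iostat -v nfs-s5-p0 1   # the same, broken down per vdev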

[zfs-discuss] ?: zfs mv within pool seems slow

2006-06-12 Thread Steffen Weiberle
I have just upgraded my jumpstart server to S10 u2 b9a. It is an Ultra 10 with two 120GB EIDE drives. The second drive (disk1) is new, and has u2b9a installed on a slice, with most of the space in slice 7 for the ZFS pool. I created pool1 on disk1, and created the filesystem pool1/ro (for
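Roughly the layout described, as a hedged sketch; the device name for slice 7 of disk1 is hypothetical:

  zpool create pool1 c0t1d0s7   # pool on the large slice of the second drive
  zfs create pool1/ro           # the filesystem referred to in the message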

Re: [zfs-discuss] ZFS hang

2006-06-12 Thread James C. McPherson
Robert Milkowski wrote: snv_39, SPARC - nfs server with local ZFS filesystems. Under heavy load, traffic to all filesystems in one pool ceased - it was ok for the other pools. By ceased I mean that 'zpool iostat 1' showed no traffic to that pool (nfs-s5-p0). Commands like 'df' or 'zfs list'

Re: [zfs-discuss] ?: zfs mv within pool seems slow

2006-06-12 Thread Darren J Moffat
Steffen Weiberle wrote: I created a second filesystem, pool1/jumpstart, and decided to mv pool1/ro/jumpstart/* to pool1/jumpstart. All the data is staying in the same pool. No data is actually getting changed; it is just being relocated. If this were a UFS filesystem, the move would be done
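Presumably the point being made: pool1/ro and pool1/jumpstart are separate filesystems, so even though they share one pool, mv cannot rename(2) across them (the call fails with EXDEV) and falls back to copying every block and unlinking the source. A sketch using the paths from the thread, assuming default mountpoints:

  mv /pool1/ro/jumpstart/a /pool1/ro/jumpstart/b   # same filesystem: a cheap rename
  mv /pool1/ro/jumpstart/* /pool1/jumpstart/       # crosses a filesystem boundary: data is copied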

Re: [zfs-discuss] ?: zfs mv within pool seems slow

2006-06-12 Thread Steffen Weiberle
Darren J Moffat wrote on 06/12/06 09:09: Steffen Weiberle wrote: I created a second filesystem, pool1/jumpstart, and decided to mv pool1/ro/jumpstart/* to pool1/jumpstart. All the data is staying in the same pool. No data is actually getting changed; it is just being relocated. If this were

Re: [zfs-discuss] zpool status and CKSUM errors

2006-06-12 Thread Eric Schrock
On Mon, Jun 12, 2006 at 10:49:49AM +0200, Robert Milkowski wrote: Well, I just did 'fmdump -eV' and the last entry is from May 31st and is related to pools which have already been destroyed. I can see one more checksum error in that pool (I did zpool clear last time) and it's NOT reported by
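A minimal sketch of the checks discussed, with the pool name from the thread:

  fmdump -e               # one-line summary of logged ereports
  fmdump -eV              # full detail for each ereport, including the affected vdev
  zpool clear nfs-s5-p1   # reset the pool's error counters after investigating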

[zfs-discuss] Re: ZFS + Raid-Z pool size incorrect?

2006-06-12 Thread Nathanael Burton
Thanks for the help! So I BFU'd to the following: bash-3.00# uname -a SunOS mathrock-opensolaris 5.11 opensol-20060605 i86pc i386 i86pc I blew away all my old ZFS pools and created a new raidz pool with my three disks. The file system now reports the correct size, and df/du report
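For reference on the size accounting: zpool list reports raw capacity including parity, while zfs list and df report usable space, which for a three-disk raidz is roughly two thirds of the raw figure. A hedged sketch with hypothetical device names:

  zpool create tank raidz c1d0 c2d0 c3d0
  zpool list tank   # raw size, parity included (about 3x one disk)
  zfs list tank     # usable space (about 2x one disk)
  df -h /tank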