Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Erik Trimble
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2 device is actually one of the more critical missing features of ZFS, IMHO. It is very common for folks to add an additional shelf or shelves to an existing array setup, and if you have created a pool which uses RAIDZ

[zfs-discuss] Oracle 11g Performace

2006-10-24 Thread Mika Borner
Here's an interesting read about forthcoming Oracle 11g file system performance. Sadly, there is no information about how this works. It will be interesting to compare it with ZFS performance, as soon as ZFS is tuned for databases. Speed and performance will be the hallmark of the 11g,

Re: [zfs-discuss] Mirrored Raidz

2006-10-24 Thread Roch
Michel Kintz writes: Matthew Ahrens a écrit : Richard Elling - PAE wrote: Anthony Miller wrote: Hi, I've searched the forums and not found any answer to the following. I have 2 JBOD arrays, each with 4 disks. I want to create a raidz on one array and have

Re: [zfs-discuss] Mirrored Raidz

2006-10-24 Thread Dale Ghent
On Oct 24, 2006, at 4:56 AM, Michel Kintz wrote: It is not always a matter of more redundancy. In my customer's case, they have storage in 2 different rooms of their datacenter and want to mirror from one storage unit in one room to the other. So having in this case a combination of RAID-Z

Re: [zfs-discuss] Mirrored Raidz

2006-10-24 Thread Jonathan Edwards
On Oct 24, 2006, at 04:19, Roch wrote: Michel Kintz writes: Matthew Ahrens a écrit : Richard Elling - PAE wrote: Anthony Miller wrote: Hi, I've searched the forums and not found any answer to the following. I have 2 JBOD arrays, each with 4 disks. I want to create a raidz on one

Re[4]: [zfs-discuss] What is touching my filesystems?

2006-10-24 Thread Robert Milkowski
Hello Noël, Tuesday, October 24, 2006, 1:37:20 AM, you wrote: ND Hey Robert, ND No, all the code fixes and features I mentioned before I developed and ND putback before I left Sun, so no active development is happening or ND anything. I still like to hang out on the zfs alias though just

[zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Anton B. Rang
Our thinking is that if you want more redundancy than RAID-Z, you should use RAID-Z with double parity, which provides more reliability and more usable storage than a mirror of RAID-Zs would. This is only true if the drives have either independent or identical failure modes, I think. Consider

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Frank Cusack
On October 24, 2006 9:19:07 AM -0700 Anton B. Rang [EMAIL PROTECTED] wrote: Our thinking is that if you want more redundancy than RAID-Z, you should use RAID-Z with double parity, which provides more reliability and more usable storage than a mirror of RAID-Zs would. This is only true if the

[zfs-discuss] chmod A=.... on ZFS != chmod A=... on UFS

2006-10-24 Thread Chris Gerhard
I'm trying to create a directory hierarchy where, whenever a file is created, it is created mode 664, with directories 775. Now I can do this with chmod to create the ACL on UFS and it behaves as expected; however, on ZFS it does not. : pearson TS 68 $; mkdir ~/tmp/acl : pearson TS 69 $; df -h
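A minimal sketch of the kind of inheritable ZFS (NFSv4-style) ACL Chris appears to be after. The directory name and the exact permission sets are illustrative assumptions, not his actual commands; the `A+` form appends ACL entries rather than replacing the list as `A=` does.

```shell
# Hypothetical sketch: inheritable ACEs so files created under the
# directory pick up group read/write (the names here are illustrative).
mkdir ~/tmp/acl
# Files created inside inherit group@ read/write:
chmod A+group@:read_data/write_data:file_inherit:allow ~/tmp/acl
# Subdirectories created inside inherit group@ list/create rights:
chmod A+group@:list_directory/add_file/add_subdirectory:dir_inherit:allow ~/tmp/acl
# Inspect the resulting ACL in compact form:
ls -V -d ~/tmp/acl
```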

Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Erik Trimble
Matthew Ahrens wrote: Erik Trimble wrote: The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2 device is actually one of the more critical missing features of ZFS, IMHO. It is very common for folks to add an additional shelf or shelves to an existing array setup, and if

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Dale Ghent
On Oct 24, 2006, at 12:33 PM, Frank Cusack wrote: On October 24, 2006 9:19:07 AM -0700 Anton B. Rang [EMAIL PROTECTED] wrote: Our thinking is that if you want more redundancy than RAID-Z, you should use RAID-Z with double parity, which provides more reliability and more usable storage

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Frank Cusack
On October 24, 2006 2:26:49 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote: Since the person is dealing with JBODS and not hardware RAID arrays, my suggestion is to combine ZFS and SVM. 1) Use ZFS and make a raidz-based ZVOL of disks on each of the two JBODs 2) Use SVM to mirror the two ZVOLs.
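A sketch of the two steps Dale proposes, with hypothetical pool, volume, and device names. Whether SVM will accept zvol device nodes as mirror components is part of what the thread is debating, so treat this strictly as an illustration of the suggestion, not a tested recipe.

```shell
# 1) One raidz-based pool (and a zvol) per JBOD; device names hypothetical.
zpool create jbod1 raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
zpool create jbod2 raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
zfs create -V 100g jbod1/vol
zfs create -V 100g jbod2/vol

# 2) SVM submirrors on each zvol, attached as a two-way mirror d10.
metainit d11 1 1 /dev/zvol/dsk/jbod1/vol
metainit d12 1 1 /dev/zvol/dsk/jbod2/vol
metainit d10 -m d11
metattach d10 d12
```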

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Richard Elling - PAE
Pedantic question, what would this gain us other than better data retention? Space and (especially?) performance would be worse with RAID-Z+1 than 2-way mirrors. -- richard Frank Cusack wrote: On October 24, 2006 9:19:07 AM -0700 Anton B. Rang [EMAIL PROTECTED] wrote: Our thinking is that if

Re: [zfs-discuss] chmod A=.... on ZFS != chmod A=... on UFS

2006-10-24 Thread Mark Shellenbaum
Chris Gerhard wrote: I'm trying to create a directory hierarchy where, whenever a file is created, it is created mode 664, with directories 775. Now I can do this with chmod to create the ACL on UFS and it behaves as expected; however, on ZFS it does not. So what exactly are you trying to

[zfs-discuss] zfs set sharenfs=on

2006-10-24 Thread Dick Davies
I started sharing out zfs filesystems via NFS last week using sharenfs=on. That seems to work fine until I reboot. Turned out the NFS server wasn't enabled - I had to enable nfs/server, nfs/lockmgr and nfs/status manually. This is a stock SXCR b49 (ZFS root) install - don't think I'd changed
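The manual workaround Dick describes, written out as commands. Service names are as he gives them (on some builds the lock manager service is `nfs/nlockmgr` rather than `nfs/lockmgr`); normally `sharenfs=on` should cause NFS to come up on its own, which is the bug under discussion.

```shell
# Enable the NFS services by hand after boot (names per the post;
# adjust nfs/lockmgr to nfs/nlockmgr if your build uses that FMRI).
svcadm enable nfs/server
svcadm enable nfs/lockmgr
svcadm enable nfs/status
# Confirm the server came online:
svcs nfs/server
```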

[zfs-discuss] determining raidz pool configuration

2006-10-24 Thread Matt Ingenthron
Hi all, Sorry for the newbie question, but I've looked at the docs and haven't been able to find an answer for this. I'm working with a system where the pool has already been configured and want to determine what the configuration is. I had thought that'd be with zpool status -v poolname,

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Dale Ghent
On Oct 24, 2006, at 2:46 PM, Richard Elling - PAE wrote: Pedantic question, what would this gain us other than better data retention? Space and (especially?) performance would be worse with RAID-Z+1 than 2-way mirrors. You answered your own question, it would gain the user better data

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Jonathan Edwards
there are 2 approaches: 1) RAID 1+Z, where you mirror the individual drives across trays and then RAID-Z the whole thing; 2) RAID Z+1, where you RAID-Z each tray and then mirror them. I would argue that you can lose the most drives in configuration 1 and stay alive: With a simple mirrored
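Since ZFS itself cannot layer raidz over mirrors, configuration 1 (RAID 1+Z) would have to be approximated by building cross-tray SVM mirrors first and then a raidz over the mirror metadevices. This is an assumed sketch of that idea with hypothetical device and metadevice names, not something the thread confirms works:

```shell
# One cross-tray SVM mirror per "logical drive" (devices hypothetical):
metainit d21 1 1 c1t0d0s0
metainit d22 1 1 c2t0d0s0
metainit d20 -m d21
metattach d20 d22
# ...repeat for d30, d40, d50 with the remaining drive pairs...

# Then a single raidz over the mirror metadevices:
zpool create tank raidz /dev/md/dsk/d20 /dev/md/dsk/d30 /dev/md/dsk/d40 /dev/md/dsk/d50
```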

Re: [zfs-discuss] zfs set sharenfs=on

2006-10-24 Thread Eric Schrock
On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote: I started sharing out zfs filesystems via NFS last week using sharenfs=on. That seems to work fine until I reboot. Turned out the NFS server wasn't enabled - I had to enable nfs/server, nfs/lockmgr and nfs/status manually. This is a

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Frank Cusack
On October 24, 2006 3:15:10 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote: On Oct 24, 2006, at 2:46 PM, Richard Elling - PAE wrote: Pedantic question, what would this gain us other than better data retention? Space and (especially?) performance would be worse with RAID-Z+1 than 2-way mirrors.

Re: [zfs-discuss] determining raidz pool configuration

2006-10-24 Thread Eric Schrock
Matt - The 'zpool status -v' output is guaranteed to be exactly the same as what ZFS sees. The only exception to this is if you run the command as a non-root user, and the device paths have changed, then the path names may be incorrect. Running it once as root will correctly update the paths.
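Eric's point condensed into the command itself; the pool name is a placeholder.

```shell
# Show the pool's vdev tree exactly as ZFS sees it. Run as root so any
# stale device paths are refreshed before being printed.
zpool status -v tank
```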

Re: [zfs-discuss] determining raidz pool configuration

2006-10-24 Thread Roch Bourbonnais
I've discussed this with some guys I know, and we decided that your admin must have given you an incorrect description. BTW, that config falls outside of best practice; The current thinking is to use raid-z group of not much more than 10 disks. You may stripe multiple such groups into a
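The best practice Roch describes, sketched as a pool built from several ~10-disk raidz groups striped together rather than one very wide group; pool and device names are hypothetical.

```shell
# Two 10-disk raidz groups in one pool; ZFS stripes writes across the
# top-level vdevs automatically.
zpool create tank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0
```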

[zfs-discuss] Disregard: determining raidz pool configuration

2006-10-24 Thread Matt Ingenthron
After some quick experimenting, I determined that it is in fact a single raidz pool with all 47 devices. Apparently something was either done wrong or miscommunicated in the process. Sorry for the bandwidth. - Matt Matt Ingenthron wrote: Hi all, Sorry for the newbie question, but I've

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Dale Ghent
On Oct 24, 2006, at 3:23 PM, Frank Cusack wrote: http://blogs.sun.com/roch/entry/when_to_and_not_to says a raid-z vdev has the read throughput of 1 drive for random reads. Compared to #drives for a stripe. That's pretty significant. Okay, then if the person can stand to lose even more

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Frank Cusack
On October 24, 2006 3:31:41 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote: On Oct 24, 2006, at 3:23 PM, Frank Cusack wrote: http://blogs.sun.com/roch/entry/when_to_and_not_to says a raid-z vdev has the read throughput of 1 drive for random reads. Compared to #drives for a stripe. That's pretty

[zfs-discuss] Re: Snapshots impact on performance

2006-10-24 Thread Robert Milkowski
Hi. On nfs clients which are mounting file system f3-1/d611 I can see 3-5s periods of 100% busy (iostat) and almost no IOs issued to nfs server, on nfs server at the same time disk activity is almost 0 (both iostat and zpool iostat). However CPU activity increases in SYS during that

Re[2]: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Robert Milkowski
Hello Matthew, Tuesday, October 24, 2006, 3:36:13 AM, you wrote: MA FYI, we're working on being able to shrink pools with no restrictions. MA Unfortunately I don't have an ETA for you on this, though. That's great! MA And as I'm sure you know, you can always grow pools :-) Well, partially -

Re: [zfs-discuss] Changing number of disks in a RAID-Z?

2006-10-24 Thread Matthew Ahrens
Erik Trimble wrote: Matthew Ahrens wrote: Erik Trimble wrote: The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2 device is actually one of the more critical missing features of ZFS, IMHO. It is very common for folks to add an additional shelf or shelves to an existing

Re: [zfs-discuss] Re: Mirrored Raidz

2006-10-24 Thread Torrey McMahon
Frank Cusack wrote: I don't think we know what the OP wanted. :-) I understand the paranoia around overlapping raid levels - And yes they are out to get you - but in the past some of the requirements were around performance in a failure mode. Do we have any data concerning the

[zfs-discuss] Re: Snapshots impact on performance

2006-10-24 Thread Robert Milkowski
forgot to mention - after the quota was lowered there is still no problem - everything works OK. This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] chmod A=.... on ZFS != chmod A=... on UFS

2006-10-24 Thread Chris Gerhard
Mark Shellenbaum wrote: Chris Gerhard wrote: I'm trying to create a directory hierarchy where, whenever a file is created, it is created mode 664, with directories 775. Now I can do this with chmod to create the ACL on UFS and it behaves as expected; however, on ZFS it does not. So what

Re: [zfs-discuss] chmod A=.... on ZFS != chmod A=... on UFS

2006-10-24 Thread Mark Shellenbaum
Chris Gerhard wrote: Mark Shellenbaum wrote: Chris Gerhard wrote: I'm trying to create a directory hierarchy where, whenever a file is created, it is created mode 664, with directories 775. Now I can do this with chmod to create the ACL on UFS and it behaves as expected; however, on ZFS it

Re: [zfs-discuss] zfs set sharenfs=on

2006-10-24 Thread Dick Davies
On 24/10/06, Eric Schrock [EMAIL PROTECTED] wrote: On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote: Shouldn't a ZFS share permanently enable NFS? # svcprop -p application/auto_enable nfs/server true This property indicates that regardless of the current

[zfs-discuss] zpool snapshot fails on unmounted filesystem

2006-10-24 Thread Thomas Maier-Komor
Is this a known problem/bug? $ zfs snapshot zpool/[EMAIL PROTECTED] internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c this occured on: $ uname -a SunOS azalin 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-2500

[zfs-discuss] Oracle raw volumes

2006-10-24 Thread Sergio Valverde
Hi. If I create an Oracle volume using zfs like this # zpool create -f oracle c0t1d0 # zfs create -V 500mb oracle/system.dbf # cd /dev/zvol/rdsk/oracle # chown oracle:oinstall system.dbf Would it be similar to a vxvm raw volume like /dev/vx/rdsk/oracle/system.dbf
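Sergio's steps, laid out one command per line for readability. The pool and volume names are as given in his post; this is his sketch, not a vetted Oracle deployment.

```shell
# Create a pool on one disk, carve out a 500 MB zvol, and hand the
# raw device node to the oracle user:
zpool create -f oracle c0t1d0
zfs create -V 500mb oracle/system.dbf
chown oracle:oinstall /dev/zvol/rdsk/oracle/system.dbf
# The raw device at /dev/zvol/rdsk/oracle/system.dbf is then analogous
# in form to a VxVM raw volume at /dev/vx/rdsk/oracle/system.dbf.
```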

Re: [zfs-discuss] zpool snapshot fails on unmounted filesystem

2006-10-24 Thread Frank Cusack
On October 24, 2006 2:58:58 PM -0700 Thomas Maier-Komor [EMAIL PROTECTED] wrote: Is this a known problem/bug? $ zfs snapshot zpool/[EMAIL PROTECTED] internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c I had this problem also. I think the answer was to unmount the

[zfs-discuss] Panic while scrubbing

2006-10-24 Thread Siegfried Nikolaivich
Hello, I am not sure if I am posting in the correct forum, but it seems somewhat zfs related, so I thought I'd share it. While the machine was idle, I started a scrub. Around the time the scrubbing was supposed to be finished, the machine panicked. This might be related to the 'metadata

Re: [zfs-discuss] Panic while scrubbing

2006-10-24 Thread James McPherson
On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote: ... While the machine was idle, I started a scrub. Around the time the scrubbing was supposed to be finished, the machine panicked. This might be related to the 'metadata corruption' that happened earlier to me. Here is the log, any

Re: [zfs-discuss] Panic while scrubbing

2006-10-24 Thread Siegfried Nikolaivich
On 24-Oct-06, at 9:11 PM, James McPherson wrote: this error from the marvell88sx driver is of concern, The 10b8b decode and disparity error messages make me think that you have a bad piece of hardware. I hope it's not your controller but I can't tell without more data. You should have a

Re: [zfs-discuss] Re: Snapshots impact on performance

2006-10-24 Thread Matthew Ahrens
Robert Milkowski wrote: Hi. On nfs clients which are mounting file system f3-1/d611 I can see 3-5s periods of 100% busy (iostat) and almost no IOs issued to nfs server, on nfs server at the same time disk activity is almost 0 (both iostat and zpool iostat). However CPU activity increases

Re: [zfs-discuss] Panic while scrubbing

2006-10-24 Thread Siegfried Nikolaivich
On 24-Oct-06, at 9:47 PM, James McPherson wrote: On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote: And this is shown on the rest of the ports: c0t?d0 Soft Errors: 6 Hard Errors: 0 Transport Errors: 0 Vendor: ATA Product: ST3320620AS Revision: C Serial No: