The ability to expand (and, to a lesser extent, shrink) a RAIDZ or RAIDZ2
device is actually one of the more critical missing features from ZFS,
IMHO. It is very common for folks to add an additional shelf or shelves
into an existing array setup, and if you have created a pool which uses
RAIDZ
Here's an interesting read about forthcoming Oracle 11g file system
performance. Sadly, there is no information about how this works.
It will be interesting to compare it with ZFS performance, as soon as ZFS is
tuned for databases.
Speed and performance will be the hallmark of the 11g,
Michel Kintz writes:
Matthew Ahrens wrote:
Richard Elling - PAE wrote:
Anthony Miller wrote:
Hi,
I've searched the forums and not found any answer to the following.
I have 2 JBOD arrays each with 4 disks.
I want to create a raidz on one array and have
On Oct 24, 2006, at 4:56 AM, Michel Kintz wrote:
It is not always a matter of more redundancy.
In my customer's case, they have storage in 2 different rooms of
their datacenter and want to mirror from one storage unit in one
room to the other.
So having in this case a combination of RAID-Z
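(To make the room-failure concern above concrete, with assumed numbers: if each room holds a 4-disk JBOD, a single 8-disk RAID-Z2 group spanning both rooms survives any two disk failures but is lost if one whole room/JBOD goes away, whereas a per-room RAID-Z group mirrored against the other room keeps the pool alive through the loss of either entire JBOD.)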
On Oct 24, 2006, at 04:19, Roch wrote:
Michel Kintz writes:
Matthew Ahrens wrote:
Richard Elling - PAE wrote:
Anthony Miller wrote:
Hi,
I've searched the forums and not found any answer to the following.
I have 2 JBOD arrays each with 4 disks.
I want to create a raidz on one
Hello Noël,
Tuesday, October 24, 2006, 1:37:20 AM, you wrote:
ND Hey Robert,
ND No, all the code fixes and features I mentioned before I developed and
ND putback before I left Sun, so no active development is happening or
ND anything. I still like to hang out on the zfs alias though just
Our thinking is that if you want more redundancy than RAID-Z, you should
use RAID-Z with double parity, which provides more reliability and more
usable storage than a mirror of RAID-Zs would.
This is only true if the drives have either independent or identical failure
modes, I think. Consider
On October 24, 2006 9:19:07 AM -0700 Anton B. Rang [EMAIL PROTECTED]
wrote:
Our thinking is that if you want more redundancy than RAID-Z, you should
use RAID-Z with double parity, which provides more reliability and more
usable storage than a mirror of RAID-Zs would.
This is only true if the
I'm trying to create a directory hierarchy such that whenever a file is created it
is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves as
expected; however, on ZFS it does not.
: pearson TS 68 $; mkdir ~/tmp/acl
: pearson TS 69 $; df -h
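A minimal sketch of the kind of inheritable ACL being attempted here (illustrative only: the group@ entry, the permission set and the ~/tmp/acl target are assumptions, not taken from the post above):
$ mkdir ~/tmp/acl
$ chmod A+group@:read_data/write_data:file_inherit/dir_inherit:allow ~/tmp/acl
$ ls -dv ~/tmp/acl   # show the ACL, including the inheritable entries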
Matthew Ahrens wrote:
Erik Trimble wrote:
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or
RAIDZ2 device is actually one of the more critical missing features
from ZFS, IMHO. It is very common for folks to add an additional shelf
or shelves into an existing array setup, and if
On Oct 24, 2006, at 12:33 PM, Frank Cusack wrote:
On October 24, 2006 9:19:07 AM -0700 Anton B. Rang
[EMAIL PROTECTED] wrote:
Our thinking is that if you want more redundancy than RAID-Z, you should
use RAID-Z with double parity, which provides more reliability and more
usable storage
On October 24, 2006 2:26:49 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote:
Since the person is dealing with JBODS and not hardware RAID arrays, my
suggestion is to combine ZFS and SVM.
1) Use ZFS and make a raidz-based ZVOL of disks on each of the two JBODs
2) Use SVM to mirror the two ZVOLs.
Pedantic question, what would this gain us other than better data
retention?
Space and (especially?) performance would be worse with RAID-Z+1
than 2-way mirrors.
-- richard
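A minimal sketch of the ZFS + SVM combination quoted above, with hypothetical device, pool and metadevice names (whether metainit will accept zvol paths as components is itself an assumption here, so treat this as illustrative rather than tested):
# zpool create jbod1 raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0
# zpool create jbod2 raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0
# zfs create -V 100g jbod1/vol
# zfs create -V 100g jbod2/vol
# metainit d11 1 1 /dev/zvol/dsk/jbod1/vol
# metainit d12 1 1 /dev/zvol/dsk/jbod2/vol
# metainit d10 -m d11
# metattach d10 d12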
Frank Cusack wrote:
On October 24, 2006 9:19:07 AM -0700 Anton B. Rang
[EMAIL PROTECTED] wrote:
Our thinking is that if
Chris Gerhard wrote:
I'm trying to create a directory hierarchy such that whenever a file is created it
is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves as
expected; however, on ZFS it does not.
So what exactly are you trying to
I started sharing out zfs filesystems via NFS last week using
sharenfs=on. That seems to work fine until I reboot. Turned
out the NFS server wasn't enabled - I had to enable
nfs/server, nfs/lockmgr and nfs/status manually. This is a stock
SXCR b49 (ZFS root) install - don't think I'd changed
Hi all,
Sorry for the newbie question, but I've looked at the docs and haven't
been able to find an answer for this.
I'm working with a system where the pool has already been configured and
want to determine what the configuration is. I had thought that'd be
with zpool status -v poolname,
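A hedged example of the sort of output to expect (the pool name, device names and layout below are invented; the real 'zpool status -v' output reflects however the pool was actually built):
# zpool status -v tank
  pool: tank
 state: ONLINE
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
errors: No known data errors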
On Oct 24, 2006, at 2:46 PM, Richard Elling - PAE wrote:
Pedantic question, what would this gain us other than better data
retention?
Space and (especially?) performance would be worse with RAID-Z+1
than 2-way mirrors.
You answered your own question: it would gain the user better data
There are two approaches:
1) RAID 1+Z where you mirror the individual drives across trays and
then RAID-Z the whole thing
2) RAID Z+1 where you RAIDZ each tray and then mirror them
I would argue that you can lose the most drives in configuration 1
and stay alive:
With a simple mirrored
On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote:
I started sharing out zfs filesystems via NFS last week using
sharenfs=on. That seems to work fine until I reboot. Turned
out the NFS server wasn't enabled - I had to enable
nfs/server, nfs/lockmgr and nfs/status manually. This is a
On October 24, 2006 3:15:10 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote:
On Oct 24, 2006, at 2:46 PM, Richard Elling - PAE wrote:
Pedantic question, what would this gain us other than better data
retention?
Space and (especially?) performance would be worse with RAID-Z+1
than 2-way mirrors.
Matt -
The 'zpool status -v' output is guaranteed to be exactly the same as
what ZFS sees.
The only exception to this is if you run the command as a non-root user,
and the device paths have changed, then the path names may be incorrect.
Running it once as root will correctly update the paths.
I've discussed this with some guys I know, and we decided that your
admin must have given you an incorrect description.
BTW, that config falls outside of best practice; the current thinking
is to use raid-z groups of not much more than 10 disks.
You may stripe multiple such groups into a
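A hedged sketch of that style of layout, using made-up device names (three 9-disk raidz groups striped into one pool instead of a single wide group; adjust the grouping to the real hardware):
# zpool create bigtank \
    raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t8d0 \
    raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 c2t8d0 \
    raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0 c3t8d0
Further raidz groups can be striped in later with 'zpool add bigtank raidz ...'.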
After some quick experimenting, I determined that it is in fact a single
raidz pool with all 47 devices. Apparently something was either done
wrong or miscommunicated in the process.
Sorry for the bandwidth.
- Matt
Matt Ingenthron wrote:
Hi all,
Sorry for the newbie question, but I've
On Oct 24, 2006, at 3:23 PM, Frank Cusack wrote:
http://blogs.sun.com/roch/entry/when_to_and_not_to says a raid-z
vdev has the read throughput of 1 drive for random reads. Compared
to #drives for a stripe. That's pretty significant.
Okay, then if the person can stand to lose even more
On October 24, 2006 3:31:41 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote:
On Oct 24, 2006, at 3:23 PM, Frank Cusack wrote:
http://blogs.sun.com/roch/entry/when_to_and_not_to says a raid-z
vdev has the read throughput of 1 drive for random reads. Compared
to #drives for a stripe. That's pretty
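(To put rough, assumed numbers on that claim: if each disk does on the order of 100 random-read IOPS, a single 47-disk raid-z vdev delivers roughly 100 random-read IOPS, a plain 47-way dynamic stripe roughly 4,700, and five striped raid-z groups somewhere around 500.)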
Hi.
On nfs clients which are mounting file system f3-1/d611 I can see 3-5s
periods of 100% busy (iostat) and almost no IOs issued to the nfs server; on the nfs
server at the same time disk activity is almost 0 (both iostat and zpool
iostat). However, CPU activity increases in SYS during that
Hello Matthew,
Tuesday, October 24, 2006, 3:36:13 AM, you wrote:
MA FYI, we're working on being able to shrink pools with no restrictions.
MA Unfortunately I don't have an ETA for you on this, though.
That's great!
MA And as I'm sure you know, you can always grow pools :-)
Well, partially -
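For reference, a hedged sketch of how a pool is grown today, with a made-up pool and device names (the "partially" above refers to the fact that an existing raidz group cannot have disks added to it; you can only stripe in a new top-level vdev, or replace each disk with a larger one):
# zpool add tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0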
Erik Trimble wrote:
Matthew Ahrens wrote:
Erik Trimble wrote:
The ability to expand (and, to a lesser extent, shrink) a RAIDZ or
RAIDZ2 device is actually one of the more critical missing features
from ZFS, IMHO. It is very common for folks to add an additional shelf
or shelves into an existing
Frank Cusack wrote:
I don't think we know what the OP wanted. :-)
I understand the paranoia around overlapping raid levels - And yes they
are out to get you - but in the past some of the requirements were
around performance in a failure mode. Do we have any data concerning the
Forgot to mention - after the quota was lowered there is still no problem - everything works
OK.
Mark Shellenbaum wrote:
Chris Gerhard wrote:
I'm trying to create a directory hierarchy such that whenever a file is
created it is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves
as expected; however, on ZFS it does not.
So what
Chris Gerhard wrote:
Mark Shellenbaum wrote:
Chris Gerhard wrote:
I'm trying to create a directory hierarchy such that whenever a file is
created it is created mode 664, with directories 775.
Now I can do this with chmod to create the ACL on UFS and it behaves
as expected; however, on ZFS it
On 24/10/06, Eric Schrock [EMAIL PROTECTED] wrote:
On Tue, Oct 24, 2006 at 08:01:21PM +0100, Dick Davies wrote:
Shouldn't a ZFS share permanently enable NFS?
# svcprop -p application/auto_enable nfs/server
true
This property indicates that regardless of the current
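A hedged example of checking and enabling the services by hand (standard SMF commands; 'svcadm enable -r' also enables the service's dependencies):
# svcs nfs/server nfs/lockmgr nfs/status
# svcadm enable -r svc:/network/nfs/server:default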
Is this a known problem/bug?
$ zfs snapshot zpool/[EMAIL PROTECTED]
internal error: unexpected error 16 at line 2302 of ../common/libzfs_dataset.c
This occurred on:
$ uname -a
SunOS azalin 5.10 Generic_118833-24 sun4u sparc SUNW,Sun-Blade-2500
Hi.
If I create an Oracle volume using zfs like this
# zpool create -f oracle c0t1d0
# zfs create -V 500mb oracle/system.dbf
# cd /dev/zvol/rdsk/oracle
# chown oracle:oinstall system.dbf
Would it be similar to a vxvm raw volume like
/dev/vx/rdsk/oracle/system.dbf
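A quick, hedged way to inspect what those commands created (names as in the example above): the zvol appears as a block device under /dev/zvol/dsk and as a raw character device under /dev/zvol/rdsk, which is broadly the analogue of the vxvm rdsk path.
# zfs list -t volume
# ls -lL /dev/zvol/rdsk/oracle/system.dbf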
On October 24, 2006 2:58:58 PM -0700 Thomas Maier-Komor
[EMAIL PROTECTED] wrote:
Is this a known problem/bug?
$ zfs snapshot zpool/[EMAIL PROTECTED]
internal error: unexpected error 16 at line 2302 of
../common/libzfs_dataset.c
I had this problem also. I think the answer was to unmount the
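For what it's worth, error 16 is EBUSY. A hedged sequence along the lines of that suggestion, with a made-up dataset name:
# zfs unmount tank/fs
# zfs snapshot tank/fs@today
# zfs mount tank/fs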
Hello,
I am not sure if I am posting in the correct forum, but it seems somewhat zfs
related, so I thought I'd share it.
While the machine was idle, I started a scrub. Around the time the scrubbing
was supposed to be finished, the machine panicked.
This might be related to the 'metadata
On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote:
...
While the machine was idle, I started a scrub. Around the time the scrubbing
was supposed to be finished, the machine panicked.
This might be related to the 'metadata corruption' that happened earlier to me.
Here is the log, any
On 24-Oct-06, at 9:11 PM, James McPherson wrote:
this error from the marvell88sx driver is of concern. The 10b8b decode
and disparity error messages make me think that you have a bad piece
of hardware. I hope it's not your controller but I can't tell without
more data. You should have a
Robert Milkowski wrote:
Hi.
On nfs clients which are mounting file system f3-1/d611 I can see 3-5s
periods of 100% busy (iostat) and almost no IOs issued to the nfs server; on the nfs
server at the same time disk activity is almost 0 (both iostat and zpool
iostat). However, CPU activity increases
On 24-Oct-06, at 9:47 PM, James McPherson wrote:
On 10/25/06, Siegfried Nikolaivich [EMAIL PROTECTED] wrote:
And this is shown on the rest of the ports:
c0t?d0 Soft Errors: 6 Hard Errors: 0 Transport Errors: 0
Vendor: ATA Product: ST3320620AS Revision: C Serial No: