Can anybody tell me about the RAID-Z architecture? I tried to understand it by
searching Google, but it still isn't clear to me. I don't know why it beats RAID-5. I
know it solves the RAID-5 write hole because it has copy-on-write for
data integrity, but I don't understand the full-stripe write
Please don't do this as a rule; it makes for horrendous support issues
and breaks a lot of health-check tools.
>> Actually, you can use the existing name space for this. By default,
>> ZFS uses /dev/dsk. But everything in /dev is a symlink. So you could
>> setup your own space, say /dev/mykno
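A minimal sketch of that idea (the directory and device names below are invented, and
note the objection above that doing this as a rule breaks support and health-check
tooling):

    # create a private namespace of meaningful names pointing at the real devices
    mkdir /dev/mydisks
    ln -s /dev/dsk/c6t600A0B800011730Ed0s0 /dev/mydisks/array1-lun0   # hypothetical WWN-style name
    # build the pool using the friendly path
    zpool create tank /dev/mydisks/array1-lun0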
On Tue, Sep 25, 2007 at 06:01:00PM -0700, Vincent Fox wrote:
> I don't understand. How do you
>
> "setup one LUN that has all of the NVRAM on the array dedicated to it"
>
> I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
> thick here, but can you be more specific for the n00b?
On 9/25/07, Gregory Shaw <[EMAIL PROTECTED]> wrote:
>
>
>
> On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
>
> Dale Ghent wrote:
> On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
> The problem with this is that wrong information is much worse than no
> information; there is no way to automatically validate the information,
> and therefore people are involved.
On Sep 25, 2007, at 7:09 PM, Richard Elling wrote:
Dale Ghent wrote:
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
The problem with this is that wrong information is much worse than no
information; there is no way to automatically validate the information,
and therefore people are involved.
Dale Ghent wrote:
> On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
>
>> The problem with this is that wrong information is much worse than no
>> information; there is no way to automatically validate the information,
>> and therefore people are involved. If people were reliable, then even
>> a text file would work.
I don't understand. How do you
"setup one LUN that has all of the NVRAM on the array dedicated to it"
I'm pretty familiar with 3510 and 3310. Forgive me for being a bit
thick here, but can you be more specific for the n00b?
Do you mean from the firmware side or the OS side? Or, since the LUNs used for
On Sep 25, 2007, at 5:48 PM, Richard Elling wrote:
Greg Shaw wrote:
James C. McPherson wrote:
Bill Sommerfeld wrote:
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
How would you gather that information?
the tools to use would be dependent on the actual storage device
in use.
> We need high availability, so are looking at Sun
> Cluster. That seems to add
> an extra layer of complexity, but there's no
> way I'll get signoff on
> a solution without redundancy. It would appear that
> ZFS failover is
> supported with the latest version of Solaris/Sun
> Cluster? I was speak
Just did '# zpool upgrade -a'; it was already ZFS version 4, but anyway.
Nothing changed. Any ideas?
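A couple of hedged follow-up commands (standard zpool subcommands, nothing specific to
this pool) that show what the upgrade actually has to work with:

    zpool upgrade        # lists pools not yet at the latest on-disk format version
    zpool upgrade -v     # lists the pool versions this release supports and their features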
> The SE also told me that Sun Cluster requires hardware raid, which
> conflicts with the general recommendation to feed ZFS raw disk. It seems
> such a configuration would either require configuring zdevs directly on the
> raid LUNs, losing ZFS self-healing and checksum correction features
On Tue, Sep 25, 2007 at 04:47:48PM -0700, Vincent Fox wrote:
> It seems like ZIL is a separate issue.
It is very much the issue: the separate log device work was done exactly
to make better use of this kind of non-volatile memory. To use this, set up
one LUN that has all of the NVRAM on the array
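A hedged sketch of what that looks like once such a LUN exists (the pool and device
names below are placeholders, and the bits in use need to be recent enough to support
separate log devices):

    # dedicate the NVRAM-backed LUN to the ZFS intent log
    zpool add tank log c4t600C0FF000000000d0
    # 'zpool status tank' should then list it under a separate 'logs' section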
On Sep 25, 2007, at 7:48 PM, Richard Elling wrote:
>
> The problem with this is that wrong information is much worse than no
> information; there is no way to automatically validate the information,
> and therefore people are involved. If people were reliable, then even
> a text file would work.
Greg Shaw wrote:
> James C. McPherson wrote:
>> Bill Sommerfeld wrote:
>>
>>> On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
>>>
How would you gather that information?
>>> the tools to use would be dependent on the actual storage device in use.
>>> luxadm for
It seems like ZIL is a separate issue.
I have read that putting ZIL on a separate device helps, but what about the
cache?
OpenSolaris has some flag to disable it; Solaris 10u3/4 do not. I have
dual controllers with NVRAM and battery backup, so why can't I make use of it?
Would I be wasting my
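For reference, the flag being alluded to is (I believe) zfs_nocacheflush, which stops
ZFS from issuing cache-flush commands after synchronous writes; on releases that have
it, it can be set in /etc/system. A sketch, and only sensible when the controller cache
really is non-volatile:

    * /etc/system -- only with battery-backed, mirrored controller cache
    set zfs:zfs_nocacheflush = 1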
On Sep 25, 2007, at 7:21 PM, James C. McPherson wrote:
>
> That sounds like an ok RFE to me.
>
> For some of the arrays (e.g. HDS) that we come
> into contact with, it's possible to decode the
> device GUID into something meaningful to a
> human, but that's generally closed information.
To me, this
Greg Shaw wrote:
> James C. McPherson wrote:
>> Bill Sommerfeld wrote:
>>
>>> On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
>>>
How would you gather that information?
>>> the tools to use would be dependent on the actual storage device in use.
>>> luxadm for A5
James C. McPherson wrote:
> Bill Sommerfeld wrote:
>
>> On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
>>
>>> How would you gather that information?
>>>
>> the tools to use would be dependent on the actual storage device in use.
>> luxadm for A5x00 and V8x0 internal storage,
It would be a manual process. As with any arbitrary name, it's a useful
tag, not much more.
James C. McPherson wrote:
> Gregory Shaw wrote:
>
>> Hi. I'd like to request a feature be added to zfs. Currently, on
>> SAN attached disk, zpool shows up with a big WWN for the disk. If
>> ZFS
Bill Sommerfeld wrote:
> On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
>> How would you gather that information?
>
> the tools to use would be dependent on the actual storage device in use.
> luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
No consistent int
io:::start probe does not seem to get zfs filenames in
args[2]->fi_pathname. Any ideas how to get this info?
-neel
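Not an answer to the io provider question itself, but a hedged workaround sketch: trace
at the syscall layer instead, where the built-in fds[] array still resolves ZFS
pathnames (this only catches read/write syscalls, not I/O that ZFS issues internally):

    dtrace -n 'syscall::read:entry,syscall::write:entry
        /fds[arg0].fi_fs == "zfs"/ { @[fds[arg0].fi_pathname] = count(); }'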
On Tue, 25 Sep 2007, Peter Tribble wrote:
> This was some time ago (a very long time ago, actually). There are two
> fundamental problems:
>
> 1. Each zfs filesystem consumes kernel memory. Significant amounts, 64K
> is what we worked out at the time. For normal numbers of filesystems that's
> not
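Rough arithmetic on that figure: at 64 KB each, the 20,000 filesystems discussed
elsewhere in this thread work out to roughly 1.25 GB of kernel memory just to have them
mounted, before any caching.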
On Wed, 2007-09-26 at 08:26 +1000, James C. McPherson wrote:
> How would you gather that information?
the tools to use would be dependent on the actual storage device in use.
luxadm for A5x00 and V8x0 internal storage, sccli for 3xxx, etc., etc.,
> How would you ensure that it stayed accurate in
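As an aside, a hedged example of the sort of per-device lookup being described for the
luxadm case (the device path below is a placeholder):

    luxadm probe                          # list FC enclosures and their logical paths
    luxadm display /dev/rdsk/c2t0d0s2     # map a device back to enclosure/slot detail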
On Mon, 24 Sep 2007, Dale Ghent wrote:
> Not to sway you away from ZFS/NFS considerations, but I'd like to add
> that people who in the past used DFS typically went on to replace it with
> AFS. Have you considered it?
You're right, AFS is the first choice coming to mind when replacing DFS. We
act
Tim Spriggs wrote:
> James C. McPherson wrote:
>> Gregory Shaw wrote:
...
>>> The above would be very useful should a disk fail to identify what
>>> device is what.
>> How would you gather that information?
>> How would you ensure that it stayed accurate in
>> a hotplug world?
> If it is stored o
James C. McPherson wrote:
> Gregory Shaw wrote:
>
>> Hi. I'd like to request a feature be added to zfs. Currently, on
>> SAN attached disk, zpool shows up with a big WWN for the disk. If
>> ZFS (or the zpool command, in particular) had a text field for
>> arbitrary information, it would be possible to add something that
Gregory Shaw wrote:
> Hi. I'd like to request a feature be added to zfs. Currently, on
> SAN attached disk, zpool shows up with a big WWN for the disk. If
> ZFS (or the zpool command, in particular) had a text field for
> arbitrary information, it would be possible to add something that
Hi. I'd like to request a feature be added to zfs. Currently, on
SAN attached disk, zpool shows up with a big WWN for the disk. If
ZFS (or the zpool command, in particular) had a text field for
arbitrary information, it would be possible to add something that
would indicate what LUN on
On 9/24/07, Paul B. Henson <[EMAIL PROTECTED]> wrote:
> On Sat, 22 Sep 2007, Peter Tribble wrote:
>
> > filesystem per user on the server, just to see how it would work. While
> > managing 20,000 filesystems with the automounter was trivial, the attempt
> > to manage 20,000 zfs filesystems wasn't en
On Tue, 2007-09-25 at 10:14 -0700, Vincent Fox wrote:
> Where is ZFS with regards to the NVRAM cache present on arrays?
>
> I have a pile of 3310 with 512 megs cache, and even some 3510FC with
> 1-gig cache. It seems silly that it's going to waste. These are
> dual-controller units so I have no worry about loss of cache information.
On Tue, Sep 25, 2007 at 10:14:57AM -0700, Vincent Fox wrote:
> Where is ZFS with regards to the NVRAM cache present on arrays?
>
> I have a pile of 3310 with 512 megs cache, and even some 3510FC with 1-gig
> cache. It seems silly that it's going to waste. These are dual-controller
> units so I have no worry about loss of cache information.
Where is ZFS with regards to the NVRAM cache present on arrays?
I have a pile of 3310 with 512 megs cache, and even some 3510FC with 1-gig
cache. It seems silly that it's going to waste. These are dual-controller
units so I have no worry about loss of cache information.
It looks like OpenSolaris
Paul B. Henson wrote:
> But all quotas were set in a single flat text file. Anytime you added a new
> quota, you needed to turn off quotas, then turn them back on, and quota
> enforcement was disabled while it recalculated space utilization.
I believe in later versions of the OS 'quota resize' di
On 9/25/07 3:37 AM, "Sergiy Kolodka" <[EMAIL PROTECTED]>
wrote:
> Hi Guys,
>
> I'm playing with Blade 6300 to check performance of compressed ZFS with Oracle
> database.
> After some really simple tests I noticed that the default (well, not really
> default, some patches applied, but definitely no one bothered to tweak the
> disk subsystem or anything else) installation of S10U3
Hi Jason, this should have helped:
6542676 ARC needs to track meta-data memory overhead
Some of the relevant lines from arc.c:
1551         if (arc_meta_used >= arc_meta_limit) {
1552                 /*
1553                  * We are exceeding our meta-data cache limit.
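If it helps to confirm whether that limit is actually being hit, the ARC counters can be
inspected on a live system; a hedged example (the arc_meta_* fields only show up on bits
that carry the fix above):

    kstat -n arcstats          # ARC statistics, including meta-data usage/limit where present
    echo ::arc | mdb -k        # same information via the kernel debugger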
Hi Guys,
I'm playing with Blade 6300 to check performance of compressed ZFS with Oracle
database.
After some really simple tests I noticed that the default (well, not really
default, some patches applied, but definitely no one bothered to tweak the disk
subsystem or anything else) installation of S10U3 is
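A hedged sketch of the obvious knobs for this kind of test (dataset names are
placeholders):

    zfs set compression=on tank/oradata       # lzjb compression
    zfs set recordsize=8k tank/oradata        # often matched to the Oracle db_block_size
    zfs get compressratio tank/oradata        # see how much the data actually compresses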