Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-28 Thread Matt Ingenthron

Christopher Scott wrote:


I currently have a server that has a much higher rate of checksum 
errors than what you describe to be "normal." I knew it wasn't good 
but I figured if zfs is fixing it for me why mess with it?


Is there anything I can do to troubleshoot where the problem might be 
coming from (aside from replacing hardware piece by piece) ? 
Other than correlations between where the errors are occurring and the 
physical paths to things?


You'll probably want to check with your vendor (or vendors) for the 
various components.  I don't know if yours is Sun hardware, but if so 
there's usually a test suite of some sort (VTS or others) that can 
exercise components individually and help to isolate such problems.  
This can be hard to do though, and not all problems can be caught.  
Still if it's Sun stuff and under contract/warranty, open a case and 
they'll give you some steps to diagnose (or in some cases, do it for you).


If it's a mix of vendors, though, you may have a harder time: individual 
vendors may have specific requirements for their diagnostic tools, assuming 
such tools are available at all.  Having been through this with customers 
in the past, I can say it can be quite a challenge.


Consider yourself lucky that zfs is catching/correcting things!

- Matt

--
Matt Ingenthron - Web Infrastructure Solutions Architect
Sun Microsystems, Inc. - Global Systems Practice
http://blogs.sun.com/mingenthron/
email: [EMAIL PROTECTED] Phone: 310-242-6439



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-28 Thread Christopher Scott

Anton B. Rang wrote:

Clearly, the existence of a high error rate (say, more than one error every two 
weeks on a server pushing 100 MB/second) would point to a hardware or software 
problem; but fewer errors may simply be “normal” for standard hardware


I currently have a server that has a much higher rate of checksum errors 
than what you describe to be "normal." I knew it wasn't good but I 
figured if zfs is fixing it for me why mess with it?


Is there anything I can do to troubleshoot where the problem might be 
coming from (aside from replacing hardware piece by piece) ?

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Anantha N. Srirama
Glad it worked for you. I suspect in your case the corruption happened way down 
in the tree and you could get around it by pruning the tree (rm the file) below 
the point of corruption. I suspect this could be a very localized corruption, 
like an alpha-particle event where a bit was flipped on the platter or in the 
cache of the storage subsystem before destaging to disk. In our case the 
problem was pervasive because it affected our data path (FC).

You do raise a very valid point. It would be nice if ZFS provided better 
diagnostics, namely identifying exactly where in the tree it found the 
corruption. At that point we could decide whether the remedy is to contain the 
damage (similar to fsck discarding all suspect inodes) and continue.
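
For what it's worth, zdb can sometimes map such a report to something concrete, 
assuming the pool is importable and that the zdb in your build accepts a dataset 
name and an object number (the names below are the ones reported in the 
'Production ZFS Server Death' thread and are only illustrative):

  # zdb -dddd mailstore/imap 4539004

With enough -d flags zdb dumps the dnode for that object, which at least tells 
you what kind of object it is.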

For example, I have very high regard for the space management in the Oracle 
database. When it finds a bad block it prints out the address of the block and 
marks it corrupt; it does not put the whole file/tablespace/table/index into 
'suspect' mode the way ZFS does. The DBA can then either drop the table/index 
that contains the bad block or extract the data from the table minus the bad 
block. Oracle handles it gracefully, giving the user/DBA a chance to recover 
the known good data.

For ZFS to achieve wide acceptance we must have the ability to pinpoint the 
problem area and take remedial action (rm, for example), not simply give up. 
Yes, there are times when the corruption affects a block high up in the chain, 
making the situation hopeless; in such a case we would have to discard and 
restart. ZFS has now solved one part of the problem, namely identifying bad 
data and doing so reliably, and it provides resiliency in the form of 
RAID-Z(2) and RAID-1. For it to realize its full potential it must also 
provide tools to discard corrupt parts (branches) of the tree and give us a 
chance to save the remaining data. We won't always have the luxury of 
rebuilding the pool and restoring in a production environment.

Easier said than done, me thinks.

Good night.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Nicolas Williams
On Tue, Nov 28, 2006 at 08:03:33PM -0500, Toby Thain wrote:
> As others have pointed out, you wouldn't have reached this point with  
> redundancy - the file would have remained intact despite the hardware  
> failure. It is strictly correct that to restore the data you'd need  
> to refer to a backup, in this case.

Well, you could get really unlucky no matter how much redundancy you
have, but now we're splitting hairs :)  (The more redundancy, the worse
your luck has to be to be truly out of luck.)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs hot spare not automatically getting used

2006-11-28 Thread Sanjeev Bagewadi

Jim,

James F. Hranicky wrote:


Sanjeev Bagewadi wrote:
 


Jim,

We did hit a similar issue yesterday on build 50 and build 45, although the
node did not hang.
In one of the cases we saw that the hot spare was not of the same
size... can you check if this is true?
   



It looks like they're all slightly different sizes.
 

Interestingly, during our demo runs at the recent FOSS event 
(http://foss.in) we had no issues with this (snv build 45). We had a RAID-Z 
config of 3 disks and 1 spare disk, and what we found was that the spare 
kicked in.

Here is how we tried it:
- Pulled out one of the 3 disks.
- Kicked off a write to the FS on the pool (i.e. dd to a new file in the FS).
- The spare kicked in after a while. I guess there is some delay in the 
detection; I am not sure if there is some threshold beyond which it kicks in. 
Need to check the code for this.


 


Do you have a threadlist from the node when it was hung ? That would
reveal some info.
   



Unfortunately I don't. Do you mean the output of

::threadlist -v
 


Yes. That would be useful. Also, check the zpool status output.


from

mdb -k
 


Run the following :
# echo "::threadlist -v" | mdb -k > /var/tmp/threadlist.out

Regards,
Sanjeev.

--
Solaris Revenue Products Engineering,
India Engineering Center,
Sun Microsystems India Pvt Ltd.
Tel:x27521 +91 80 669 27521 


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: zfs corrupted my data!

2006-11-28 Thread Toby Thain


On 28-Nov-06, at 10:35 PM, Anton B. Rang wrote:


No, you still have the hardware problem.


What hardware problem?

There seems to be an unspoken assumption that any checksum error  
detected by ZFS is caused by a relatively high error rate in the  
underlying hardware.


There are at least two classes of hardware-related errors. One  
class are those which are genuinely being introduced at a high  
rate, as exemplified by the post earlier in this list about the bad  
FibreChannel port on a SAN. The other are those which are very rare  
events, for instance a radiation-induced bit-flip in SRAM. In this  
case, there is no “problem” as such to be repaired (well, perhaps  
if you live in Denver you could buy radiation shielding for your  
computer room ;-).


(There are also software errors. Errors in ZFS itself or anywhere  
else in the Solaris kernel, including device drivers, can result in  
erroneous data being written to disk. There may be a software  
problem, rather than a hardware problem, in any individual case.)


Clearly, the existence of a high error rate (say, more than one  
error every two weeks on a server pushing 100 MB/second) would  
point to a hardware or software problem; but fewer errors may  
simply be “normal” for standard hardware.


Her original configuration wasn't redundant, so she should expect  
this kind of manual recovery from time to time. That seems a logical  
conclusion to me -- or is this one of those once-in-a-lifetime strikes?


--Toby




This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




[zfs-discuss] Re: zfs corrupted my data!

2006-11-28 Thread Anton B. Rang
> No, you still have the hardware problem.

What hardware problem?

There seems to be an unspoken assumption that any checksum error detected by 
ZFS is caused by a relatively high error rate in the underlying hardware.

There are at least two classes of hardware-related errors. One class are those 
which are genuinely being introduced at a high rate, as exemplified by the post 
earlier in this list about the bad FibreChannel port on a SAN. The other are 
those which are very rare events, for instance a radiation-induced bit-flip in 
SRAM. In this case, there is no “problem” as such to be repaired (well, perhaps 
if you live in Denver you could buy radiation shielding for your computer room 
;-).

(There are also software errors. Errors in ZFS itself or anywhere else in the 
Solaris kernel, including device drivers, can result in erroneous data being 
written to disk. There may be a software problem, rather than a hardware 
problem, in any individual case.)

Clearly, the existence of a high error rate (say, more than one error every two 
weeks on a server pushing 100 MB/second) would point to a hardware or software 
problem; but fewer errors may simply be “normal” for standard hardware.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-28 Thread David Dyer-Bennet

On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:

Well, I fixed the HW but I had one bad file, and the problem was that ZFS
was saying "delete the pool and restore from tape" when, it turns out, the
answer is just find the file with the bad inode, delete it, clear the device
and scrub.  Maybe more of a documentation problem, but it sure is
disconcerting to have a file system threatening to give up the game over one
bad file (and the real irony: it was a file in someone's TRASH!)


The ZFS documentation was assuming you wanted to recover the data, not
abandon it -- which, realistically, isn't always what people want. When
you know a small number of files are trashed, it's often easier to
delete those files and either just go on or restore only those files,
rather than do a full restore.  So yeah, perhaps the documentation could
be more helpful in that situation.


Anyway I'm back in business without a restore (and with a rebuilt RAID) but
yeesh, it sure took a lot of escalating to get to the point where someone
knew to tell me to do a find -inum.


Ah, if people here had realized that's what you needed to know, many
of us could have told you I'm sure.  I, at least, hadn't realized that
was one of the problem points.  (Probably too focused on the ZFS
content to think about the general issues enough!)

Very glad you're back in service, anyway!
--
David Dyer-Bennet
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs corrupted my data!

2006-11-28 Thread Anton B. Rang
>> With zfs, there's this ominous
>> message saying "destroy the filesystem and restore
>> from tape". That's  not so good, for one corrupt
>> file.

> It is strictly correct that to restore the data you'd need
> to refer to a backup, in this case.

It is not, however, correct that to restore the data you need to destroy the 
entire file system and restore it. If we’re stating that the fix for a bad 
block in an individual data file is to reload the whole FS, there’s a 
documentation issue. We should say something more like “An unrecoverable error 
was found in ‘/homepool/johndoe/.login’. This file should be restored from 
backup.”
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Toby Thain


On 28-Nov-06, at 10:01 PM, Elizabeth Schwartz wrote:

Well, I fixed the HW but I had one bad file, and the problem was  
that ZFS was saying "delete the pool and restore from tape" when,  
it turns out, the answer is just find the file with the bad inode,  
delete it, clear the device and scrub.  Maybe more of a  
documentation problem, but it sure is disconcerting to have a file  
system threatening to give up the game over one bad file (and the  
real irony: it was a file in someone's TRASH!)


Anyway I'm back in business without a restore (and with a rebuilt  
RAID) but yeesh, it sure took a lot of escalating to get to the  
point where someone knew to tell me to do a find -inum.


Do you now have a redundant ZFS configuration, to prevent future data  
loss/inconvenience?

--T



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Elizabeth Schwartz

Well, I fixed the HW but I had one bad file, and the problem was that ZFS
was saying "delete the pool and restore from tape" when, it turns out, the
answer is just find the file with the bad inode, delete it, clear the device
and scrub.  Maybe more of a documentation problem, but it sure is
disconcerting to have a file system threatening to give up the game over one
bad file (and the real irony: it was a file in someone's TRASH!)

Anyway I'm back in business without a restore (and with a rebuilt RAID) but
yeesh, it sure took a lot of escalating to get to the point where someone
knew to tell me to do a find -inum.
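
For later readers, here is a minimal sketch of that sequence, using the pool, 
device, and object number reported elsewhere in this thread; the mount point 
and file path are assumptions, and it presumes your build has 'zpool clear':

  # zpool status -v mailstore          (note the DATASET and OBJECT columns)
  # find /mailstore/imap -xdev -inum 4539004
  # rm /mailstore/imap/path/to/bad-file
  # zpool clear mailstore c1t4d0
  # zpool scrub mailstore

This only makes sense once the underlying hardware fault has been fixed, as it 
was here.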
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Production ZFS Server Death (06/06)

2006-11-28 Thread Anantha N. Srirama
Oh my, one day after I posted my horror story another one strikes. This is 
validation of the design objectives of ZFS; it looks like this sort of thing 
happens more often than one might expect. In the past we would have just 
attributed this kind of problem to application-induced corruption; now ZFS is 
pinning it squarely on the storage subsystem.

If you didn't configure any ZFS redundancy then your data is done for, as the 
support person indicated. Make sure you follow the instructions in the ZFS FAQ, 
otherwise your server will end up in an endless panic-reboot cycle.

Don't shoot the messenger (ZFS); consider running diagnostics on your storage 
subsystem. Good luck.
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Al Hopper
On Tue, 28 Nov 2006, Matthew Ahrens wrote:

> Elizabeth Schwartz wrote:
> > On 11/28/06, *David Dyer-Bennet* <[EMAIL PROTECTED]> wrote:
> >
> > Looks to me like another example of ZFS noticing and reporting an
> > error that would go quietly by on any other filesystem.  And if you're
> > concerned with the integrity of the data, why not use some ZFS
> > redundancy?  (I'm guessing you're applying the redundancy further
> > downstream; but, as this situation demonstrates, separating it too far
> > from the checksum verification makes it less useful.)
> >
> >
> > Well, this error meant that two files on the file system were
> > inaccessible, and one user was completely unable to use IMAP, so I don't
> > know about unnoticeable.
>
> David said, "[the error] would go quietly by on any other filesystem".
> The point is that ZFS detected and reported the fact that your hardware
> corrupted the data.  A different filesystem would have simply given your
> application the incorrect data.
>
> > How would I use more redundancy?
>
> By creating a zpool with some redundancy, eg. 'zpool create poolname
> mirror disk1 disk2'.

Or a 3-way mirror.  As others have said, it's likely that your hardware
RAID box has hardware/firmware bugs that, up to now, have gone unnoticed.
It might also be time to upgrade the h/w RAID firmware and/or the disk
drive firmware for the drives that are installed in the h/w RAID box.
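
For the record, a fresh three-way mirrored pool would look something like the 
sketch below (device names are placeholders); for an existing single-device 
pool, 'zpool attach', mentioned elsewhere in this thread, is the 
non-destructive route:

  # zpool create mailstore mirror c1t4d0 c1t5d0 c1t6d0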

In any case, ZFS will keep your h/w RAID supplier honest!  Or ... it's
quite possible that your hardware RAID provider does not know that there is
a (nasty) bug present.  Think about that for a second ...  Not a
pleasant thought for a production system!

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
   Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
 OpenSolaris Governing Board (OGB) Member - Feb 2006
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Richard Elling

comment below...

Jason J. W. Williams wrote:

Hi Richard,

Originally, my thinking was that I'd like to drop one member out of a
3-member RAID-Z and turn it into a RAID-1 zpool.

Although, at the moment I'm not sure.

Currently, I have 3 volume groups in my array with 4 disks each (12 disks
total). These VGs are sliced into 3 volumes each. I then have two
database servers, each using one LUN from each of the 3 VGs RAID-Z'd
together. For redundancy it's great; for performance it's pretty bad.
One of the major issues is the disk seek contention between the
servers since they're all using the same disks, and RAID-Z tries to
utilize all the devices it has access to on every write.

What I thought I'd move to was 6 RAID-1 VGs on the array, and assign
the VGs to each server via a 1 device striped zpool. However, given
the fact that ZFS will kernel panic in the event of bad data I'm
reconsidering how to lay it out.

Essentially I've got 12 disks to work with.

Anyway, that's the long form of wanting to convert from RAID-Z to RAID-1.
Any help is much appreciated.


No such conversion-in-place is possible, today.

The consensus for databases, such as Oracle, is that you want your
logs on a different zpool than your data.  The simplest way to implement
this with redundancy is to mirror the log zpool.  You might try that
first, before you relayout the data.
 -- richard
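
A rough sketch of that layout, with made-up pool and device names, would be:

  # zpool create oralogs mirror c3t0d0 c3t1d0
  # zfs create oralogs/redo

i.e. a small mirrored pool dedicated to the transaction/redo logs, separate
from the pool holding the data files.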


Best Regards,
Jason

On 11/28/06, Richard Elling <[EMAIL PROTECTED]> wrote:

Jason J. W. Williams wrote:
> Is it possible to non-destructively change RAID types in zpool while
> the data remains on-line?

Yes.  With constraints, however.  What exactly are you trying to do?
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Toby Thain


On 28-Nov-06, at 7:02 PM, Elizabeth Schwartz wrote:


On 11/28/06, Frank Cusack <[EMAIL PROTECTED]> wrote:
I suspect this will be the #1 complaint about zfs as it becomes more
popular.  "It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!"

That's not the problem, so much as "zfs says my file system is  
corrupt, how do I get past this?"


Yes, that's your problem right now. But Frank describes a likely  
general syndrome. :-)


With ufs, f'rinstance, I'd run an fsck, kiss the bad file(s)  
goodbye, and be on my way.


No, you still have the hardware problem.

With zfs, there's this ominous message saying "destroy the  
filesystem and restore from tape". That's  not so good, for one  
corrupt file.


As others have pointed out, you wouldn't have reached this point with  
redundancy - the file would have remained intact despite the hardware  
failure. It is strictly correct that to restore the data you'd need  
to refer to a backup, in this case.


And even better, turns out erasing the file might just be enough.  
Although in my case, I now have a new bad object. Sun pointed me to  
docs.sun.com (thanks, that helps!) but I haven't found anything in  
the docs on this so far. I am assuming that my bad object 45654c is  
an inode number of a special file of some sort, but what? And what  
does the range mean? I'd love to read the docs on this.


Problems will continue until your hardware is fixed. (Or you conceal  
them with a redundant ZFS configuration, but that would be a bad idea.)


--Toby








___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Jason J. W. Williams

Hi Richard,

Originally, my thinking was that I'd like to drop one member out of a
3-member RAID-Z and turn it into a RAID-1 zpool.

Although, at the moment I'm not sure.

Currently, I have 3 volume groups in my array with 4 disks each (12 disks
total). These VGs are sliced into 3 volumes each. I then have two
database servers, each using one LUN from each of the 3 VGs RAID-Z'd
together. For redundancy it's great; for performance it's pretty bad.

One of the major issues is the disk seek contention between the
servers since they're all using the same disks, and RAID-Z tries to
utilize all the devices it has access to on every write.

What I thought I'd move to was 6 RAID-1 VGs on the array, and assign
the VGs to each server via a 1 device striped zpool. However, given
the fact that ZFS will kernel panic in the event of bad data I'm
reconsidering how to lay it out.

Essentially I've got 12 disks to work with.

Anyway, that's the long form of wanting to convert from RAID-Z to RAID-1.
Any help is much appreciated.

Best Regards,
Jason

On 11/28/06, Richard Elling <[EMAIL PROTECTED]> wrote:

Jason J. W. Williams wrote:
> Is it possible to non-destructively change RAID types in zpool while
> the data remains on-line?

Yes.  With constraints, however.  What exactly are you trying to do?
  -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Elizabeth Schwartz

On 11/28/06, Frank Cusack <[EMAIL PROTECTED]> wrote:


I suspect this will be the #1 complaint about zfs as it becomes more
popular.  "It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!"



That's not the problem, so much as "zfs says my file system is corrupt, how
do I get past this?" With ufs, f'rinstance, I'd run an fsck, kiss the bad
file(s) goodbye, and be on my way. With zfs, there's this ominous message
saying "destroy the filesystem and restore from tape". That's  not so good,
for one corrupt file. And even better, turns out erasing the file might just
be enough. Although in my case, I now have a new bad object. Sun pointed me
to docs.sun.com (thanks, that helps!) but I haven't found anything in the
docs on this so far. I am assuming that my bad object 45654c is an inode
number of a special file of some sort, but what? And what does the range
mean? I'd love to read the docs on this.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Richard Elling

Jason J. W. Williams wrote:

Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?


Yes.  With constraints, however.  What exactly are you trying to do?
 -- richard
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Frank Cusack

I suspect this will be the #1 complaint about zfs as it becomes more
popular.  "It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!"

#2 how do i grow a raid-z.

The answers to these should probably be in a faq somewhere.  I'd argue
that the best practices guide is a good spot also, but the folks that
would actually find and read that would seem likely to already understand
that zfs detects errors other fs's wouldn't.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Convert Zpool RAID Types

2006-11-28 Thread Jason J. W. Williams

Hello,

Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?

-J
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Richard Elling

Matthew Ahrens wrote:

Elizabeth Schwartz wrote:

How would I use more redundancy?


By creating a zpool with some redundancy, eg. 'zpool create poolname 
mirror disk1 disk2'.


after the fact, you can add a mirror using 'zpool attach'
 -- richard
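
A minimal sketch of that, using the pool from this thread and a made-up
second device:

  # zpool attach mailstore c1t4d0 c1t5d0
  # zpool status mailstore             (wait for the resilver to complete)

Once the resilver finishes the pool is a two-way mirror, and checksum errors
on either side become self-healing.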
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread David Dyer-Bennet

On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:

On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:
> Looks to me like another example of ZFS noticing and reporting an
> error that would go quietly by on any other filesystem.  And if you're
> concerned with the integrity of the data, why not use some ZFS
> redundancy?  (I'm guessing you're applying the redundancy further
> downstream; but, as this situation demonstrates, separating it too far
> from the checksum verification makes it less useful.)

Well, this error meant that two files on the file system were inaccessible,
and one user was completely unable to use IMAP, so I don't know about
unnoticeable.


Is that only because ZFS knows the file is corrupt, and refuses to
allow access?  Possibly another filesystem would happily serve
whatever bits happened to come back from the RaidKing, not knowing any
better.


How would I use more redundancy?


Create a RAIDZ or mirror vdev and put that in the pool, instead of
putting a single device in the pool.  You're probably counting on your
RaidKing to handle the RAID, and then running ZFS on top of that, but
this looks like a situation where the RaidKing had some kind of
unrecoverable (and possibly undetected) error.  Either it did, or else
ZFS corrupted the data all by itself.  One or the other, anyway, has
let you down seriously.  (I'm inclined to have considerable faith in
ZFS at this point, and am not familiar with RaidKing; what's needed is
some kind of actual evidence of which side failed, not guesses.)

Stacking them that way, running the redundancy in a separate layer
from ZFS, means you're not getting the full benefit from the
integration of filesystem, volume management, and redundancy handling
all into ZFS.  On the other hand, running on old hardware that's
heavily set up for offloading RAID activities from the CPU, that might
be the best available tradeoff; I'm not very familiar with such
hardware.
--
David Dyer-Bennet
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Matthew Ahrens

Elizabeth Schwartz wrote:
On 11/28/06, *David Dyer-Bennet* <[EMAIL PROTECTED] > 
wrote:


Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem.  And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy?  (I'm guessing you're applying the redundancy further
downstream; but, as this situation demonstrates, separating it too far
from the checksum verification makes it less useful.)


Well, this error meant that two files on the file system were 
inaccessible, and one user was completely unable to use IMAP, so I don't 
know about unnoticeable.


David said, "[the error] would go quietly by on any other filesystem". 
The point is that ZFS detected and reported the fact that your hardware 
corrupted the data.  A different filesystem would have simply given your 
application the incorrect data.



How would I use more redundancy?


By creating a zpool with some redundancy, eg. 'zpool create poolname 
mirror disk1 disk2'.


--matt
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Darren J Moffat

Jason J. W. Williams wrote:

Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?


Both do. Not only that, but even if you have no redundancy in the pool you
still get checksumming. That is what actually happened in this case:
the checksumming in ZFS detected errors, but due to the lack of
redundancy at the ZFS layer it isn't able to correct them. In a raidz
or mirrored configuration, correction is possible.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Bill Moore
They both use checksums and can provide self-healing data.


--Bill


On Tue, Nov 28, 2006 at 02:54:56PM -0700, Jason J. W. Williams wrote:
> Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?
> 
> Thanks in advance,
> J
> 
> On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:
> >On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:
> >> So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it 
> >ran
> >> for three months, and it's had no hardware errors. But my zfs file system
> >> seems to have died a quiet death. Sun engineering response was to point 
> >to
> >> the FMRI, which says to throw out the zfs partition and start over. I'm 
> >real
> >> reluctant to do that, since it'll take hours to do a tape restore, and we
> >> don't know what's wrong.  I'm seriously wondering if I should just toss 
> >zfs.
> >> Again, this is Solaris 10 06/06, not some beta version. It's an older
> >> server, a 280R with an older SCSI RaidKing
> >
> >Looks to me like another example of ZFS noticing and reporting an
> >error that would go quietly by on any other filesystem.  And if you're
> >concerned with the integrity of the data, why not use some ZFS
> >redundancy?  (I'm guessing you're applying the redundancy further
> >downstream; but, as this situation demonstrates, separating it too far
> >from the checksum verification makes it less useful.)
> >--
> >David Dyer-Bennet
> >___
> >zfs-discuss mailing list
> >zfs-discuss@opensolaris.org
> >http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> >
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Jason J. W. Williams

Do both RAID-Z and Mirror redundancy use checksums on ZFS? Or just RAID-Z?

Thanks in advance,
J

On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:

On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:
> So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
> for three months, and it's had no hardware errors. But my zfs file system
> seems to have died a quiet death. Sun engineering response was to point to
> the FMRI, which says to throw out the zfs partition and start over. I'm real
> reluctant to do that, since it'll take hours to do a tape restore, and we
> don't know what's wrong.  I'm seriously wondering if I should just toss zfs.
> Again, this is Solaris 10 06/06, not some beta version. It's an older
> server, a 280R with an older SCSI RaidKing

Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem.  And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy?  (I'm guessing you're applying the redundancy further
downstream; but, as this situation demonstrates, separating it too far
from the checksum verification makes it less useful.)
--
David Dyer-Bennet
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss




Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Nicolas Williams
On Tue, Nov 28, 2006 at 03:02:59PM -0500, Elizabeth Schwartz wrote:
> So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
> for three months, and it's had no hardware errors. But my zfs file system
> seems to have died a quiet death. Sun engineering response was to point to
> the FMRI, which says to throw out the zfs partition and start over. I'm real
> reluctant to do that, since it'll take hours to do a tape restore, and we
> don't know what's wrong.  I'm seriously wondering if I should just toss zfs.
> Again, this is Solaris 10 06/06, not some beta version. It's an older
> server, a 280R with an older SCSI RaidKing

So you have a one device pool and that device is a RAID device of some
sort, and ZFS is getting errors from that device.  From ZFS' point of
view this is disastrous.  From your point of view this shouldn't happen
because your RAID device ought to save your bacon from single disk
failures (depending on how it's configured).

RAID devices aren't magical -- they can't detect certain kinds of errors
that ZFS can.  But ZFS can only recover from those errors -- provided it
itself is in charge of the RAIDing and there are enough remaining good
devices to reconstruct the correct data.  (Then there's ditto blocks,
but I don't recall the status of that, which would let you add some
degree of redundancy without having to add devices.)
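
(If the 'copies' property has reached the build you are running -- an
assumption, since it is newer than S10 6/06 -- the ditto-block knob for data
looks like:

  # zfs set copies=2 mailstore/imap

Note that it only applies to blocks written after the property is set, and it
does not protect against losing the whole device.)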

The main reason for wanting to use hardware RAID with ZFS is to
significantly reduce the amount of I/O that ZFS has to do (for a 5-disk
RAID-5 we're talking about roughly a fifth of the I/O for the host). But
because this means that ZFS can't do combinatorial reconstruction of bad
disk data, you then want to add mirroring (which adds I/Os again) so that
ZFS can cope with bad data from the RAID device.

How is your RAID device configured?  Does it have any diagnostics?

Or is your RAID device silently corrupting data?  If so ZFS saved you by
detecting that, but because you did not have enough redundancy (from
ZFS' point of view) ZFS can't actually reconstruct the correct data, and
so you lose.

Nico
-- 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Elizabeth Schwartz

On 11/28/06, David Dyer-Bennet <[EMAIL PROTECTED]> wrote:


Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem.  And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy?  (I'm guessing you're applying the redundancy further
downstream; but, as this situation demonstrates, separating it too far
from the checksum verification makes it less useful.)



Well, this error meant that two files on the file system were inaccessible,
and one user was completely unable to use IMAP, so I don't know about
unnoticeable.

How would I use more redundancy?
thanks Betsy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread David Dyer-Bennet

On 11/28/06, Elizabeth Schwartz <[EMAIL PROTECTED]> wrote:

So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for three months, and it's had no hardware errors. But my zfs file system
seems to have died a quiet death. Sun engineering response was to point to
the FMRI, which says to throw out the zfs partition and start over. I'm real
reluctant to do that, since it'll take hours to do a tape restore, and we
don't know what's wrong.  I'm seriously wondering if I should just toss zfs.
Again, this is Solaris 10 06/06, not some beta version. It's an older
server, a 280R with an older SCSI RaidKing


Looks to me like another example of ZFS noticing and reporting an
error that would go quietly by on any other filesystem.  And if you're
concerned with the integrity of the data, why not use some ZFS
redundancy?  (I'm guessing you're applying the redundancy further
downstream; but, as this situation demonstrates, separating it too far
from the checksum verification makes it less useful.)
--
David Dyer-Bennet
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Production ZFS Server Death (06/06)

2006-11-28 Thread Elizabeth Schwartz

So I rebuilt my production mail server as Solaris 10 06/06 with zfs, it ran
for three months, and it's had no hardware errors. But my zfs file system
seems to have died a quiet death. Sun engineering response was to point to
the FMRI, which says to throw out the zfs partition and start over. I'm real
reluctant to do that, since it'll take hours to do a tape restore, and we
don't know what's wrong.  I'm seriously wondering if I should just toss zfs.
Again, this is Solaris 10 06/06, not some beta version. It's an older
server, a 280R with an older SCSI RaidKing

The Sun engineer is escalating, but thanks for any clues or thoughts. I don't
want to erase the evidence until we get some idea of what's wrong.


zpool status -v
 pool: mailstore
state: ONLINE
status: One or more devices has experienced an error resulting in data
   corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
   entire pool from backup.
  see: http://www.sun.com/msg/ZFS-8000-8A
scrub: scrub completed with 1 errors on Tue Nov 28 12:29:18 2006
config:

        NAME        STATE     READ WRITE CKSUM
        mailstore   ONLINE       0     0   250
          c1t4d0    ONLINE       0     0   250

errors: The following persistent errors have been detected:

 DATASET OBJECT   RANGE
 mailstore/imap  4539004  0-8192
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: listing zpools by id?

2006-11-28 Thread Peter Buckingham
Peter Buckingham wrote:
> I've got myself into the situation where I have multiple pools with the
> same name (long story). How can I get the ids for these pools so i can
> address them individually and delete or import them.

Never mind -- as is always the case, I figured this out just after hitting
send: 'zpool import' gives this info.
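
For the archives, a sketch of that; the numeric id below is a placeholder:

  # zpool import                          (lists importable pools and their ids)
  # zpool import 6893219437108742 tank2   (import one by id, renaming it to tank2)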

peter
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] listing zpools by id?

2006-11-28 Thread Peter Buckingham
Hi All,

I've got myself into the situation where I have multiple pools with the
same name (long story). How can I get the ids for these pools so I can
address them individually and delete or import them?

thanks,

peter

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: zfs hot spare not automatically getting used

2006-11-28 Thread Jim Hranicky
So is there a command to make the spare get used, or do I have to
remove it as a spare and re-add it if it doesn't get used automatically?

Is this a bug to be fixed, or will this always be the case when
the disks aren't exactly the same size?
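
In the meantime, assuming the zpool(1M) in your build supports it, a spare can
usually be brought in manually with 'zpool replace' (device names below are
placeholders, where c3t2d0 is the configured spare):

  # zpool replace mypool c1t2d0 c3t2d0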
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-28 Thread Eric Schrock
On Tue, Nov 28, 2006 at 06:06:24PM +, Ceri Davies wrote:
> 
> But you could presumably get that exact effect by not listing a
> filesystem in /etc/vfstab.
> 

Yes, but someone could still manually mount the filesystem using 'mount
-F zfs ...'.  If you set the mountpoint to 'none', then it cannot be
mounted, period.

Note that this predates the 'canmount' property.  Presumably you could
get the same behavior by doing 'mountpoint=legacy' and 'canmount=off'.
I'm not sure where the 'canmount' property is enforced, but I don't
think it's checked in the kernel, so one could presumably avoid this
check by manually issuing a mount(2) syscall.

- Eric

--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS as root FS

2006-11-28 Thread Darren J Moffat

Lori Alt wrote:

OK, got the message.  We'll work on making the ON bits available ASAP.
The soonest we could putback is build 56, which should become available
to the OpenSolaris community in late January.  (Note that I'm not saying
that we WILL putback into build 56 because I won't know that until
I have approval to do so, but that's the soonest we could do it, given
restrictions on the earlier builds.)


Or you could set up a zfs-boot project on opensolaris.org before you integrate;
however, if you are that close to integrating, it might just slow you down.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS as root FS

2006-11-28 Thread Lori Alt

Erast Benson wrote:

On Tue, 2006-11-28 at 09:46 +, Darren J Moffat wrote:
  

Lori Alt wrote:

Latest plan is to release zfs boot with U5.  It definitely isn't going 
to make U4.

We have new prototype bits, but they haven't been putback yet.  There are
a  number of design decisions that have hinged on syncing up our strategy
with other projects, or allowing some other projects to "gel".  Main
dependencies:  Xen, some sparc boot changes, and zones upgrade.  It's
coming together and I hope we can have some new bits putback shortly
after the first of the year.
  
Any chance of you setting up a repository on OpenSolaris.org with the 
prototype bits in source so that people can build them and test them out ?


For some of us the most interesting part of this is the bits in ON not 
the installer bits - particularly those people interested in building 
their own distros of OpenSolaris.



+1

SchiliX, BeleniX, Nexenta, Martux have their own installers and boot
environments anyways, so would be *really* nice if you guys could open
up zfs root ON bits.

  


OK, got the message.  We'll work on making the ON bits available ASAP.
The soonest we could putback is build 56, which should become available
to the OpenSolaris community in late January.  (Note that I'm not saying
that we WILL putback into build 56 because I won't know that until
I have approval to do so, but that's the soonest we could do it, given
restrictions on the earlier builds.)

Lori

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-28 Thread Ceri Davies
On Tue, Nov 28, 2006 at 06:08:23PM +0100, Terence Patrick Donoghue wrote:
> Dick Davies wrote On 11/28/06 17:15,:
> 
> >Is there a difference between setting mountpoint=legacy and 
> >mountpoint=none?

> Is there a difference - Yep,
> 
> 'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and 
> options
> whereas
> 'none' tells ZFS not to mount the ZFS filesystem at all. Then you would 
> need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint 
> poolname/fsname' to get it mounted.
> 
> In a nutshell, setting 'none' means that 'zfs mount -a' won't mount the
> FS because there is no mount point specified anywhere

But you could presumably get that exact effect by not listing a
filesystem in /etc/vfstab.

Ceri
-- 
That must be wonderful!  I don't understand it at all.
  -- Moliere


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 'legacy' vs 'none'

2006-11-28 Thread Terence Patrick Donoghue

Is there a difference - Yep,

'legacy' tells ZFS to refer to the /etc/vfstab file for FS mounts and 
options

whereas
'none' tells ZFS not to mount the ZFS filesystem at all. Then you would 
need to manually mount the ZFS using 'zfs set mountpoint=/mountpoint 
poolname/fsname' to get it mounted.


In a nutshell, setting 'none' means that 'zfs mount -a' won't mount the
FS because there is no mount point specified anywhere.
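
To make that concrete, a rough sketch (dataset and mount point names are
placeholders):

  # zfs set mountpoint=legacy tank/fs
  # mount -F zfs tank/fs /export/fs      (or via an /etc/vfstab entry like
                                          "tank/fs - /export/fs zfs - yes -")

  # zfs set mountpoint=none tank/fs      ('zfs mount -a' now skips it entirely)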



Dick Davies wrote On 11/28/06 17:15,:

Is there a difference between setting mountpoint=legacy and 
mountpoint=none?



Hope that helps,

Sincerely,

--
Terence Donoghue
Senior Engineer
Sun Microsystems
+41 44 908 9000

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS as root FS

2006-11-28 Thread Erast Benson
On Tue, 2006-11-28 at 09:46 +, Darren J Moffat wrote:
> Lori Alt wrote:
> > Latest plan is to release zfs boot with U5.  It definitely isn't going 
> > to make U4.
> > We have new prototype bits, but they haven't been putback yet.  There are
> > a  number of design decisions that have hinged on syncing up our strategy
> > with other projects, or allowing some other projects to "gel".  Main
> > dependencies:  Xen, some sparc boot changes, and zones upgrade.  It's
> > coming together and I hope we can have some new bits putback shortly
> > after the first of the year.
> 
> Any chance of you setting up a repository on OpenSolaris.org with the 
> prototype bits in source so that people can build them and test them out ?
> 
> For some of us the most interesting part of this is the bits in ON not 
> the installer bits - particularly those people interested in building 
> their own distros of OpenSolaris.

+1

SchiliX, BeleniX, Nexenta, Martux have their own installers and boot
environments anyways, so would be *really* nice if you guys could open
up zfs root ON bits.

-- 
Erast

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: 'legacy' vs 'none'

2006-11-28 Thread Dick Davies

Just spotted one - is this intentional?

You can't delegate a dataset to a zone if mountpoint=legacy.
Changing it to 'none' works fine.


  vera / # zfs create tank/delegated
  vera / # zfs get mountpoint tank/delegated
  NAMEPROPERTYVALUE   SOURCE
  tank/delegated  mountpoint  legacy  inherited from tank
  vera / # zfs create tank/delegated/ganesh
  vera / # zfs get mountpoint tank/delegated/ganesh
  NAME   PROPERTYVALUE  SOURCE
  tank/delegated/ganesh  mountpoint  legacy inherited from tank
  vera / # zonecfg -z ganesh
  zonecfg:ganesh> add dataset
  zonecfg:ganesh:dataset> set name=tank/delegated/ganesh
  zonecfg:ganesh:dataset> end
  zonecfg:ganesh> commit
  zonecfg:ganesh> exit
  vera / # zoneadm -z ganesh boot
  could not verify zfs dataset tank/delegated/ganesh: mountpoint
cannot be inherited
  zoneadm: zone ganesh failed to verify
  vera / # zfs set mountpoint=none tank/delegated/ganesh
  vera / # zoneadm -z ganesh boot
  vera / #


On 28/11/06, Dick Davies <[EMAIL PROTECTED]> wrote:

Is there a difference between setting mountpoint=legacy and mountpoint=none?


--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 'legacy' vs 'none'

2006-11-28 Thread Dick Davies

Is there a difference between setting mountpoint=legacy and mountpoint=none?

--
Rasputin :: Jack of All Trades - Master of Nuns
http://number9.hellooperator.net/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re: ZFS as root FS

2006-11-28 Thread Darren J Moffat

Lori Alt wrote:
Latest plan is to release zfs boot with U5.  It definitely isn't going 
to make U4.

We have new prototype bits, but they haven't been putback yet.  There are
a  number of design decisions that have hinged on syncing up our strategy
with other projects, or allowing some other projects to "gel".  Main
dependencies:  Xen, some sparc boot changes, and zones upgrade.  It's
coming together and I hope we can have some new bits putback shortly
after the first of the year.


Any chance of you setting up a repository on OpenSolaris.org with the 
prototype bits in source so that people can build them and test them out ?


For some of us the most interesting part of this is the bits in ON not 
the installer bits - particularly those people interested in building 
their own distros of OpenSolaris.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Re: Re: Size of raidz

2006-11-28 Thread Marlanne DeLaSource
I can understand that ls is giving you the "logical" size of the file and du 
the "physical" size of the file (its on-disk footprint).

But then how do you explain that, when using a mirrored pool, ls and du return 
exactly the same size? By that reasoning, du should return twice the logical 
size reported by ls.

The main problem (in my opinion) is the lack of consistency.
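
A quick way to put the two numbers side by side (the file name is a
placeholder):

  $ ls -l /tank/raidzfs/bigfile      (logical length)
  $ du -h /tank/raidzfs/bigfile      (space actually charged to the file)

My reading, which may be incomplete, is that raidz charges parity to each
file's blocks while mirror copies are accounted for below the file layer,
which would produce exactly this asymmetry.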
 
 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss