[zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Frank Cusack

I suspect this will be the #1 complaint about zfs as it becomes more
popular.  It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!

#2: how do I grow a raid-z?

The answers to these should probably be in a FAQ somewhere.  I'd argue
that the best practices guide is a good spot too, but the folks who would
actually find and read that already seem likely to understand that zfs
detects errors other filesystems wouldn't.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Elizabeth Schwartz

On 11/28/06, Frank Cusack [EMAIL PROTECTED] wrote:


I suspect this will be the #1 complaint about zfs as it becomes more
popular.  It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!



That's not the problem, so much as zfs says my file system is corrupt; how
do I get past this? With ufs, f'rinstance, I'd run an fsck, kiss the bad
file(s) goodbye, and be on my way. With zfs, there's this ominous message
saying to destroy the filesystem and restore from tape. That's not so good
for one corrupt file. And even better, it turns out erasing the file might
just be enough, although in my case I now have a new bad object. Sun pointed
me to docs.sun.com (thanks, that helps!) but I haven't found anything in the
docs on this so far. I am assuming that my bad object 45654c is an inode
number for a special file of some sort, but what? And what does the range
mean? I'd love to read the docs on this.
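
(For anyone searching the archives later, a rough sketch of what the error
report and cleanup usually look like; the pool name "tank" below is just a
placeholder and exact output varies by release:

  # zpool status -v tank
  ...
  errors: Permanent errors have been detected in the following files:
          tank/home:<0x45654c>

  # zpool scrub tank     # re-check the whole pool after removing/restoring the file
  # zpool clear tank     # reset the error counters once a scrub comes back clean

For a plain file, the object number is the same value stat(2) reports as the
inode number, so something like "find /tank/home -inum 4547916" (0x45654c in
decimal) should locate it. As far as I can tell, the range in the message is
the portion of that object that failed its checksum.)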
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Toby Thain


On 28-Nov-06, at 7:02 PM, Elizabeth Schwartz wrote:


On 11/28/06, Frank Cusack [EMAIL PROTECTED] wrote:
I suspect this will be the #1 complaint about zfs as it becomes more
popular.  It worked before with ufs and hw raid, now with zfs it says
my data is corrupt!  zfs sux0rs!

That's not the problem, so much as zfs says my file system is corrupt; how
do I get past this?


Yes, that's your problem right now. But Frank describes a likely general
syndrome. :-)


With ufs, f'rinstance, I'd run an fsck, kiss the bad file(s) goodbye, and be
on my way.


No, you still have the hardware problem.

With zfs, there's this ominous message saying to destroy the filesystem and
restore from tape. That's not so good for one corrupt file.


As others have pointed out, you wouldn't have reached this point with
redundancy - the file would have remained intact despite the hardware
failure. It is strictly correct that to restore the data you'd need to refer
to a backup, in this case.
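
(For the archives, a minimal sketch of the kind of redundant layout meant
here; the device names are made up:

  # zpool create tank mirror c0t0d0 c0t1d0            # two-way mirror
  # zpool create tank raidz c0t0d0 c0t1d0 c0t2d0      # or single-parity raid-z

With either layout, when a read fails its checksum ZFS fetches the other
copy, or reconstructs it from parity, hands the good data to the
application, and rewrites the damaged copy, so a flaky disk shows up as
climbing checksum counters rather than as lost files.)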


And even better, it turns out erasing the file might just be enough,
although in my case I now have a new bad object. Sun pointed me to
docs.sun.com (thanks, that helps!) but I haven't found anything in the docs
on this so far. I am assuming that my bad object 45654c is an inode number
for a special file of some sort, but what? And what does the range mean? I'd
love to read the docs on this.


Problems will continue until your hardware is fixed. (Or you conceal them
with a redundant ZFS configuration, but that would be a bad idea.)
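
(The per-device counters in plain zpool status usually point at the culprit;
a hypothetical example:

  # zpool status tank
    ...
          NAME        STATE     READ WRITE CKSUM
          tank        ONLINE       0     0     4
            c0t0d0    ONLINE       0     0     4

A non-zero CKSUM count against one disk generally means that disk, its
cabling, or its controller is handing back bad data; run a zpool scrub and
see whether the count keeps climbing.)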


--Toby

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs corrupted my data!

2006-11-28 Thread Nicolas Williams
On Tue, Nov 28, 2006 at 08:03:33PM -0500, Toby Thain wrote:
 As others have pointed out, you wouldn't have reached this point with
 redundancy - the file would have remained intact despite the hardware
 failure. It is strictly correct that to restore the data you'd need to
 refer to a backup, in this case.

Well, you could get really unlucky no matter how much redundancy you
have, but now we're splitting hairs :)  (The more redundancy, the worse
your luck has to be to be truly out of luck.)
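
(To put rough numbers on it: if a given block independently has, say, a
one-in-a-million chance of being unreadable on one disk, a two-way mirror
needs both copies bad at once (about one in 10^12) and a three-way mirror
all three (about one in 10^18) before you're truly out of luck. The
independence assumption is the catch, of course - a bad controller or
firmware bug can take out every copy at once.)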
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss