Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-29 Thread Joerg Schilling
Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:

> On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> > Hi,
> > 
> > Does ZFS flag blocks as bad so it knows to avoid using them in the future?
>
> No, it doesn't. This would be a really nice feature to have, but
> currently, when ZFS tries to write to a bad sector, it simply retries a few
> times and gives up. With the COW model it shouldn't be very hard to
> use another block and mark this one as bad, but that's not yet
> implemented.

Bad block handling was needed before 1985, when the hardware did not support
mapping out bad blocks itself. Even at that time, it was done in the disk
driver, not in the filesystem (except for FAT).

Jörg

-- 
 EMail:[EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin
   [EMAIL PROTECTED](uni)  
   [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/
 URL:  http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily


Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-28 Thread Richard Elling
RL wrote:
> Hi,
> 
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?
> 
> During testing I had huge numbers of unrecoverable checksum errors, which I 
> resolved by disabling write caching on the disks.

Were the errors logged during writes, or during reads?
Can you share the error messages (ASC/ASCQ)?
Can you tell us what the hardware was so that we can avoid buying it?
  -- richard
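
(For reference, one way to collect those details on a FreeBSD box, assuming
the disks are attached through CAM and the errors reached the kernel log;
'da0' below is only a placeholder for the actual device:

   dmesg | grep -i asc          # recent sense data (ASC/ASCQ) from the disks
   grep da0 /var/log/messages   # older messages already rotated to syslog
)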

> After doing this, and confirming the errors had stopped occurring, I removed 
> the test files. A few seconds after removing the test files, I noticed the 
> used space dropped from 16GB to 11GB according to 'df', but it did not appear 
> to ever drop below this value.
> 
> Is this just normal file system overhead (This is a raidz with 8x 500GB 
> drives), or has ZFS not freed some of the space allocated to bad files?
> 
> If ZFS is holding on to this space because it thinks it might be bad, is 
> there a way to tell it that it is okay to use it?
> 
> I am using ZFS on FreeBSD, which from what I've read needed only minimal 
> changes to the source to work on that platform. Unfortunately the hardware 
> I boot from is not supported by Solaris, which is where the majority of 
> ZFS experience is at this point.
> 
> Thanks!


Re: [zfs-discuss] ZFS Bad Blocks Handling

2007-08-28 Thread Pawel Jakub Dawidek
On Mon, Aug 27, 2007 at 10:00:10PM -0700, RL wrote:
> Hi,
> 
> Does ZFS flag blocks as bad so it knows to avoid using them in the future?

No, it doesn't. This would be a really nice feature to have, but
currently, when ZFS tries to write to a bad sector, it simply retries a few
times and gives up. With the COW model it shouldn't be very hard to
use another block and mark this one as bad, but that's not yet
implemented.
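
A rough sketch of how that retry-and-remap could look in a COW write path
(all names here are invented for illustration; this is nothing like the real
ZIO pipeline):

   /*
    * Hypothetical: retry a failed write on a freshly allocated block
    * and remember the bad one so the allocator skips it from now on.
    */
   static int
   cow_write_block(pool_t *pool, const void *data, size_t len, blkptr_t *bp)
   {
           for (int attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
                   blkptr_t nbp;

                   if (alloc_block(pool, len, &nbp) != 0)
                           return (ENOSPC);        /* out of space */

                   if (dev_write(pool, &nbp, data, len) == 0) {
                           *bp = nbp;              /* commit new location */
                           return (0);
                   }

                   /*
                    * The write failed.  Under COW nothing references nbp
                    * yet, so just move it to a persistent bad-block list;
                    * the next loop iteration allocates elsewhere.
                    */
                   mark_block_bad(pool, &nbp);
           }
           return (EIO);
   }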

> During testing I had huge numbers of unrecoverable checksum errors, which I 
> resolved by disabling write caching on the disks.
> 
> After doing this, and confirming the errors had stopped occurring, I removed 
> the test files. A few seconds after removing the test files, I noticed the 
> used space dropped from 16GB to 11GB according to 'df', but it did not appear 
> to ever drop below this value.
> 
> Is this just normal file system overhead (This is a raidz with 8x 500GB 
> drives), or has ZFS not freed some of the space allocated to bad files?

Can you retry your test with the write cache disabled, this time starting
from scratch by recreating the pool?
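
(Something like the following, with 'tank' and da0 through da7 standing in
for the actual pool name and devices:

   zpool destroy tank
   zpool create tank raidz da0 da1 da2 da3 da4 da5 da6 da7
)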

-- 
Pawel Jakub Dawidek   http://www.wheel.pl
[EMAIL PROTECTED]   http://www.FreeBSD.org
FreeBSD committer Am I Evil? Yes, I Am!




[zfs-discuss] ZFS Bad Blocks Handling

2007-08-27 Thread RL
Hi,

Does ZFS flag blocks as bad so it knows to avoid using them in the future?

During testing I had huge numbers of unrecoverable checksum errors, which I 
resolved by disabling write caching on the disks.
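
(For reference, on FreeBSD the write cache can be disabled persistently.
Assuming the drives sit behind the ata(4) driver, a loader tunable does it:

   # /boot/loader.conf
   hw.ata.wc="0"

Disks attached through CAM/da(4) would instead need the WCE bit cleared in
mode page 8 with camcontrol.)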

After doing this, and confirming the errors had stopped occurring, I removed the 
test files. A few seconds after removing the test files, I noticed the used 
space dropped from 16GB to 11GB according to 'df', but it did not appear to 
ever drop below this value.

Is this just normal file system overhead (This is a raidz with 8x 500GB 
drives), or has ZFS not freed some of the space allocated to bad files?
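
(One way to narrow this down, with 'tank' standing in for the real pool name:
'zfs list' reports usable space with raidz parity already deducted, while
'zpool list' reports raw capacity including parity, so the two disagree by
design:

   zfs list -r tank     # per-dataset used/available, parity excluded
   zpool list tank      # raw size and allocation, parity included

If the leftover 11GB still shows up as used in 'zfs list' after the deletes,
it is real allocation rather than an accounting artifact.)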

If ZFS is holding on to this space because it thinks it might be bad, is there 
a way to tell it that it is okay to use it?

I am using ZFS on FreeBSD, which from what I've read needed only minimal 
changes to the source to work on that platform. Unfortunately the hardware I 
boot from is not supported by Solaris, which is where the majority of ZFS 
experience is at this point.

Thanks!
 
 