On Mon, 30 May 2005 08:17:00 +0200, Matthias Barremaecker said:

> I did a bad block check and I have 10 bad blocks of 4096 bytes on 1300Gig 
> and ... that is the reason reiserfs will not work anymore.

> I guess this sux. I'd rather have the data on the bad blocks just be 
> corrupted but the rest still accessible.

It all depends on which 10 blocks go bad.  If it's a block that's allocated to
a file, you lose the 4K or whatever that's in that block.

If it's a block that an inode lives in, you're probably going to have the
entire file evaporate.

If it's a block that contains something even more important, you're going to
have large sections of the file system evaporate.
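To make the blast radius concrete, here's a toy model - purely illustrative, not how reiserfs actually lays anything out - where a "disk" is a handful of 4 KiB blocks: data blocks hold file contents, inode blocks describe one file each, and a single superblock-like block points at every inode.

```python
# Toy model (hypothetical layout, for illustration only): which block
# goes bad determines how much you lose.
BLOCK_SIZE = 4096

disk = {
    0: {"type": "superblock", "inodes": [1, 2]},          # maps every file
    1: {"type": "inode", "name": "a.txt", "data": [3]},   # metadata for one file
    2: {"type": "inode", "name": "b.txt", "data": [4, 5]},
    3: {"type": "data"},
    4: {"type": "data"},
    5: {"type": "data"},
}

def damage(block_no):
    """Rough description of what losing this one block costs."""
    kind = disk[block_no]["type"]
    if kind == "data":
        # One bad data block: you lose just the 4K stored in it.
        return f"lose {BLOCK_SIZE} bytes of one file"
    if kind == "inode":
        # Bad inode block: the whole file it described evaporates.
        return f"lose the whole file {disk[block_no]['name']!r}"
    # Bad superblock-level metadata: the map of everything is gone.
    return "lose the map of every file: large sections of the fs evaporate"

print(damage(3))  # → lose 4096 bytes of one file
print(damage(2))  # → lose the whole file 'b.txt'
print(damage(0))  # → lose the map of every file: large sections of the fs evaporate
```

Same 4096 bytes of bad media in each case - wildly different consequences depending on what happened to live there.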

It's a tradeoff issue - how many times you replicate metadata on the
filesystem, versus how well the filesystem deals with errors.  The problem is
that if you just say "let's keep 2 copies of everything, just in case", it
takes a lot more disk space to *store* 2 copies of the metadata.  Your disk
performance also falls through the floor - most journalled filesystems have
enough trouble making sure that *one* copy of things like the free list is on
disk and consistent with the journal.  Making 2 copies will probably triple
your disk I/O and complicate matters a *lot* for fsck (if you crash and the
two copies aren't consistent, which one do you believe?)
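Filesystems that do keep duplicate metadata usually answer the "which one do you believe?" question by stamping each copy with a checksum and a generation number: a copy with a bad checksum is a torn write and gets discarded, and of the survivors you trust the highest generation. A minimal sketch of that idea (the format here - CRC32 plus a 64-bit generation counter - is my own invention for illustration, not any real filesystem's on-disk layout):

```python
import struct
import zlib

def pack_copy(generation: int, payload: bytes) -> bytes:
    """Serialize one metadata copy: CRC32 header over (generation, payload)."""
    body = struct.pack(">Q", generation) + payload
    return struct.pack(">I", zlib.crc32(body)) + body

def unpack_copy(raw: bytes):
    """Return (generation, payload), or None if the checksum says torn write."""
    (crc,) = struct.unpack(">I", raw[:4])
    body = raw[4:]
    if zlib.crc32(body) != crc:
        return None  # this copy was half-written or corrupted: ignore it
    (gen,) = struct.unpack(">Q", body[:8])
    return gen, body[8:]

def pick(copy_a: bytes, copy_b: bytes) -> bytes:
    """Of two on-disk copies, believe the valid one with the newest generation."""
    candidates = [c for c in (unpack_copy(copy_a), unpack_copy(copy_b)) if c]
    if not candidates:
        raise IOError("both metadata copies corrupt")
    return max(candidates, key=lambda c: c[0])[1]

# Crash between the two writes: copy A reached generation 2, copy B is
# still at generation 1 - the reader believes A.
copy_a = pack_copy(2, b"new free list")
copy_b = pack_copy(1, b"old free list")
print(pick(copy_a, copy_b))  # → b'new free list'

# If A's write was torn (checksum fails), fall back to the older B.
torn = bytearray(copy_a)
torn[0] ^= 0xFF
print(pick(bytes(torn), copy_b))  # → b'old free list'
```

Of course, this only settles *which* copy to believe - it does nothing about the doubled space and the extra I/O per metadata update, which is exactly the cost being weighed above.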

That's why almost all filesystem designers just punt and assume that the media
actually works, and suggest that if your media might not be 100% reliable, you
use RAID or similar solutions....
