OK, understood now, I think: you agree with temporarily losing a bit of unreclaimed free space on disk until time permits cleaning things up properly; AFAIU that's softupdates (+ journalling? not really clear).

That's it. And that's how the original softupdates paper describes it.
You may run quite safely without fsck; just don't abuse that feature for too long!

No journalling. I am currently a FreeBSD user; FreeBSD 9 added softupdates journalling, but REALLY it doesn't change much except for extra writes to disk.

I found that you actually have to run a full fsck now and then even with journalling. In theory it shouldn't find any inconsistencies; in practice it always finds minor ones.

To wrap up that topic, my practices are:

- do not make huge filesystems or large "RAID" arrays: two disks, one mirror built from them, one filesystem.
- such a filesystem takes about 30 minutes or less to fsck, and 10 of them take about the same wall-clock time, since the checks can run in parallel (see the sketch below).

In case of a crash I run fsck manually, at a time when the pause isn't a problem.
At reboot I check only the root filesystem and (if it's separate) /usr, so I can run all the other checks remotely without rebooting.
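
For illustration only, here is a minimal Python sketch of that parallel-fsck habit. The device names are hypothetical (not my actual setup); the script just shells out to fsck(8) once per filesystem, so the wall-clock time is roughly that of the slowest single check.

#!/usr/bin/env python3
"""Minimal sketch: check several small UFS filesystems in parallel.

Assumptions: hypothetical device names; fsck(8) with -y and -t ufs,
as on FreeBSD.
"""
import subprocess

# One filesystem per two-disk mirror, as described above (names made up).
DEVICES = ["/dev/mirror/gm0a", "/dev/mirror/gm1a", "/dev/mirror/gm2a"]

def main():
    # Start one fsck per filesystem; they run concurrently, so total
    # wall-clock time is roughly that of the slowest single check.
    procs = [subprocess.Popen(["fsck", "-y", "-t", "ufs", dev])
             for dev in DEVICES]
    for dev, proc in zip(DEVICES, procs):
        print(f"{dev}: fsck exited with status {proc.wait()}")

if __name__ == "__main__":
    main()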

Assuming hardware never fails is certainly wrong.

And there's no practical point in assuming it *always* fails, is there?

The fact that it sometimes fails is enough reason to assume it can.

Could any FS be trustworthy then??? IMHO, that's nonsense.
No, it isn't nonsense. Sorry if I wasn't clear enough in explaining it.

Well, if the thing you are trying to defend against is plain hardware failure (memory bits flipping, the CPU going mad, whatever), I just doubt that any kind of software layer could definitively solve it (checksums of checksums of? i/o

You are completely right.

What I am pointing out is that a flat data layout makes the chance of recovery far higher and the chance of severe destruction far lower.

Any tree-like structure carries a huge risk of losing much more data than was corrupted in the first place.


That rule has already proven true for the UFS filesystem, as well as for e.g. fixed-size database tables like the .DBF format, which I still use instead of "modern" ones.

Using DBF files as an example: the indexes live in separate files, but they are not crucial and can be rebuilt.

So if some tree-like structure (or hash, or whatever) were invented to speed up filesystem access - great. But only as an extra index, with the crucial data (INODES!!) written as a flat table of fixed-size records at a known place.
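
To make the flat-layout argument concrete, here is a small Python sketch of the idea. It is NOT the real .DBF on-disk format; the header and record sizes are made-up constants. The point: with fixed-size records, record n always sits at a known offset, and a lost index can be rebuilt by one sequential scan.

import struct

HEADER_SIZE = 32      # hypothetical fixed header size
RECORD_SIZE = 64      # hypothetical fixed record length
RECORD_FMT = "<I60s"  # 4-byte record id + 60-byte fixed-width payload

def record_offset(n: int) -> int:
    # With fixed-size records, no tree or index needs to be intact
    # to locate record n; plain arithmetic is enough.
    return HEADER_SIZE + n * RECORD_SIZE

def read_record(path: str, n: int):
    with open(path, "rb") as f:
        f.seek(record_offset(n))
        rec_id, payload = struct.unpack(RECORD_FMT, f.read(RECORD_SIZE))
    return rec_id, payload.rstrip(b"\x00")

def rebuild_index(path: str, num_records: int) -> dict:
    # The index (id -> record number) is just a convenience; if the
    # index file is corrupted, one sequential scan rebuilds it.
    return {read_record(path, n)[0]: n for n in range(num_records)}

A corrupted record damages only itself; everything before and after it is still at a predictable offset, which is exactly the property a tree-like layout gives up.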

I don't say HAMMER is bad - unlike ZFS, which is 100% PURE SHIT(R) - but I don't agree that it is (or will be) a total replacement for the older UFS.

HAMMER pseudo-filesystems, snapshots, and online replication are useful features, but not everyone actually needs them, and they don't come free of extra complexity. No matter how smart Matthew Dillon is, it will still be far more complex, and more risky.

That's why it's a pity that swapcache doesn't support efficient caching of UFS: there is no vfs.ufs.double_buffer feature equivalent to what HAMMER has.



-----------------------------------------------------------------
Disclaimer ;): None of my practices, my ideas about safe filesystems, mirroring, or anything else are a replacement for a proper backup strategy!!! Please do not interpret anything I write as an argument against backups.
