On Thursday 29 March 2007 03:09:33 Remy Blank wrote:
> Boyd Stephen Smith Jr. wrote:
> >> <troll>
> >> ZFS?
> >> </troll>
> >
> > You say troll, I say possibility; I'll certainly consider it.
>
> Actually, I would be very interested in using ZFS for my data.
>
> The "troll" was more about the fact that the ZFS license was explicitly
> designed to be GPL-2 incompatible, hence preventing it from being
> included into Linux (it would require a clean-room rewrite from the 
specs).
>
> > However, the demos that I've seen about ZFS stress how easy it is to
> > administer, and all the LVM-style features it has.  Personally, I'm
> > /very/ comfortable with LVM and am of the opinion that such features
> > don't actually belong at the "filesystem" layer.
>
> I haven't made the step to LVM and am still using a plain old RAID-1
> mirror. I'm not that comfortable adding one more layer to the data path,
> and one more difficulty in case of hard disk failure.
>
> > I need a good general-purpose filesystem; what matters most to me is:
> > 1) Online growing of the filesystem.  With LVM I use this a lot; I
> > won't consider a filesystem I can't grow while it is in active use.
> > 2) Journaling or other techniques (FFS from the *BSD world does
> > something they don't like to call journaling) that reduce the
> > frequency of full fscks.
> > 3) All-round performance; I don't mind it using extra CPU time or
> > memory to make filesystem performance better, I have both to spare.
> > 4) Storage savings (like tail packing or transparent compression).
>
> I completely agree with 1) and 2); 3) and 4) are nice-to-haves.  What
> I like in ZFS is the data integrity check, i.e. every block gets a
> checksum, and it can auto-repair in a RAID-Z configuration, something
> that RAID-1 cannot do.

RAID-3(?)/5/6 can self-repair like this, but the redundancy is parity 
computed per stripe rather than a checksum per block.  Since I use HW 
RAID-6 across 10 drives, I'm not that concerned with having this done at 
the filesystem level.  Even without the extra disks, you can use SW RAID 
across partitions on a single disk (or a small number of disks).  
[(Ab)uses of SW RAID like this are not something I'd always recommend, 
but they can provide the integrity checks you desire.]
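
To make the stripe-vs-block distinction concrete, here's a toy Python 
sketch of RAID-5-style XOR parity (my own illustration, not any real RAID 
implementation).  Note that parity alone can only *rebuild* a block once 
something else has identified it as bad; per-block checksums are what 
provide that detection:

# Toy sketch of RAID-5-style stripe parity (illustration only).
# A stripe's parity block is the XOR of its data blocks, so any *one*
# known-bad block can be rebuilt from the survivors plus parity.

def xor_blocks(blocks):
    """XOR a list of equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def make_parity(data_blocks):
    return xor_blocks(data_blocks)

def rebuild(stripe, parity, lost_index):
    """Rebuild the block at lost_index from the survivors plus parity.
    We must already *know* which block is bad; plain parity cannot
    tell us that on its own."""
    survivors = [b for i, b in enumerate(stripe) if i != lost_index]
    return xor_blocks(survivors + [parity])

stripe = [b"AAAA", b"BBBB", b"CCCC"]
parity = make_parity(stripe)
assert rebuild(stripe, parity, 1) == b"BBBB"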

Also, EVMS provides a BBR (bad block relocation) target that can work 
around isolated disk failures.
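
Conceptually (a speculative sketch of the general idea, not EVMS's actual 
implementation), a BBR target keeps a remap table that redirects I/O for 
sectors that fail onto spares reserved at the end of the device:

# Conceptual sketch of a bad-block relocation (BBR) layer -- an
# illustration of the idea only, not EVMS's actual code.

class FlakyDevice:
    """Toy block device; sectors listed in `bad` fail on write."""
    def __init__(self, bad=()):
        self.sectors, self.bad = {}, set(bad)
    def write(self, sector, data):
        if sector in self.bad:
            raise IOError("write error at sector %d" % sector)
        self.sectors[sector] = data
    def read(self, sector):
        return self.sectors[sector]

class BBR:
    def __init__(self, device, spare_start):
        self.device = device
        self.remap = {}            # bad sector -> spare sector
        self.next_spare = spare_start
    def write(self, sector, data):
        try:
            self.device.write(self.remap.get(sector, sector), data)
        except IOError:
            # Relocate the failing sector into the spare area and retry.
            self.remap[sector] = self.next_spare
            self.next_spare += 1
            self.device.write(self.remap[sector], data)
    def read(self, sector):
        return self.device.read(self.remap.get(sector, sector))

bbr = BBR(FlakyDevice(bad={7}), spare_start=1000)
bbr.write(7, b"data")            # transparently relocated to sector 1000
assert bbr.read(7) == b"data"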

> 5) Reliable data integrity checks and self-healing capability.

Overall, I see this as something I'd rather see done at the block device 
level, instead of the filesystem level.  Surely, a filesystem should not 
shy away from sanity checks that can be done with little overhead besides 
CPU time, but adding a checksum to each block might be a little overkill.
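
For a rough sense of the CPU cost involved, here's a toy Python sketch (my 
own illustration, not how ZFS or any block layer actually does it) that 
checksums data in 4 KiB blocks:

# Toy measurement of per-block checksum cost (illustration only).

import hashlib, os, time

BLOCK = 4096
data = os.urandom(16 * 1024 * 1024)   # 16 MiB of test data

start = time.time()
checksums = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
elapsed = time.time() - start

print("%d blocks checksummed in %.2fs" % (len(checksums), elapsed))
# Verifying a block later is just recomputing and comparing:
assert hashlib.sha256(data[:BLOCK]).digest() == checksums[0]

The hashing itself is fairly cheap on spare CPU; most of the complexity in 
a real filesystem would be storing and verifying those checksums on every 
read and write path.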

-- 
Boyd Stephen Smith Jr.                     ,= ,-_-. =. 
[EMAIL PROTECTED]                      ((_/)o o(\_))
ICQ: 514984 YM/AIM: DaTwinkDaddy           `-'(. .)`-' 
http://iguanasuicide.org/                      \_/     
