> Why would btrfs be inferior to ZFS on multiple disks?  I can't see how
> its architecture would do any worse, and the planned features are
> superior to ZFS (which isn't to say that ZFS can't improve either).

ZFS uses ARC as its cache replacement algorithm, which is superior to
the LRU-style page cache that btrfs relies on. ZFS also has L2ARC and
SLOG devices. L2ARC lets data that would have stayed in ARC, had ARC
been bigger, be kept in a second-level cache on a fast device instead
of being evicted outright. A SLOG is a dedicated device for the ZFS
intent log, so synchronous writes can be acknowledged as soon as they
are logged there, before they are committed to the main disks. This
provides the benefits of write sequentialization and protection
against data inconsistency in the event of a kernel panic.
Furthermore, data is striped across vdevs, so the more vdevs you have,
the higher your performance goes.
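
As a rough illustration of how those devices are attached, something
like the following would do it; the pool name "tank" and the device
paths are hypothetical, assuming spare SSDs at /dev/sdb through
/dev/sdd:

    # second-level read cache (L2ARC) on a single SSD
    zpool add tank cache /dev/sdb

    # separate intent log (SLOG), mirrored for safety
    zpool add tank log mirror /dev/sdc /dev/sdd

    # confirm the cache and log devices show up
    zpool status tank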

These features let ZFS reach impressive performance levels, and as far
as I have seen, the btrfs developers have shown no intention of
following suit.

> Beyond the licensing issues ZFS also does not support reshaping of
> raid-z, which is the only n+1 redundancy solution it offers.  Btrfs of
> course does not yet support n+1 at all aside from some experimental
> patches floating around, but it plans to support reshaping at some
> point in time.  Of course, there is no reason you couldn't implement
> reshaping for ZFS, it just hasn't happened yet.  Right now the
> competition for me is with ext4+lvm+mdraid.  While I really would like
> to have COW soon, I doubt I'll implement anything that doesn't support
> reshaping as mdraid+lvm does.

raidz comes in three varieties: single parity (raidz1), double parity
(raidz2) and triple parity (raidz3). As for reshaping, ZFS is also a
logical volume manager; you can set and resize quotas and reservations
on ZFS datasets as you please.
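
To give a concrete sketch (pool, dataset and device names here are
made up, not from any particular system):

    # n+1, n+2 or n+3 redundancy is chosen when the vdev is created
    zpool create tank raidz1 sda sdb sdc
    # (use "raidz2" or "raidz3" instead for double or triple parity)

    # dataset limits are just properties and can be resized at will
    zfs set quota=500G tank/home
    zfs set quota=1T   tank/home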

As for competing with ext4+lvm+mdraid, I recently migrated a server
from that exact configuration. It had 6 disks in RAID 6. I had a VM on
it running Gentoo Hardened, in which I ran a benchmark using dd to
write zeroes to the disk. Nothing I could do with ext4+lvm+mdraid
could get performance above 20MB/sec. After switching to ZFS,
performance went to 205MB/sec; the worst performance I observed was
92MB/sec. This used 6 Samsung HD204UI hard drives.
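
For reference, the benchmark was along these lines (block size, count
and target path are illustrative, not the exact invocation I used):

    # write 4 GiB of zeroes and flush to disk before reporting a rate
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fdatasync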

> I do realize that you can add multiple raid-zs to a zpool, but that
> isn't quite enough.  If I have 4x1TB disks I'd like to be able to add
> a single 1TB disk and end up with 5TB of space.  I'd rather not have
> to find 3 more 1TB hard drives to hold the data on while I redo my
> raid and then try to somehow sell them again.

You would probably be better served by making your additional drive
into a hot spare, but if you insist on using it for capacity, you can
add it as a separate vdev, which will provide more space. To be
honest, anyone who wants to upgrade such a configuration is probably
better off getting 4x2TB disks, doing a scrub, and then replacing the
disks in the pool one at a time, resilvering the vdev after each
replacement. After you have finished this process, you will have
doubled the amount of space in the pool.
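
A rough sketch of that upgrade path, with hypothetical pool and device
names (old disks sda-sdd, replacements appearing as sde and so on):

    # optional: let the pool grow automatically once all disks are larger
    zpool set autoexpand=on tank

    # verify the data first
    zpool scrub tank

    # replace one disk, wait for the resilver to finish, then repeat
    zpool replace tank sda sde
    zpool status tank        # shows resilver progress

    # alternatively, attach the extra 1TB drive as a hot spare
    zpool add tank spare sde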
