On Sat, Sep 16, 2017 at 9:43 AM, Kai Krakow <hurikha...@gmail.com> wrote:
>
> Actually, I'm running across 3x 1TB here on my desktop, with mraid1 and
> draid 0. Combined with bcache it gives confident performance.
>

Not entirely sure I'd use the word "confident" to describe a
filesystem where the loss of one disk guarantees that:
1.  You will lose data (no data redundancy).
2.  But the filesystem will be able to tell you exactly what data you
lost (as metadata will be fine).

>
> I was very happy a long time with XFS but switched to btrfs when it
> became usable due to compression and stuff. But performance of
> compression seems to get worse lately, IO performance drops due to
> hogged CPUs even if my system really isn't that incapable.
>

Btrfs performance is pretty bad in general right now.  The problem is
that they simply haven't gotten around to optimizing it fully, mainly
because they're more focused on getting rid of the data corruption
bugs (which is of course the right priority).  For example, in raid1
mode btrfs picks which mirror to read from based on whether the
reading process's PID is even or odd, without any regard to disk
utilization.
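
To illustrate (a little python sketch of the heuristic as I
understand it, not the actual kernel code):

    def pick_raid1_mirror(pid, mirrors):
        # Every read issued by a given process lands on the same
        # mirror, no matter how busy that disk is.
        return mirrors[pid % 2]

So a single-threaded workload ends up hammering one disk while its
mirror sits idle.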

When I moved to zfs I noticed a huge performance boost.

Fundamentally I don't see why btrfs can't perform just as well as the
others.  It just isn't there yet.

> What's still cool is that I don't need to manage volumes since the
> volume manager is built into btrfs. XFS on LVM was not that flexible.
> If btrfs wouldn't have this feature, I probably would have switched
> back to XFS already.

My main concern with xfs/ext4 is that neither provides on-disk data
checksums or protection against the raid write hole.

I just switched motherboards a few weeks ago, and either a cable or a
SATA port must have been bad, because one of my drives was getting a
TON of checksum errors on zfs.  I moved it to an LSI card and
scrubbed, and while the scrub took forever and the system degraded
the array more than once due to the high error rate, eventually it
patched up all the errors and now the array is working without issue.
I didn't suffer more than a bit of inconvenience, but even with mdadm
raid1 I'd have had a HUGE headache trying to recover from that (doing
who knows how much troubleshooting before realizing I had to do a
slow full restore from backup with the system down).

I just don't see how a modern filesystem can get away without having
full checksum support.  It is a bit odd that it has taken Ceph so
long to introduce it, and I'm still not sure whether it is truly
end-to-end, or whether there is any point in the data's life when it
isn't protected by a checksum.  If I were designing something like
Ceph I'd checksum the data at the client the moment it enters
storage, store the checksum and the data independently, and then
retrieve both and verify at the client as the data leaves storage.
Then you're protected against corruption at any layer below that.
You could of course add additional protections to catch errors
sooner, before the client even sees them.  I think the issue is that
Ceph was originally designed for object storage and they just figured
the application would be responsible for data integrity.
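
In python-ish terms, roughly this (a minimal sketch of the end-to-end
idea; the store object with put/get is a made-up interface, not
Ceph's actual API):

    import hashlib

    def client_write(store, key, data):
        # Checksum at the client the moment the data enters storage...
        digest = hashlib.sha256(data).hexdigest()
        # ...then store the data and the checksum independently.
        store.put(key, data)
        store.put(key + ".sha256", digest.encode())

    def client_read(store, key):
        data = store.get(key)
        digest = store.get(key + ".sha256").decode()
        # Verify at the client as the data leaves storage; corruption
        # anywhere in the layers below gets caught right here.
        if hashlib.sha256(data).hexdigest() != digest:
            raise IOError("checksum mismatch for %r" % key)
        return data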

The other benefit of checksums is that if they're done right, scrubs
can go a lot faster, because you don't have to read all the redundant
copies of a block synchronously.  You can just start an idle-priority
read thread on every drive and pause it anytime that drive is
accessed, so an access on one drive won't slow down the others.  With
traditional RAID you have to read all the redundancy data
synchronously, because you can't check the integrity of any of it
without the full set.  I think even ZFS is stuck doing synchronous
reads due to how it stores/computes the checksums.  This is something
btrfs got right.
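
Something like this per drive (an illustrative sketch; busy would be
e.g. a threading.Event, and drive.blocks(), verify_checksum() and
repair() are made up for the example):

    import time

    def scrub_drive(drive, busy):
        # One idle-priority reader per drive; each drive's scrub can
        # proceed (or pause) on its own, because every block carries
        # its own checksum.
        for block in drive.blocks():
            while busy.is_set():      # foreground I/O on *this* drive
                time.sleep(0.01)      # back off; others keep scrubbing
            if not block.verify_checksum():
                drive.repair(block)   # rebuild from a good copy elsewhere

With parity RAID there is no per-drive loop like this: verifying any
block means reading the whole stripe from every member at once.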

>
>>  For the moment I'm
>> relying more on zfs.
>
> How does it perform memory-wise? Especially, I'm currently using bees[1]
> for deduplication: It uses a 1G memory mapped file (you can choose
> other sizes if you want), and it picks up new files really fast, within
> a minute. I don't think zfs can do anything like that within the same
> resources.

I'm not using deduplication, but my understanding is that zfs deduplication:
1.  Works just fine.
2.  Uses a TON of RAM.

So, it might not be your cup of tea.  There is also no way to do
semi-offline dedup as with btrfs (not really offline, in that the
filesystem stays fully running - you just periodically scan for dups
and fix them after the fact, vs detecting them in realtime).  With a
semi-offline mode the performance hit would only come at a time of my
choosing, vs using gobs of RAM all the time to detect what are
probably fairly rare dups.
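
The scan itself can be as simple as this (a toy whole-file version in
python; bees actually works at the extent level and then deduplicates
via ioctl, so this is a big simplification):

    import hashlib, os

    def find_dups(root):
        # Walk the tree periodically, hash file contents, and report
        # anything with more than one identical copy.  The RAM cost
        # is paid only while the scan runs, not continuously.
        seen = {}
        for dirpath, _, names in os.walk(root):
            for name in names:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        h = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    continue
                seen.setdefault(h, []).append(path)
        return [paths for paths in seen.values() if len(paths) > 1]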

That aside, I find it works fine memory-wise (again, I don't use
dedup).  It has its own cache (the ARC) that isn't fully integrated
into the kernel's native page cache, so it tends to hold on to a lot
more RAM than other filesystems, but you can tune this behavior so
that it stays fairly tame.
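
For example, on ZFS on Linux you can cap the ARC with the
zfs_arc_max module parameter (value in bytes; the 4G here is just an
example, pick whatever suits your box):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=4294967296

You can also poke the same knob at runtime through
/sys/module/zfs/parameters/zfs_arc_max.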

-- 
Rich
