Hi Rick,

On Tue, Dec 26, 2017 at 11:37:32AM -0800, Rick Thomas wrote:
> Is btrfs mature enough to use in enterprise applications?

Not in my opinion. I've dabbled with it at home and based on those
experiences I will not be using it professionally any time soon.

> If you are using it, I’d like to hear from you about your experiences — good 
> or bad.

During the time of Debian wheezy I made use of btrfs for my home
fileserver, which is an HP Microserver with 4x 3.5" SATA drives and
an 8-bay disk chassis with 6 more 3.5" SATA HDDs in it, connected by
eSATA. It had previously been using LVM on top of Linux MD without
issue, but I'd become mindful of the amount of storage that was
being used without consistency checks (except for a weekly MD
scrub).

In order to do this I required a backports kernel and btrfs-tools
from git. I went for one of the simpler btrfs configurations:
RAID-1.

Over the next few years I didn't lose any data, but I did
experience:

- Out-of-space errors even when there was plenty of space

- Filesystems that went read-only on a device failure even though
  there was enough redundancy

- Filesystems that couldn't be remounted read-write after replacing
  a failed device, even though the hardware was hot-swap, due to
  bugs in btrfs which required a kernel upgrade (and therefore a
  reboot) to fix, despite the redundancy otherwise being intact.

In summary, btrfs lowered the availability of the system due to
being buggy even in a relatively unexciting configuration. I could
not recommend it for serious use yet.

I am sure there will be plenty of people who've used it for years
without experiencing any issues. I am still subscribed to the btrfs
mailing list though, and unfortunately I still see people on there
reporting serious issues including data loss.

> My proposed application is for a small community radio station
> music library. We currently have about 5TB of data in a RAID10
> using four 3TB drives, with ext4 over the RAID.  So we’re about
> 75% full, growing at the rate of about 1TB/year, so we’ll run out
> of space by the end of 2018.

A year to solve this problem is nice to have, though. :)
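
To sanity-check that timeline with my own arithmetic (the figures
below are the ones you quoted; nothing here is measured, and growth
is assumed to be linear):

```python
# Back-of-the-envelope projection of when the current 4x3TB RAID-10
# array fills up, using the quoted ~5TB used and ~1TB/year growth.

usable_tb = 4 * 3 / 2       # RAID-10 halves raw capacity: 6TB usable
used_tb = 5                 # roughly 5TB in use now
growth_per_year = 1         # roughly 1TB/year of new material

headroom_tb = usable_tb - used_tb
years_left = headroom_tb / growth_per_year
print(f"{headroom_tb:.0f}TB headroom -> ~{years_left:.1f} year(s) left")
```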

> I’m proposing to go to three 6TB drives in a btrfs/RAID5 configuration.

If I were you I'd think very very carefully before using RAID-5 for
anything, btrfs or not.

- Going from four spindles to three means reduced performance.

- RAID-5 means parity calculations, which means reduced write
  performance.

- Lose one device and you're operating on just two spindles while
  recalculating parity. It will run like a dog, and any further
  error means data loss and array failure, which can be stressful
  and nerve-wracking to repair.

- I wouldn't like to chance my arm finding previously-unknown bad
  areas on two 6TB devices. When you have a three device RAID-5 and
  one device dies, you REQUIRE every sector on both the other
  devices to be readable in order to reconstruct the data onto the
  new device.
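
To put a rough number on that last point, here's a back-of-the-
envelope sketch (my own arithmetic, not a measurement), using the
commonly quoted consumer-drive spec of one unrecoverable read error
per 10^14 bits. Real drives often beat the spec sheet, so treat
this as a worst-ish case rather than a prediction:

```python
import math

# Rough odds of hitting an unrecoverable read error (URE) while
# rebuilding a 3x6TB RAID-5 after one drive fails. The rebuild must
# read every sector of both surviving 6TB drives.

ure_rate = 1e-14             # UREs per bit, per a typical spec sheet
bits_to_read = 2 * 6e12 * 8  # two surviving 6TB drives, in bits

expected_ures = bits_to_read * ure_rate
p_at_least_one = 1 - math.exp(-expected_ures)  # Poisson approximation

print(f"expected UREs during rebuild: {expected_ures:.2f}")
print(f"chance the rebuild hits a bad sector: {p_at_least_one:.0%}")
```

Even if the real URE rate is ten times better than the spec, the
rebuild window is still where a three-device RAID-5 is at its most
fragile.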

Personally I would not risk 3x6TB in any kind of RAID-5. HDDs are
pretty cheap so I really have to question the wisdom of cutting down
the number of spindles this far, and then using RAID-5.

RAID-10: okay, it "wastes" the most capacity in exchange for better
write performance. If this is a media library I get that maybe you
don't need the write performance (have you benchmarked your current
system??). RAID-10 is what you use when you can afford it, but if
you can't then compromises have to be made. In that case consider
RAID-6 on four or more spindles. At least you stand a chance of
being able
to replace a failed device before coming across a bad area on
another device.

Spread the extra cost of whatever you need to make it to four
devices in RAID-6 across the expected lifetime of your system (~6
years?) and does it still seem too much to pay?
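
For comparison, the standard usable-capacity formulas work out like
this (the drive price is a made-up placeholder for illustration,
not a quote; plug in real numbers):

```python
# Capacity/redundancy trade-off with 6TB drives, using the standard
# usable-capacity formulas for each RAID level.

drive_tb = 6
drive_price = 200  # hypothetical cost per drive; adjust to taste
lifetime_years = 6

layouts = {
    "3-drive RAID-5":  (3, (3 - 1) * drive_tb, "1 failure"),
    "4-drive RAID-6":  (4, (4 - 2) * drive_tb, "any 2 failures"),
    "4-drive RAID-10": (4, 4 * drive_tb // 2, "1 per mirror pair"),
}

for name, (n, usable, tolerates) in layouts.items():
    cost = n * drive_price
    print(f"{name}: {usable}TB usable, survives {tolerates}, "
          f"~{cost} in drives (~{cost / lifetime_years:.0f}/year)")
```

The striking part is that four-drive RAID-6 and four-drive RAID-10
both give 12TB usable; the RAID-6 layout additionally survives any
two failures, which is precisely what protects you during a
rebuild.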

Finally, the parity RAID levels in btrfs are newer than RAID-1 and
-10 and have seen a lot more bugs, including really bad data-loss
bugs.

One look at https://btrfs.wiki.kernel.org/index.php/RAID56 should be
enough.

> Would I be safer with ext4 over RAID5?

It's a bit of a frying pan / ground zero nuclear blast situation,
really.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting

"I remember the first time I made love.  Perhaps it was not love exactly but I
 made it and it still works." — The League Against Tedium
