Martin Steigerwald posted on Fri, 11 May 2012 18:58:05 +0200 as excerpted:

> On Friday, 11 May 2012, Duncan wrote:
>> Daniel Pocock posted on Wed, 09 May 2012 22:01:49 +0000 as excerpted:
>> > There is various information about - enterprise-class drives

>> This isn't a direct answer to that, but expressing a bit of concern
>> over  the implications of your question, that you're planning on using
>> btrfs in an enterprise class installation.

>> [In] mainline Linux kernel terms, btrfs remains very much an
>> experimental filesystem

>> On an experimental filesystem under as intense continued development as
>> btrfs, by contrast, it's best to consider your btrfs copy an extra
>> "throwaway" copy only intended for testing.  You still have your
>> primary copy, along with all the usual backups, on something less
>> experimental, since you never know when/where/how your btrfs testing
>> will screw up its copy.
> 
> Duncan, did you actually test BTRFS? Theory can't replace real life
> experience.

I /had/ been waiting for the n-way-mirrored raid1 mode roadmapped for
after raid5/6 mode (which should hit 3.5, I believe), but hardware issues
intervened and I'm no longer using those older 4-way md/raid drives as
primary.

And now that I have some, my personal experience does not contradict
what I posted.  btrfs does indeed work reasonably well under reasonably
good, non-stressful conditions.  But my experience so far aligns quite
well with the "consider the btrfs copy a throw-away copy, just in case"
recommendation.  Just because it's a throw-away copy doesn't mean you'll
have to resort to the "good" copy elsewhere, but it DOES hopefully mean
that you'll have both a "good" copy elsewhere and a backup of that
supposedly good copy, just in case btrfs does go bad and that supposedly
good primary copy ends up not being good after all.

> Of all my personal BTRFS installations, not one has gone corrupt - and
> I have at least four, while more of them are in use at my employer.
> Except maybe a scratch-data BTRFS RAID 0 over lots of SATA disks, but
> maybe it would have been fixable by btrfs-zero-log, which I didn't know
> of back then. Another one needed a btrfs-zero-log, but that was quite
> some time ago.
> 
> Some of the installations are in use for more than a year AFAIR.
> 
> While I would still be reluctant with deploying BTRFS for a customer for
> critical data

This was actually my point in this thread.  If someone's asking questions
about enterprise-quality hardware, they're not likely to run into some of
the bugs I've been hitting recently, which were exposed by hardware
issues.  However, they're also far more likely to be considering btrfs for
a row-of-nines uptime application, which is, after all, the sort of
deployment many of btrfs' features are aimed at.  Regardless of whether
btrfs is past the "throw-away-data experimental class" stage or not, I
think we both agree it isn't ready for row-of-nines-uptime applications
just yet.  If he's just testing btrfs on such equipment for a possible
future row-of-nines-uptime deployment a year or possibly two out, great.
If he's looking at such a deployment two months out, no way, and it looks
like you agree.

-- 
Duncan - List replies preferred.   No HTML msgs.
"Every nonfree program has a lord, a master --
and if you use the program, he is your master."  Richard Stallman
