On Sun, Sep 19, 2010 at 01:55:34AM +0200, Roy Sigurd Karlsbakk wrote:
> ----- Original Message -----
> > On Sat, Sep 18, 2010 at 11:37 PM, Roy Sigurd Karlsbakk
> > <r...@karlsbakk.net> wrote:
> > > Hi all
> > >
> > > I've been on this list for a year or so, and I have been following
> > > progress for some more. Are there any chances of btrfs stabilizing,
> > > as in terms of usability in production? If so, how far are we from
> > > this?
> > Hi,
> > 
> > I have been using btrfs as my root filesystem on my Debian squeeze
> > machine for a few months now, and so far I haven't experienced any
> > problems. It seems quite stable to me. I am not using the raid
> > functions, but I am also very interested in progress on raid5/6.
> 
> I was more interested in large setups than in a general install.
> 
> The question remains: when is btrfs supposed to be stable, as in usable
> for large server setups?

   As has been pointed out by Anthony, there's no means of determining
when something is "stable" -- not just for filesystems, but for any
piece of software. All you can do is take a Bayesian approach: sum up
the number (and type) of failures, and compare it to the number of
user-hours that the software has been in use for, across all
installations. When that failure rate (and recovery rate) reaches the
point at which you're happy to use it in your situation -- whether
that's on your bleeding-edge desktop test box, or for running your
robotic heart surgeon -- you can call it stable. However, that point
has to be your decision for your particular use case.
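To make that concrete, here's a minimal sketch of the arithmetic involved. All the numbers are hypothetical placeholders -- nobody actually publishes btrfs user-hour figures -- but the shape of the calculation is the point:

```python
# Hypothetical inputs: you would have to gather these yourself,
# e.g. from mailing-list reports and install-base estimates.
failures = 12            # reported failures over the observation window
user_hours = 3_500_000   # total user-hours across all installations

# Failure rate per user-hour, and its reciprocal (mean user-hours
# between failures). Whether either number is "good enough" depends
# entirely on your own use case.
rate = failures / user_hours
mtbf = user_hours / failures

print(f"failure rate: {rate:.2e} per user-hour")  # prints "failure rate: 3.43e-06 per user-hour"
print(f"mean user-hours between failures: {mtbf:.0f}")
```

A desktop test box might tolerate that rate happily; a robotic heart surgeon presumably would not. The calculation doesn't change, only your threshold does.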

   If you're now thinking, "but where do I get that information
from?", congratulations -- you now know nearly as much about the user
base as the btrfs developers. :) Your best bet is to keep an eye on
this mailing list, and take a look at the number and type of reported
failures. When that drops to the point that you feel safe, go ahead
and use it.

   An alternative approach is to install a btrfs setup on your
internal development or test machines (you *do* have a test
infrastructure for your mission-critical systems, right?), hammer it
with the closest thing you can get to a real workload, and see what
happens. Again, this is a statistical approach. It's the best we've
got.

   At some point, we(*) hope, btrfs will have millions upon millions
of users, doing all kinds of bad things to it, and tiny fractions of
them will have problems. When that happens, someone will probably
start calling it stable, and the name will stick. Until then, many
people are happy with it for their uses, but nobody can (or will)
magically stick a label on a piece of code of this complexity and say
"it's stable now!"

   Hugo.

(*) Speaking as an interested nobody, rather than a developer.

-- 
=== Hugo Mills: h...@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
                  --- Be pure. Be vigilant. Behave. ---                  
