On Mon, Sep 20, 2010 at 02:21:15PM +0200, Stephan von Krawczynski wrote:
> On Mon, 20 Sep 2010 07:30:57 -0400
> Chris Mason <chris.ma...@oracle.com> wrote:
>
> > On Mon, Sep 20, 2010 at 11:00:08AM +0000, Lubos Kolouch wrote:
> > > No, not stable!
> > >
> > > Again, after powerloss, I have *two* damaged btrfs filesystems.
> >
> > Please tell me more about your system. I do extensive power fail
> > testing here without problems, and corruptions after powerloss are very
> > often caused by the actual hardware.
> >
> > So, what kind of drives do you have, do they have writeback caching on,
> > and what are you layering on top of the drive between btrfs and the
> > kernel?
> >
> > -chris
>
> Chris, the way a fs was damaged should not be relevant. From a new fs
> design one should expect that the tree can be mounted no matter what
> corruption took place, up to the case where the fs is simply empty after
> mounting because it was completely corrupted. If parts are corrupt, the fs
> should either be able to assist the user in correcting the damage _online_,
> or at least exclude the damaged parts from the mounted fs tree. The basic
> thought must be "show me what you have", not "shit, how do I get access to
> the working but not mountable fs parts again?".
> Would you buy a car that refuses to drive because the ash tray is broken?
Definitely, this has always been one of our goals. Step one is a better
btrfsck, which is in progress now. Step two is being able to do more of
this online.

-chris
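
For the writeback-caching question quoted above, here is a minimal sketch of
one way to check whether a drive's volatile write cache is on. It assumes
hdparm is installed, that /dev/sda stands in for the disk backing the
filesystem, and that hdparm reports the setting as "1 (on)"; adjust for your
own setup.

#!/usr/bin/env python3
# Sketch: query a drive's write-cache state via hdparm (assumptions above).
import subprocess

def write_cache_enabled(device: str) -> bool:
    """Return True if hdparm reports the drive's write cache as enabled."""
    # "hdparm -W <device>" with no value prints the current write-caching
    # setting, e.g. " write-caching =  1 (on)".
    out = subprocess.run(
        ["hdparm", "-W", device],
        capture_output=True, text=True, check=True,
    ).stdout
    return "1 (on)" in out

if __name__ == "__main__":
    dev = "/dev/sda"  # hypothetical device; point this at your actual disk
    state = "on" if write_cache_enabled(dev) else "off"
    print("write cache on %s is %s" % (dev, state))

A drive that answers "on" here can still lose cached writes on power failure
unless it honors cache flushes, which is why the question matters for
post-powerloss corruption reports.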