On 2015-09-17 08:29, Stéphane Lesimple wrote:
On 2015-09-16 15:04, Stéphane Lesimple wrote:
I also disabled quota because it almost certainly has nothing
to do with the bug
As it turns out, it seems that this assertion was completely wrong.
I've got balance running for more than 16 hours
On 2015-09-17 08:42, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 08:11 +0200:
On 2015-09-17 05:03, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/16 22:41 +0200:
On 2015-09-16 22:18, Duncan wrote:
Stéphane Lesimple posted on Wed, 16 Sep 2015 15:04:20 +0200 as
excerpted:
Hi Qu,
On 09/17/2015 09:48 AM, Qu Wenruo wrote:
To Anand Jain,
Any feedback on this method to allow single chunks to still be
mountable in degraded mode?
It should be much better than allowing degraded mount for any missing
device case.
yeah. this changes the way missing devices are counted and its mo
Stéphane Lesimple wrote on 2015/09/17 10:02 +0200:
On 2015-09-17 08:42, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 08:11 +0200:
On 2015-09-17 05:03, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/16 22:41 +0200:
On 2015-09-16 22:18, Duncan wrote:
Stéphane Lesimple
Stéphane Lesimple wrote on 2015/09/17 08:11 +0200:
On 2015-09-17 05:03, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/16 22:41 +0200:
On 2015-09-16 22:18, Duncan wrote:
Stéphane Lesimple posted on Wed, 16 Sep 2015 15:04:20 +0200 as
excerpted:
Well actually it's the (d) option
On 09/16/2015 11:43 AM, Qu Wenruo wrote:
As we do the per-chunk missing-device count check at read_one_chunk() time,
the global missing-device count check is no longer needed.
Just remove it.
However, the missing device count we have during remount is not
fine-grained per chunk.
-
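A minimal sketch of the per-chunk idea being discussed, for readers following along. This is not the actual patch; the struct and field names below are made up for illustration. The logic is the one described above: count the missing devices among a chunk's stripes and compare against what that chunk's profile can tolerate, instead of keeping one filesystem-wide count at read_one_chunk() time.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for the real btrfs chunk structures. */
struct chunk_info {
    int num_stripes;            /* devices this chunk is spread across    */
    bool dev_missing[16];       /* true if that stripe's device is absent */
    int max_tolerated_missing;  /* 0: single/raid0, 1: raid1/raid10, ...  */
};

/* Per-chunk check, done when the chunk is read in, instead of one
 * global missing-device counter for the whole filesystem. */
static bool chunk_degraded_mountable(const struct chunk_info *chunk)
{
    int missing = 0;

    for (int i = 0; i < chunk->num_stripes; i++)
        if (chunk->dev_missing[i])
            missing++;

    return missing <= chunk->max_tolerated_missing;
}

int main(void)
{
    /* A raid1 chunk with one of its two devices missing still passes. */
    struct chunk_info raid1 = { .num_stripes = 2,
                                .dev_missing = { false, true },
                                .max_tolerated_missing = 1 };
    printf("raid1 chunk mountable degraded: %d\n",
           chunk_degraded_mountable(&raid1));
    return 0;
}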
Thanks for pointing this out.
Although the previous patch is small enough, for the remount case we need
to iterate over all the existing chunk caches.
So the fix for remount will take a little more time.
Thanks for reviewing.
Qu
On 2015-09-17 17:43, Anand Jain wrote:
On 09/16/2015 11:43 AM, Qu Wenruo wro
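For the remount case Qu mentions, the same check just has to be repeated over every chunk already cached. A self-contained sketch of that loop follows; the names are invented, and the real code would walk the kernel's chunk mapping structures rather than a plain array.

#include <stdbool.h>
#include <stddef.h>

/* Invented per-chunk summary of what matters for this check. */
struct chunk_summary {
    int missing;    /* missing devices among this chunk's stripes */
    int tolerated;  /* how many the chunk's profile can lose      */
};

/* Remount-time idea: re-walk every cached chunk instead of trusting a
 * single filesystem-wide missing-device count. */
static bool degraded_remount_ok(const struct chunk_summary *chunks, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (chunks[i].missing > chunks[i].tolerated)
            return false;   /* this chunk cannot survive the missing device */
    return true;
}

int main(void)
{
    struct chunk_summary cached[] = { { 1, 1 }, { 0, 0 } };
    return degraded_remount_ok(cached, 2) ? 0 : 1;
}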
On 2015-09-17 10:11, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 10:02 +0200:
On 2015-09-17 08:42, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 08:11 +0200:
On 2015-09-17 05:03, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/16 22:41 +0200:
On 2015-09-16 22
On 2015-09-17 18:08, Stéphane Lesimple wrote:
On 2015-09-17 10:11, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 10:02 +0200:
On 2015-09-17 08:42, Qu Wenruo wrote:
Stéphane Lesimple wrote on 2015/09/17 08:11 +0200:
On 2015-09-17 05:03, Qu Wenruo wrote:
Stéphane Lesimple wrot
Hello guys
I think I might have found a bug. Lots of text; I don't know what you want
from me and what not, so I'll try to get almost everything in one mail, please
don't shoot me! :)
To make a long story somewhat short, this is about what happened to me;
(skip ahead if you don't care about history)
Arch-linux
On 2015-09-16 19:31, Hugo Mills wrote:
On Wed, Sep 16, 2015 at 03:21:26PM -0400, Austin S Hemmelgarn wrote:
On 2015-09-16 12:45, Martin Tippmann wrote:
2015-09-16 17:20 GMT+02:00 Austin S Hemmelgarn:
[...]
[...]
From reading the list I understand that btrfs is still very much work
in progre
On 16 September 2015 at 20:21, Austin S Hemmelgarn wrote:
> ZFS has been around for much longer, it's been mature and feature complete
> for more than a decade, and has had a long time to improve performance wise.
> It is important to note though, that on low-end hardware, BTRFS can (and
> oft
Thanks for the report.
There is a bug where a raid1 with one disk missing fails when you try to
mount it for the 2nd time. I am not too sure whether the boot
process does a mount and then a remount/mount again? If yes, then
it is potentially hitting the problem addressed in the patch below.
On Wed, Sep 16, 2015 at 5:56 PM, erp...@gmail.com wrote:
> What I expected to happen:
> I expected that the system would either start as if nothing were
> wrong, or would warn me that one half of the mirror was missing and
> ask if I really wanted to start the system with the root array in a
> de
On Thu, Sep 17, 2015 at 9:18 AM, Anand Jain wrote:
>
> as of now it would/should start normally only when there is an entry -o
> degraded
>
> it looks like -o degraded is going to be a very obvious feature,
> I have plans of making it a default feature, and provide -o
> nodegraded feature inst
On Wednesday, 16 September 2015, 23:29:30 CEST, Hugo Mills wrote:
> > but even then having write-barriers
> > turned off is still not as safe as having them turned on. Most of
> > the time when I've tried testing with 'nobarrier' (not just on BTRFS
> > but on ext* as well), I had just as many iss
Hi Anand,
On 2015-09-17 17:18, Anand Jain wrote:
> it looks like -o degraded is going to be a very obvious feature,
> I have plans of making it a default feature, and provide -o
> nodegraded feature instead. Thanks for comments if any.
>
> Thanks, Anand
I am not sure if there is a "good" def
Hi,
thank you for your answers!
So it seems there are several suboptimal alternatives here...
MD+LVM is very close to what I want, but md has no way to cope with
silent data corruption. So if I'd want to use a guest filesystem that
has no checksums either, I'm out of luck.
I'm honestly a bit
On Thu, Sep 17, 2015 at 11:56 AM, Gert Menke wrote:
> Hi,
>
> thank you for your answers!
>
> So it seems there are several suboptimal alternatives here...
>
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a guest filesystem tha
On 17 September 2015 at 18:56, Gert Menke wrote:
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a guest filesystem that has no
> checksums either, I'm out of luck.
> I'm honestly a bit confused here - isn't checksumming one of
On 2015-09-17 12:41, Qu Wenruo wrote:
In the meantime, I've reactivated quotas, umounted the filesystem and
ran a btrfsck on it: as you would expect, there's no qgroup problem
reported so far.
At least, rescan code is working without problem.
I'll clear all my snapshots, run a quota resc
On Thu, 17 Sep 2015 19:00:08 +0200
Goffredo Baroncelli wrote:
> On 2015-09-17 17:18, Anand Jain wrote:
> > it looks like -o degraded is going to be a very obvious feature,
> > I have plans of making it a default feature, and provide -o
> > nodegraded feature instead. Thanks for comments if any
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
> Hi,
>
> thank you for your answers!
>
> So it seems there are several suboptimal alternatives here...
>
> MD+LVM is very close to what I want, but md has no way to cope with
> silent data corruption. So if I'd want to use a guest file
On Thu, Sep 17, 2015 at 1:02 PM, Roman Mamedov wrote:
> On Thu, 17 Sep 2015 19:00:08 +0200
> Goffredo Baroncelli wrote:
>
>> On 2015-09-17 17:18, Anand Jain wrote:
>> > it looks like -o degraded is going to be a very obvious feature,
>> > I have plans of making it a default feature, and provide
On 17.09.2015 at 20:35, Chris Murphy wrote:
You can use Btrfs in the guest to get at least notification of SDC.
Yes, but I'd rather not depend on all potential guest OSes having btrfs
or something similar.
Another way is to put a conventional fs image on e.g. GlusterFS with
checksumming enabl
On 17.09.2015 at 21:43, Hugo Mills wrote:
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
BTRFS looks really nice feature-wise, but is not (yet) optimized for
my use-case I guess. Disabling COW would certainly help, but I don't
want to lose the data checksums. Is nodatacowbutkeepdata
On Thu, Sep 17, 2015 at 07:56:08PM +0200, Gert Menke wrote:
> MD+LVM is very close to what I want, but md has no way to cope with silent
> data corruption. So if I'd want to use a guest filesystem that has no
> checksums either, I'm out of luck.
> I'm honestly a bit confused here - isn't checksummi
Zygo Blaxell posted on Wed, 16 Sep 2015 18:08:56 -0400 as excerpted:
> On Wed, Sep 16, 2015 at 03:04:38PM -0400, Vincent Olivier wrote:
>>
>> OK fine. Let it be clearer then (on the Btrfs wiki): nobarrier is an
>> absolute no go. Case closed.
>
> Sometimes it is useful to make an ephemeral files
Stéphane Lesimple wrote on 2015/09/17 20:47 +0200:
On 2015-09-17 12:41, Qu Wenruo wrote:
In the meantime, I've reactivated quotas, umounted the filesystem and
ran a btrfsck on it: as you would expect, there's no qgroup problem
reported so far.
At least, rescan code is working without pro
Anand Jain posted on Thu, 17 Sep 2015 23:18:36 +0800 as excerpted:
>> What I expected to happen:
>> I expected that the [btrfs raid1 data/metadata] system would either
>> start as if nothing were wrong, or would warn me that one half of the
>> mirror was missing and ask if I really wanted to start
On 09/17/2015 06:01 PM, Qu Wenruo wrote:
Thanks for pointing this out.
Although the previous patch is small enough, for the remount case we need
to iterate over all the existing chunk caches.
yes indeed.
thinking hard on this - is there any test-case that these two patches
are solving, which t
Chris Murphy posted on Thu, 17 Sep 2015 12:35:41 -0600 as excerpted:
> You'd use Btrfs snapshots to create a subvolume for doing backups of
> the images, and then get rid of the Btrfs snapshot.
The caveat here is that if the VM/DB is active during the backups (btrfs
send/receive or other), it'll
Anand Jain wrote on 2015/09/18 09:47 +0800:
On 09/17/2015 06:01 PM, Qu Wenruo wrote:
Thanks for pointing this out.
Although the previous patch is small enough, for the remount case we need
to iterate over all the existing chunk caches.
yes indeed.
thinking hard on this - is there any test-
Hugo Mills posted on Thu, 17 Sep 2015 19:43:14 + as excerpted:
>> Is nodatacowbutkeepdatachecksums a feature that might turn up
>> in the future?
>
> No. If you try doing that particular combination of features, you
> end up with a filesystem that can be inconsistent: there's a race
> conditi
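My reading of the race being described here, as an illustrative sketch only (all names below are made up, and a toy checksum stands in for crc32c): with CoW, new data and its checksum become visible atomically when the new metadata is committed, but with in-place overwrites the data write and the checksum update are separate steps, so a crash between them leaves a block whose stored checksum no longer matches its contents, indistinguishable from real corruption.

#include <stdint.h>
#include <string.h>

struct block {
    uint8_t  data[4096];
    uint32_t csum;
};

/* Toy checksum, standing in for crc32c. */
static uint32_t toy_csum(const uint8_t *p, size_t len)
{
    uint32_t c = 0;
    for (size_t i = 0; i < len; i++)
        c = c * 31u + p[i];
    return c;
}

static void overwrite_nodatacow(struct block *b, const uint8_t *newdata)
{
    memcpy(b->data, newdata, sizeof(b->data));      /* step 1: overwrite */
    /* A crash here leaves b->csum describing the old contents, so the
     * block reads back as a checksum failure even though the device did
     * nothing wrong. Reversing the two steps has the mirror problem.   */
    b->csum = toy_csum(b->data, sizeof(b->data));   /* step 2: new csum  */
}

int main(void)
{
    static struct block b;
    static const uint8_t newdata[4096] = { 42 };
    overwrite_nodatacow(&b, newdata);
    return 0;
}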
I think you have stated that in a very polite and friendly way. I'm
pretty sure I'd phrase it less politely :)
Following mdadm's example, an easy option to allow degraded mounting
makes sense, but it shouldn't be the default. Anyone with the expertise
to set that option can be expected to implement a way
Hi Qu,
Thanks for the comments on patch [1].
For example, if one use single metadata for 2 disks,
> and each disk has one metadata chunk on it.
How can that be achieved?
One device got missing later.
it would surely depend on which one of the devices? (initial only
devid 1 mountable
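To make the scenario concrete, a hypothetical walk-through follows (invented types, not btrfs code) of why it depends on which device goes missing: with single-profile metadata, a chunk is lost as soon as the one device holding it is lost, so degraded mountability hinges on where the chunks ended up.

#include <stdbool.h>
#include <stdio.h>

struct chunk {
    int on_devid;   /* single profile: the one device holding this chunk */
};

static bool mountable_degraded(const struct chunk *chunks, int n,
                               int missing_devid)
{
    for (int i = 0; i < n; i++)
        if (chunks[i].on_devid == missing_devid)
            return false;   /* some chunk lives only on the lost disk */
    return true;
}

int main(void)
{
    /* If the metadata chunks all sit on devid 1 ...                    */
    struct chunk initial[] = { { .on_devid = 1 } };
    printf("only devid 1 used, devid 2 missing: %d\n",
           mountable_degraded(initial, 1, 2));   /* 1: still mountable  */
    printf("only devid 1 used, devid 1 missing: %d\n",
           mountable_degraded(initial, 1, 1));   /* 0: not mountable    */

    /* ... but once each disk holds a metadata chunk, losing either one
     * makes some chunk unreachable.                                     */
    struct chunk spread[] = { { .on_devid = 1 }, { .on_devid = 2 } };
    printf("both devids used, devid 2 missing: %d\n",
           mountable_degraded(spread, 2, 2));    /* 0: not mountable    */
    return 0;
}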