On 12/12/13, Chris Mason wrote:
> For me anyway, data=dup in mixed mode is definitely an accident ;)
> I personally think data dup is a false sense of security, but drives
> have gotten so huge that it may actually make sense in a few
> configurations.
Sure, it's not about any security regarding
David Sterba posted on Thu, 12 Dec 2013 18:58:16 +0100 as excerpted:
> I've been testing --mixed mode with various other raid profile types as
> far as I remember. Some bugs popped up; they were reported and Josef fixed them.
FWIW, I'm running a mixed-mode btrfs raid1 setup here on my log partition
and ha
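As a rough sketch (device names are placeholders, not taken from the thread),
a mixed-mode raid1 filesystem of that kind can be created with:

  # mixed block groups, raid1 for both data and metadata, two devices
  mkfs.btrfs -M -d raid1 -m raid1 /dev/sdb /dev/sdc

With -M the data and metadata profiles have to match, since both kinds of
block live in the same block groups.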
On Thu, Dec 12, 2013 at 10:57:33AM -0500, Chris Mason wrote:
> Quoting Duncan (2013-12-11 13:27:53)
> > Imran Geriskovan posted on Wed, 11 Dec 2013 15:19:29 +0200 as excerpted:
> >
> > > Now, there is one open issue:
> > > In its current form "-d dup" interferes with "-M". Is it a constraint of
> >
Quoting Duncan (2013-12-11 13:27:53)
> Imran Geriskovan posted on Wed, 11 Dec 2013 15:19:29 +0200 as excerpted:
>
> > Now, there is one open issue:
> > In its current form "-d dup" interferes with "-M". Is it a constraint of
> > design?
> > Or an arbitrary/temporary constraint. What will be the situ
Imran Geriskovan posted on Wed, 11 Dec 2013 15:19:29 +0200 as excerpted:
> Now, there is one open issue:
> In its current form "-d dup" interferes with "-M". Is it a constraint of
> design?
> Or an arbitrary/temporary constraint. What will be the situation if
> there are tunable duplicates?
I believ
Hugo Mills posted on Wed, 11 Dec 2013 08:09:02 +0000 as excerpted:
> On Tue, Dec 10, 2013 at 09:07:21PM -0700, Chris Murphy wrote:
>>
>> On Dec 10, 2013, at 8:19 PM, Imran Geriskovan
>> wrote:
>> >
>> > Now the question is, is it a good practice to use "-M" for large
>> > filesystems?
>>
>> Un
On Dec 11, 2013, at 1:09 AM, Hugo Mills wrote:
> That documentation needs tweaking. You need --mixed/-M for larger
> filesystems than that. It's hard to say exactly where the optimal
> boundary is, but somewhere around 16 GiB seems to be the dividing
> point (8 GiB is in the "mostly going to c
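For a filesystem under that rough boundary, mixed block groups can be
requested explicitly at mkfs time; an illustrative sketch with a
placeholder device name:

  # small filesystem (well under ~16 GiB): mix data and metadata chunks
  mkfs.btrfs -M /dev/sdX1

Above the boundary, leaving -M off so mkfs creates separate data and
metadata chunks is the usual choice.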
>> What's more (in relation to our long-term data integrity aim), the
>> order of magnitude for their unpowered data retention period is
>> 1 YEAR. (Read it as 6 months to 2-3 years. While powered they
>> refresh/shuffle the blocks.)
> Does btrfs need to date-stamp each block/chunk to ensure that data is
> rewritten before suffering flash memory bitrot?
On 11/12/13 03:19, Imran Geriskovan wrote:
SSDs:
> What's more (in relation to our long-term data integrity aim), the
> order of magnitude for their unpowered data retention period is
> 1 YEAR. (Read it as 6 months to 2-3 years. While powered they
> refresh/shuffle the blocks.) This makes SSDs
> unsuita
> That's actually the reason btrfs defaults to SINGLE metadata mode on
> single-device SSD-backed filesystems, as well.
>
> But as Imran points out, SSDs aren't all there is. There's still
> spinning rust around.
>
> And defaults aside, even on SSDs it should be /possible/ to specify data-
> dup m
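If the goal is dup metadata on a single SSD despite that default, it can
be requested explicitly; a minimal sketch, assuming mkfs honours the
explicit profile on an SSD-backed device (placeholder device name):

  # force dup metadata even though the device is detected as an SSD
  mkfs.btrfs -m dup /dev/sdX1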
Chris Murphy posted on Tue, 10 Dec 2013 17:33:59 -0700 as excerpted:
> On Dec 10, 2013, at 5:14 PM, Imran Geriskovan
> wrote:
>
>>> Current btrfs-progs is v3.12. 0.19 is a bit old. But yes, looks like
>>> the wiki also needs updating.
>>
>>> Anyway I just tried it on an 8GB stick and it works,
On Tue, Dec 10, 2013 at 09:07:21PM -0700, Chris Murphy wrote:
>
> On Dec 10, 2013, at 8:19 PM, Imran Geriskovan
> wrote:
> >
> > Now the question is, is it a good practice to use "-M" for large
> > filesystems?
> > Pros, Cons? What is the performance impact? Or any other possible impact?
>
>
Chris Murphy posted on Tue, 10 Dec 2013 17:33:59 -0700 as excerpted:
> On Dec 10, 2013, at 5:14 PM, Imran Geriskovan
> wrote:
>
>> As the lead developer, is it possible for you to
>> provide some insight into the reliability of this option?
>
> I'm not a developer, I'm just an ape who wea
On Dec 10, 2013, at 8:19 PM, Imran Geriskovan
wrote:
>
> Now the question is, is it a good practice to use "-M" for large filesystems?
> Pros, Cons? What is the performance impact? Or any other possible impact?
Uncertain. man mkfs.btrfs says "Mix data and metadata chunks together for more
eff
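One way to see which layout an existing filesystem actually got is
btrfs filesystem df; for illustration (mount point is a placeholder):

  # a combined "Data+Metadata" line indicates mixed block groups;
  # separate "Data" and "Metadata" lines indicate the normal layout
  btrfs filesystem df /mnt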
> I'm not a developer, I'm just an ape who wears pants. Chris Mason is the
> lead developer. All I can say about it is that it's been working for me OK
> so far.
Great :) Now, I understand that you were using "-d dup", which is quite
valuable for me. And since Gmail only shows first names in Inbox l
On Dec 10, 2013, at 5:14 PM, Imran Geriskovan
wrote:
>> Current btrfs-progs is v3.12. 0.19 is a bit old. But yes, looks like the
>> wiki also needs updating.
>
>> Anyway I just tried it on an 8GB stick and it works, but -M (mixed
>> data+metadata) is required, which documentation also says inc
-- Forwarded message --
From: Imran Geriskovan
Date: Wed, 11 Dec 2013 02:14:25 +0200
Subject: Re: Feature Req: "mkfs.btrfs -d dup" option on single device
To: Chris Murphy
> Current btrfs-progs is v3.12. 0.19 is a bit old. But yes, looks like the
> wiki als
On Dec 10, 2013, at 4:33 PM, Imran Geriskovan
wrote:
>>> Currently, if you want to protect your data against bit-rot on
>>> a single device you must have 2 btrfs partitions and mount
>>> them as Raid1.
>
>> No this also works:
>> mkfs.btrfs -d dup -m dup -M
>
> Thanks a lot.
>
> I guess doc
>> Currently, if you want to protect your data against bit-rot on
>> a single device you must have 2 btrfs partitions and mount
>> them as Raid1.
> No this also works:
> mkfs.btrfs -d dup -m dup -M
Thanks a lot.
I guess docs need an update:
https://btrfs.wiki.kernel.org/index.php/Mkfs.btrfs:
"
On Dec 10, 2013, at 1:31 PM, Imran Geriskovan
wrote:
> Currently, if you want to protect your data against bit-rot on
> a single device you must have 2 btrfs partitions and mount
> them as Raid1.
No this also works:
mkfs.btrfs -d dup -m dup -M
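As an illustrative follow-up (device and mount point are placeholders),
the resulting profiles can be checked and the duplicated copies exercised
with a scrub:

  mkfs.btrfs -d dup -m dup -M /dev/sdX1
  mount /dev/sdX1 /mnt
  btrfs filesystem df /mnt   # should report DUP for the mixed block groups
  btrfs scrub start /mnt     # verifies checksums; with dup, a corrupted
                             # copy can be repaired from the good one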
Chris Murphy
Currently, if you want to protect your data against bit-rot on
a single device you must have 2 btrfs partitions and mount
them as Raid1. The requested option will save the user from
partitioning and will provide flexibility.
Yes, I know: This will not provide any safety against hardware
failure. B