On May 3, 2014, at 1:09 PM, Chris Murphy wrote:
>
> On May 3, 2014, at 10:31 AM, Austin S Hemmelgarn wrote:
>
>> On 05/02/2014 03:21 PM, Chris Murphy wrote:
>>>
>>> Btrfs raid1 with 3+ devices is unique as far as I can tell. It is
>>> something like raid1 (2 copies) + linear/concat. But that
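For concreteness, a sketch of that layout (device names hypothetical): with
three or more devices, btrfs raid1 still stores exactly two copies of each
chunk, allocated across whichever devices have the most free space.

    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd
    mount /dev/sdb /mnt
    btrfs filesystem df /mnt    # profiles still read "RAID1" = 2 copies,
                                # not 3, despite the 3 devices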
On May 2, 2014, at 2:23 AM, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Something tells me btrfs replace (not device replace, simply replace)
> should be moved to btrfs device replace…
The syntax for "btrfs device" is different though; replace is like balance:
btrfs balance start and btrfs replace start
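For reference, the two shapes side by side (paths hypothetical, per current
btrfs-progs syntax):

    # "btrfs device" subcommands take a device plus the mount point:
    btrfs device add    /dev/sdc /mnt
    btrfs device delete /dev/sdb /mnt

    # "replace", like "balance", is a start/status/cancel group:
    btrfs balance start /mnt
    btrfs replace start /dev/sdb /dev/sdc /mnt
    btrfs replace status /mnt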
On 02/05/14 10:23, Duncan wrote:
> Russell Coker posted on Fri, 02 May 2014 11:48:07 +1000 as excerpted:
>> On Thu, 1 May 2014, Duncan <1i5t5.dun...@cox.net> wrote:
> [snip]
> http://www.eecs.berkeley.edu/Pubs/TechRpts/1987/CSD-87-391.pdf
> Whether a true RAID-1 means just 2 copies or N copies is a matter
Russell Coker posted on Fri, 02 May 2014 11:48:07 +1000 as excerpted:
> On Thu, 1 May 2014, Duncan <1i5t5.dun...@cox.net> wrote:
>
> Am I missing something or is it impossible to do a disk replace on BTRFS
> right now?
>
> I can delete a device, I can add a device, but I'd like to replace a
> device
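The add-then-delete workaround referenced above looks like this (names
hypothetical); the delete step migrates all data off the old device before
removing it, so it amounts to a slow replace:

    btrfs device add /dev/sdc /mnt
    btrfs device delete /dev/sdb /mnt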
On Thu, 1 May 2014, Duncan <1i5t5.dun...@cox.net> wrote:
> That's why I'm running raid1 for both data and metadata here. I love
> btrfs' data/metadata checksumming and integrity mechanisms, and having
> that second copy to scrub from in the event of an error on one of them is
> just as important t
Russell Coker posted on Thu, 01 May 2014 11:52:33 +1000 as excerpted:
> I've just been doing some experiments with a failing disk used for
> backups (so I'm not losing any real data here).
=:^)
> The "dup" option for metadata means that the entire filesystem
> structure is intact in spite of hav
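To reproduce that setup (device name hypothetical; dup metadata is typically
the mkfs default on a single rotational disk anyway):

    mkfs.btrfs -m dup -d single /dev/sdb   # 2 copies of metadata, 1 of data
    mount /dev/sdb /mnt
    btrfs scrub start /mnt     # verifies checksums and repairs from the
                               # duplicate copy where one copy is bad
    btrfs scrub status /mnt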
On Fri, 28 Feb 2014 10:34:36 Roman Mamedov wrote:
> > I've an 18 tera hardware raid 5 (areca ARC-1170 w/ 8 3 TB drives) in
>
> Do you sleep well at night knowing that if one disk fails, you end up with
> basically a RAID0 of 7x3TB disks? And that if a 2nd one encounters an
> unreadable sector during rebuild
Absolutely. I'd like to know the answer to this, as 13 tera will take
a considerable amount of time to back up anywhere, assuming I find a
place. I'm considering rebuilding a smaller raid with newer drives
(it was originally built using 16 250 gig western digital drives, it's
about eleven years old
Apologies for the late reply, I'd assumed the issue was closed even
given the unusual behavior. My mount options are:
/dev/sdb1 on /var/lib/nobody/fs/ubfterra type btrfs
(rw,noatime,nodatasum,nodatacow,noacl,space_cache,skip_balance)
I only recently added nodatacow and skip_balance in an attempt
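One caveat with that change (standard btrfs behavior, not specific to this
report): nodatacow added at mount time only affects newly created files;
existing files keep their checksummed CoW extents. The per-file equivalent is:

    chattr +C /path/to/file    # NOCOW attribute; only takes effect if set
                               # while the file is still empty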
Roman Mamedov posted on Fri, 28 Feb 2014 10:34:36 +0600 as excerpted:
> But then as others mentioned it may be risky to use this FS on 32-bit at
> all, so I'd suggest trying anything else only after you reboot into a
> 64-bit kernel.
Based on what I've read on-list, btrfs is not arch-agnostic, with certain
on-disk sizes set to native kernel page size, etc, so a filesystem created
on one arch may well not work on another.

Question: Does t
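The page-size dependence is easy to observe (a sketch; values vary by arch
and kernel config):

    getconf PAGE_SIZE             # 4096 on x86/x86_64; 65536 on some
                                  # ppc64/arm64 kernels
    mkfs.btrfs -s 4096 /dev/sdb   # pins the sectorsize at mkfs time; whether
                                  # a kernel with a different page size can
                                  # mount it depends on kernel support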
On Feb 27, 2014, at 11:19 AM, Justin Brown wrote:
> terra:/var/lib/nobody/fs/ubfterra # btrfs fi df .
> Data, single: total=17.58TiB, used=17.57TiB
> System, DUP: total=8.00MiB, used=1.93MiB
> System, single: total=4.00MiB, used=0.00
> Metadata, DUP: total=392.00GiB, used=33.50GiB
> Metadata, si
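Note the gap in those numbers: the DUP metadata chunks have 392GiB allocated
but only 33.5GiB used, with leftover "single" chunks beside them. Balance
filters can compact and convert that (mount point hypothetical):

    btrfs balance start -musage=50 /mnt          # rewrite metadata chunks
                                                 # that are <50% used
    btrfs balance start -mconvert=dup,soft /mnt  # convert leftover single
                                                 # chunks; "soft" skips chunks
                                                 # already in the target profile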
On Feb 27, 2014, at 9:21 PM, Dave Chinner wrote:
>>
>> http://lists.centos.org/pipermail/centos/2011-April/109142.html
>
> No, he didn't fill it with 16TB of data and then have it fail. He
> made a new filesystem *larger* than 16TB and tried to mount it:
>
> | On a CentOS 32-bit backup s
On Thu, 27 Feb 2014 12:19:05 -0600
Justin Brown wrote:
> I've an 18 tera hardware raid 5 (areca ARC-1170 w/ 8 3 TB drives) in
Do you sleep well at night knowing that if one disk fails, you end up with
basically a RAID0 of 7x3TB disks? And that if a 2nd one encounters an
unreadable sector during rebuild
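The arithmetic behind that worry, assuming the commonly quoted consumer-drive
spec of one unrecoverable read error (URE) per 1e14 bits read:

    rebuild reads 7 disks x 3 TB = 21 TB ~= 1.7e14 bits
    P(no URE) ~= (1 - 1e-14)^(1.7e14) ~= e^-1.7 ~= 0.18

i.e. roughly a four-in-five chance of hitting at least one unreadable sector
before the rebuild completes.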
Yes it's an ancient 32 bit machine. There must be a complex bug involved, as
the system, when originally mounted, claimed the correct free space, and only
as it was used over time did the discrepancy between used and free grow. I'm
afraid I chose btrfs because it appeared capable of breaking the 16 tera
On Feb 27, 2014, at 12:27 PM, Chris Murphy wrote:
> This is on i686?
>
> The kernel page cache is limited to 16TB on i686, so effectively your block
> device is limited to 16TB. While the file system is successfully created, I
> think it's a bug that the mount -t btrfs command is probably a btrfs
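The 16TB figure falls out of the page cache using a 32-bit page index on
i686:

    2^32 pages x 4096 bytes/page = 2^44 bytes = 16 TiB

so any block device accessed through the page cache tops out there,
regardless of the filesystem's own limits.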
I've an 18 tera hardware raid 5 (areca ARC-1170 w/ 8 3 TB drives) in
need of help. Disk usage (du) shows 13 tera allocated, yet strangely
enough df shows approx. 780 gigs are free. It seems, somehow, btrfs
has eaten roughly 4 tera internally. I've run a scrub and a balance
usage=5 with no success
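For completeness, the commands in question plus the usual way to see where
the "missing" space sits (mount point hypothetical); btrfs fi df separates
space allocated to chunks from space actually used inside them:

    btrfs scrub start /mnt
    btrfs balance start -dusage=5 /mnt   # rewrite data chunks <=5% used
    btrfs filesystem show                # per-device: allocated vs size
    btrfs filesystem df /mnt             # per-profile: total (allocated)
                                         # vs used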