Absolutely. I'd like to know the answer to this, as 13 TB will take
a considerable amount of time to back up anywhere, assuming I find a
place. I'm considering rebuilding a smaller raid with newer drives
(it was originally built using 16 250 GB Western Digital drives; it's
about eleven years old
Apologies for the late reply, I'd assumed the issue was closed even
given the unusual behavior. My mount options are:
/dev/sdb1 on /var/lib/nobody/fs/ubfterra type btrfs
(rw,noatime,nodatasum,nodatacow,noacl,space_cache,skip_balance)
I only recently added nodatacow and skip_balance in an attempt
On Fri, 28 Feb 2014 07:27:06 +0000 (UTC)
Duncan <1i5t5.dun...@cox.net> wrote:
> Based on what I've read on-list, btrfs is not arch-agnostic, with certain
> on-disk sizes set to native kernel page size, etc, so a filesystem
> created on one arch may well not work on another.
>
> Question: Does t
Roman Mamedov posted on Fri, 28 Feb 2014 10:34:36 +0600 as excerpted:
> But then as others mentioned it may be risky to use this FS on 32-bit at
> all, so I'd suggest trying anything else only after you reboot into a
> 64-bit kernel.
Based on what I've read on-list, btrfs is not arch-agnostic, with certain
on-disk sizes set to native kernel page size, etc, so a filesystem created on
one arch may well not work on another.
GEO posted on Thu, 27 Feb 2014 14:10:25 +0100 as excerpted:
> Does anyone have a technical info regarding the reliability of the
> incremental backup process using the said method?
Stepping back from your specific method for a moment...
You're using btrfs send/receive, which I wouldn't exactly c
On Feb 27, 2014, at 11:13 PM, Chris Murphy wrote:
>
> On Feb 27, 2014, at 11:19 AM, Justin Brown wrote:
>
>> terra:/var/lib/nobody/fs/ubfterra # btrfs fi df .
>> Data, single: total=17.58TiB, used=17.57TiB
>> System, DUP: total=8.00MiB, used=1.93MiB
>> System, single: total=4.00MiB, used=0.00
On Feb 27, 2014, at 11:19 AM, Justin Brown wrote:
> terra:/var/lib/nobody/fs/ubfterra # btrfs fi df .
> Data, single: total=17.58TiB, used=17.57TiB
> System, DUP: total=8.00MiB, used=1.93MiB
> System, single: total=4.00MiB, used=0.00
> Metadata, DUP: total=392.00GiB, used=33.50GiB
> Metadata, si
On Feb 27, 2014, at 9:21 PM, Dave Chinner wrote:
>>
>> http://lists.centos.org/pipermail/centos/2011-April/109142.html
>
>
>
> No, he didn't fill it with 16TB of data and then have it fail. He
> made a new filesystem *larger* than 16TB and tried to mount it:
>
> | On a CentOS 32-bit backup s
On Thu, Feb 27, 2014 at 04:07:06PM -0500, Josef Bacik wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 02/27/2014 04:05 PM, Chris Murphy wrote:
> > User reports successfully formatting and using an ~18TB Btrfs
> > volume on hardware raid5 using i686 kernel for over a year, and
> > then suddenly the file system starts behaving weirdly:
On Thu, 27 Feb 2014 12:19:05 -0600
Justin Brown wrote:
> I've an 18 TB hardware RAID 5 (Areca ARC-1170 w/ 8 3 TB drives) in
Do you sleep well at night knowing that if one disk fails, you end up with
basically a RAID0 of 7x3TB disks? And that if a 2nd one encounters an unreadable
sector during the rebuild
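A back-of-the-envelope estimate makes that risk concrete; the 1-per-1e14-bits
URE rate below is an assumed consumer-drive figure, not one from this thread:

#include <stdio.h>

/* Expected unrecoverable read errors while rebuilding from the 7
 * surviving 3 TB drives, assuming a URE rate of 1 per 1e14 bits
 * (an assumption; enterprise drives are often rated 1 per 1e15). */
int main(void)
{
	double bits_read = 7.0 * 3e12 * 8.0;	/* ~1.68e14 bits */
	double ure_per_bit = 1e-14;

	printf("expected UREs during rebuild: %.2f\n",
	       bits_read * ure_per_bit);	/* ~1.7 */
	return 0;
}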
On Thu, Feb 27, 2014 at 05:27:48PM -0700, Chris Murphy wrote:
>
> On Feb 27, 2014, at 5:12 PM, Dave Chinner
> wrote:
>
> > On Thu, Feb 27, 2014 at 02:11:19PM -0700, Chris Murphy wrote:
> >>
> >> On Feb 27, 2014, at 1:49 PM, otakujunct...@gmail.com wrote:
> >>
>>> Yes it's an ancient 32-bit machine
Replace the fs_info->cache_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
Replace the fs_info->submit_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
Use the newly created btrfs_workqueue_struct to replace the original
fs_info->workers
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
---
fs/btrfs/ctree.h | 2 +-
fs/btrfs/d
Replace the fs_info->endio_* workqueues with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
The original btrfs_workers has thresholding functions to dynamically
create or destroy kthreads.
Though there is no such function in the kernel workqueue, because workers
are not created manually, we can still use workqueue_set_max_active
to simulate the behavior, mainly to achieve better HDD performance
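A minimal sketch of how thresholding can be simulated on top of a kernel
workqueue; workqueue_set_max_active() is the real kernel helper, while every
other name and field here is illustrative, not the actual patch code:

#include <linux/workqueue.h>
#include <linux/atomic.h>

struct sketch_workqueue {
	struct workqueue_struct *normal_wq;
	atomic_t pending;	/* works queued but not yet finished */
	int thresh;		/* grow concurrency above this backlog */
	int current_active;	/* what we last told the wq */
	int limit_active;	/* hard cap, like the old thresholding */
};

static void sketch_adjust_active(struct sketch_workqueue *swq)
{
	int pending = atomic_read(&swq->pending);
	int new_active = swq->current_active;

	if (pending > swq->thresh && new_active < swq->limit_active)
		new_active++;
	else if (pending < swq->thresh / 2 && new_active > 1)
		new_active--;

	if (new_active != swq->current_active) {
		swq->current_active = new_active;
		/* real kernel API: caps concurrently running works */
		workqueue_set_max_active(swq->normal_wq, new_active);
	}
}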
Replace the fs_info->fixup_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
Much like fs_info->workers, replace fs_info->delalloc_workers
with the same btrfs_workqueue mechanism.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
---
fs/btrfs/ctree.h | 2
Much like fs_info->workers, replace fs_info->submit_workers
with the same btrfs_workqueue mechanism.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
None
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
---
fs/btrfs/ctree.h | 2 +-
Replace the fs_info->delayed_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
No
Since the "_struct" suffix is mainly used for distinguish the differnt
btrfs_work between the original and the newly created one,
there is no need using the suffix since all btrfs_workers are changed
into btrfs_workqueue.
Also this patch fixed some codes whose code style is changed due to the
too
Since all the btrfs_workers are replaced with the newly created
btrfs_workqueue, the old code can be easily removed.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Reuse the old async-thread.[ch] files.
v3->v4:
- Reuse the old WORK_* bits.
v4->v5:
None
The struct async_sched is not used by any code and can be removed.
Signed-off-by: Qu Wenruo
Reviewed-by: Josef Bacik
Tested-by: David Sterba
---
Changelog:
v1->v2:
None.
v2->v3:
None.
v3->v4:
None.
v4->v5:
None
---
fs/btrfs/volumes.c | 7 ---
1 file changed, 7 deletions(-)
diff -
Replace the fs_info->readahead_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
Replace the fs_info->rmw_workers with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
-
Use the kernel workqueue to implement a new btrfs_workqueue_struct, which
has the same ordered execution feature as the btrfs_worker.
The func is executed concurrently, and the
ordered_func/ordered_free are executed in the sequence they are queued,
after the corresponding func is done.
The new btr
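A minimal sketch of that ordering scheme, assuming each work sits on a list
in submission order; all identifiers are illustrative, not the actual
fs/btrfs/async-thread.c code:

#include <linux/types.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct sketch_work {
	void (*func)(struct sketch_work *w);	     /* runs concurrently */
	void (*ordered_func)(struct sketch_work *w); /* runs in queue order */
	void (*ordered_free)(struct sketch_work *w);
	struct work_struct normal_work;
	struct list_head ordered_list;	/* position in submission order */
	bool done;			/* func has completed */
};

/*
 * Called after any func completes: run ordered_func for the longest
 * prefix of already-finished works, so ordered_funcs always fire in
 * the order the works were queued, even if the funcs finish out of order.
 */
static void sketch_run_ordered(struct list_head *head, spinlock_t *lock)
{
	struct sketch_work *w;

	spin_lock(lock);
	while (!list_empty(head)) {
		w = list_first_entry(head, struct sketch_work, ordered_list);
		if (!w->done)
			break;	/* an earlier func is still running */
		list_del(&w->ordered_list);
		spin_unlock(lock);
		w->ordered_func(w);
		if (w->ordered_free)
			w->ordered_free(w);	/* w must not be touched after */
		spin_lock(lock);
	}
	spin_unlock(lock);
}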
Add a high priority function to btrfs_workqueue.
This is implemented by embedding a btrfs_workqueue into a
btrfs_workqueue and using some helper functions to distinguish the normal
priority wq from the high priority wq.
So the high priority wq is completely independent from the normal
workqueue.
Signed-off-by:
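Roughly, the embedding could look like the sketch below; queue_work() and
struct workqueue_struct are real kernel APIs, the rest of the names are
illustrative:

#include <linux/types.h>
#include <linux/workqueue.h>

struct sketch_internal_wq {
	struct workqueue_struct *wq;
};

struct sketch_btrfs_workqueue {
	struct sketch_internal_wq normal;
	struct sketch_internal_wq high;	/* fully independent high-prio wq */
};

static void sketch_queue(struct sketch_btrfs_workqueue *bwq,
			 struct work_struct *work, bool high_prio)
{
	/* the helper hides which internal wq is used, so callers need
	 * no change whether the work is high priority or not */
	queue_work(high_prio ? bwq->high.wq : bwq->normal.wq, work);
}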
Replace the fs_info->scrub_* with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
None
---
Replace the fs_info->qgroup_rescan_worker with the newly created
btrfs_workqueue.
Signed-off-by: Qu Wenruo
Tested-by: David Sterba
---
Changelog:
v1->v2:
None
v2->v3:
- Use the btrfs_workqueue_struct to replace submit_workers.
v3->v4:
- Use the simplified btrfs_alloc_workqueue API.
v4->v5:
Add a new btrfs_workqueue_struct which uses the kernel workqueue to implement
most of the original btrfs_workers, to replace btrfs_workers.
With this patchset, redundant workqueue code is replaced with the kernel
workqueue infrastructure, which not only reduces the code size but also the
effort to maintain
Hi Marc,
On 02/28/2014 03:06 AM, Marc MERLIN wrote:
This does not happen consistently, but sometimes:
PM: Preparing system for mem sleep
Freezing user space processes ...
(...)
Freezing of tasks failed after 20.002 seconds (1 tasks refusing to freeze,
wq_busy=0):
btrfs D 8801
On Feb 27, 2014, at 5:12 PM, Dave Chinner wrote:
> On Thu, Feb 27, 2014 at 02:11:19PM -0700, Chris Murphy wrote:
>>
>> On Feb 27, 2014, at 1:49 PM, otakujunct...@gmail.com wrote:
>>
>>> Yes it's an ancient 32 bit machine. There must be a complex bug
>>> involved as the system, when originally
On Thu, Feb 27, 2014 at 02:11:19PM -0700, Chris Murphy wrote:
>
> On Feb 27, 2014, at 1:49 PM, otakujunct...@gmail.com wrote:
>
> > Yes it's an ancient 32 bit machine. There must be a complex bug
> > involved as the system, when originally mounted, claimed the
> > correct free space and only as
On Thu, Feb 27, 2014 at 11:06:56AM -0800, Marc MERLIN wrote:
> This does not happen consistently, but sometimes:
>
> PM: Preparing system for mem sleep
> Freezing user space processes ...
> (...)
> Freezing of tasks failed after 20.002 seconds (1 tasks refusing to freeze,
> wq_busy=0):
> btrfs
On Feb 27, 2014, at 2:07 PM, Josef Bacik wrote:
> -----BEGIN PGP SIGNED MESSAGE-----
> Hash: SHA1
>
> On 02/27/2014 04:05 PM, Chris Murphy wrote:
>> User reports successfully formatting and using an ~18TB Btrfs
>> volume on hardware raid5 using i686 kernel for over a year, and
>> then suddenly
On Feb 27, 2014, at 1:49 PM, otakujunct...@gmail.com wrote:
> Yes it's an ancient 32 bit machine. There must be a complex bug involved as
> the system, when originally mounted, claimed the correct free space and only
> as used over time did the discrepancy between used and free grow. I'm afra
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 02/27/2014 04:05 PM, Chris Murphy wrote:
> User reports successfully formatting and using an ~18TB Btrfs
> volume on hardware raid5 using i686 kernel for over a year, and
> then suddenly the file system starts behaving weirdly:
>
> https://urldefen
User reports successfully formatting and using an ~18TB Btrfs volume on
hardware raid5 using i686 kernel for over a year, and then suddenly the file
system starts behaving weirdly:
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg31856.html
I think this is due to the kernel page cache
Yes, it's an ancient 32-bit machine. There must be a complex bug involved, as
the system, when originally mounted, claimed the correct free space, and only
as it was used over time did the discrepancy between used and free grow. I'm
afraid I chose btrfs because it appeared capable of breaking the 16 terabyte
On Feb 27, 2014, at 12:27 PM, Chris Murphy wrote:
> This is on i686?
>
> The kernel page cache is limited to 16TB on i686, so effectively your block
> device is limited to 16TB. While the file system successfully creates, I
> think it's a bug that the mount -t btrfs command is probably a btrfs
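A quick check of the 16TB figure under the usual i686 assumptions (32-bit
page-cache index, 4KiB pages); a standalone sketch, not code from the thread:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t max_pages = 1ULL << 32;  /* pgoff_t is 32 bits on i686 */
	uint64_t page_size = 4096;        /* 4KiB x86 pages */
	uint64_t limit = max_pages * page_size;

	/* 2^32 pages * 2^12 bytes = 2^44 bytes = 16 TiB */
	printf("page cache limit: %llu bytes (%llu TiB)\n",
	       (unsigned long long)limit,
	       (unsigned long long)(limit >> 40));
	return 0;
}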
On Feb 27, 2014, at 11:19 AM, Justin Brown wrote:
> I've an 18 TB hardware RAID 5 (Areca ARC-1170 w/ 8 3 TB drives) in
> need of help. Disk usage (du) shows 13 TB allocated, yet strangely
> enough df shows approx. 780 GB free. It seems, somehow, btrfs
> has eaten roughly 4 TB internally
This does not happen consistently, but sometimes:
PM: Preparing system for mem sleep
Freezing user space processes ...
(...)
Freezing of tasks failed after 20.002 seconds (1 tasks refusing to freeze,
wq_busy=0):
btrfs D 88017639c800 0 12239 12224 0x0084
880165ec196
I've an 18 TB hardware RAID 5 (Areca ARC-1170 w/ 8 3 TB drives) in
need of help. Disk usage (du) shows 13 TB allocated, yet strangely
enough df shows approx. 780 GB free. It seems, somehow, btrfs
has eaten roughly 4 TB internally. I've run a scrub and a balance
usage=5 with no success
Hi,
I can't give you a specific answer to your question. But because btrfs
is still under heavy development, you shouldn't use it with those old
kernels at all, in my opinion. You should never be more than one
version away from the current stable kernel.
Regards,
Felix
On Thu, Feb 27, 2014 at 5:3
On Wed, Feb 26, 2014 at 05:10:05PM +0800, Miao Xie wrote:
> On Sat, 22 Feb 2014 01:23:37 +0100, David Sterba wrote:
> > On Thu, Feb 20, 2014 at 06:08:54PM +0800, Miao Xie wrote:
> >> @@ -1352,13 +1347,15 @@ static struct btrfs_root *alloc_log_tree(struct
> >> btrfs_trans_handle *trans,
> >>roo
I read that usage of a btrfs volume with a newer kernel can render it
unreadable when that same volume is used with an older kernel. I have
a mobile storage device that will be used by different linux
distributions and kernels. What are the kernel version
incompatibilities I might have to worry about
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 02/27/2014 10:38 AM, 钱凯 wrote:
> I'm a little confused about what "avg_delayed_ref_runtime" means.
>
> In __btrfs_run_delayed_refs(), "avg_delayed_ref_runtime" is set to
> the runtime of all delayed refs processed in the current transaction
> commit. Howe
I'm a little confused about what "avg_delayed_ref_runtime" means.
In __btrfs_run_delayed_refs(), "avg_delayed_ref_runtime" is set to the
runtime of all delayed refs processed in the current transaction commit.
However, in btrfs_should_throttle_delayed_refs(), we rely on the
following condition to decide
If we are cycling through all of the mirrors trying to find the best one, we need
to make sure we set best_mirror to an actual mirror number and not 0. Otherwise
we could end up reading a mirror that wasn't the best and make everybody sad.
Thanks,
Signed-off-by: Josef Bacik
---
disk-io.c | 2 +-
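An illustrative sketch of the pattern being fixed, not the actual disk-io.c
change: mirror numbers start at 1, so the "best so far" variable must be
initialized to a real mirror rather than left at 0:

#include <linux/types.h>

static int sketch_pick_best_mirror(int num_mirrors, u64 want_gen,
				   u64 (*read_gen)(int mirror))
{
	int mirror;
	int best_mirror = 1;	/* the bug: leaving this 0 could leak out */
	u64 best_gen = 0;

	for (mirror = 1; mirror <= num_mirrors; mirror++) {
		u64 gen = read_gen(mirror);

		if (gen == want_gen)
			return mirror;		/* exact match wins */
		if (gen > best_gen) {
			best_gen = gen;
			best_mirror = mirror;	/* always a real mirror */
		}
	}
	return best_mirror;
}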
When working with a user who had a broken file system, I noticed that we were
reading a bad copy of a block when the other copy was perfectly fine. This is
because we don't keep track of the parent generation for tree blocks, so we just
read whichever copy we damned well please with no regard for
@Kai, thank you very much for your reply. Sorry, I just saw it now.
I will take care of the mailing issue now, so that it does not happen again
in the future.
Sorry for the inconvenience!
Does anyone have technical info regarding the reliability of the incremental
backup process using the said method?
(Apart from all the recommendations not to do it that way.)
So the question I am interested in: should it work or not?
I did some testing myself and it seemed to work; however, I cannot
Hi,
I am the Arch user who initially reported this problem to the AUR (
https://aur.archlinux.org/packages/linux-mainline/).
2014-02-27 13:43 GMT+01:00 Filipe David Manana :
> On Wed, Feb 26, 2014 at 11:26 PM, WorMzy Tykashi
> wrote:
> > On 29 January 2014 21:06, Filipe David Borba Manana
> w
On Wed, Feb 26, 2014 at 11:26 PM, WorMzy Tykashi
wrote:
> On 29 January 2014 21:06, Filipe David Borba Manana
> wrote:
>> After the commit titled "Btrfs: fix btrfs boot when compiled as built-in",
>> LIBCRC32C requirement was removed from btrfs' Kconfig. This made it not
>> possible to build a k
Currently, to check whether a directory has been created, we search
DIR_INDEX items one by one to check whether its children have been processed.
Try to picture such a scenario:
.
|-- dir(ino X)
|-- foo_1(ino X+1)
|-- ...
|-- foo_k(ino X+k)
With the curre
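A sketch of the one-by-one scan described above, with a hypothetical helper,
to make the per-check cost concrete (O(k) for k children):

#include <linux/types.h>

static bool sketch_all_children_processed(u64 first_child_ino, u64 k,
					  bool (*processed)(u64 ino))
{
	u64 i;

	/* foo_1 (ino X+1) .. foo_k (ino X+k): one lookup per child,
	 * repeated every time the directory is re-checked */
	for (i = 0; i < k; i++)
		if (!processed(first_child_ino + i))
			return false;
	return true;
}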
It is really unnecessary to search the tree again for @gen, @mode and @rdev
in the case of REG inodes' creation, as we've got the btrfs_inode_item in sctx,
and @gen, @mode and @rdev can easily be fetched from it.
Signed-off-by: Liu Bo
---
fs/btrfs/send.c | 19 +++
1 file changed, 15 insertions(+
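A minimal sketch of the idea; btrfs_inode_generation(), btrfs_inode_mode()
and btrfs_inode_rdev() are the existing ctree.h accessors, while the wrapper
itself is illustrative, not the patch's actual structure:

#include "ctree.h"	/* btrfs internal header: inode item accessors */

static void sketch_read_inode_fields(struct extent_buffer *eb,
				     struct btrfs_inode_item *ii,
				     u64 *gen, u64 *mode, u64 *rdev)
{
	/* read straight from the leaf we already hold in sctx,
	 * instead of issuing another tree search */
	*gen  = btrfs_inode_generation(eb, ii);
	*mode = btrfs_inode_mode(eb, ii);
	*rdev = btrfs_inode_rdev(eb, ii);
}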
On Thu, Feb 27, 2014 at 04:01:23PM +0800, Wang Shilong wrote:
> On 02/27/2014 03:47 PM, Liu Bo wrote:
> >Currently, to check whether a directory has been created, we search
> >DIR_INDEX items one by one to check whether its children have been processed.
> >
> >Try to picture such a scenario:
> >.
> >|
On 02/27/2014 03:47 PM, Liu Bo wrote:
Currently, to check whether a directory has been created, we search
DIR_INDEX items one by one to check whether its children have been processed.
Try to picture such a scenario:
.
|-- dir(ino X)
|-- foo_1(ino X+1)
|-- ...