Hello,
You have mentioned two issues when balance and fi show are running
concurrently.
My mail was a bit chaotic, but I get the stalls even on an idle system.
Today I got
parent transid verify failed on 1559973888000 wanted 1819 found 1821
parent transid verify failed on 1559973888000 wanted
I'm sorry, but this patch is not needed, since inline extents will not go
into this routine, so there is no overflow.
Please ignore the patch,
Thanks,
Qu
Original Message
Subject: [PATCH 1/2] btrfs: Add more check before read_extent_buffer()
to avoid read overflow.
From: Qu Wenruo
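For context, the withdrawn patch's idea was to verify that the requested region lies inside the extent buffer before copying. A minimal userspace sketch of such a guard (the function name and types are illustrative, not the actual btrfs code):

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * Hypothetical sketch of the kind of bounds check the retracted patch
 * proposed: refuse a read whose region does not fit inside the buffer.
 */
static bool eb_read_in_bounds(size_t eb_len, size_t start, size_t len)
{
	/* start + len could wrap around, so compare without adding */
	if (start > eb_len)
		return false;
	return len <= eb_len - start;
}
```

The comparison is written subtraction-first so that a huge `start` or `len` cannot overflow `size_t` and slip past the check.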
Also it would be nice to have checksums on the swap data. It's a bit of a waste
to pay for ECC RAM and then lose the ECC benefits as soon as data is paged out.
Also a device replace operation requires that the replacement be the same size
(or maybe larger). While a remove and replace allows the replacement to be
merely large enough to contain all the data. Given the size variation in what
might be called the same size disk by manufacturers, this isn't
Some of the disks on my system were missing and I was able to hit
this issue.
Check tree block failed, want=12582912, have=0
read block failed check_tree_block
Couldn't read chunk root
warning devid 2 not found already
Check tree block failed, want=143360, have=0
read block
On Wed, 15 Oct 2014 17:19:59 -0400, Josef Bacik wrote:
In one of Dave's cleanup commits he forgot to call btrfs_end_io_wq_exit on
unload, which makes us unable to unload and then re-load the btrfs module.
This
fixes the problem. Thanks,
Signed-off-by: Josef Bacik jba...@fb.com
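The fix restores the init/exit symmetry: what module init sets up, module exit must tear down, or the module cannot be loaded a second time. A minimal userspace analogue of that symmetry (malloc stands in for the kernel cache, and the duplicate-init failure is simulated here; the function names only mirror the kernel's):

```c
#include <stdlib.h>

/*
 * Userspace analogue: init fails if the cache already exists (as a
 * second creation of the same kernel cache would), so a module exit
 * path that forgets the matching _exit() call blocks any re-init.
 */
static void *end_io_wq_cache;

static int end_io_wq_init(void)
{
	if (end_io_wq_cache)		/* "already registered" */
		return -1;
	end_io_wq_cache = malloc(64);
	return end_io_wq_cache ? 0 : -1;
}

static void end_io_wq_exit(void)
{
	free(end_io_wq_cache);
	end_io_wq_cache = NULL;		/* allow a later re-init */
}
```

Calling `end_io_wq_init()` twice without an intervening `end_io_wq_exit()` fails, which is exactly the unload/reload symptom the patch addresses.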
Robert White posted on Wed, 22 Oct 2014 22:18:09 -0700 as excerpted:
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the
moment it's falling short. As it's easy to add large drives and get
very large multiple device volumes, the
On Thu, 18 Sep 2014 11:27:17 -0400, Josef Bacik wrote:
Trying to reproduce a log enospc bug I hit a panic in the async reclaim code
during log replay. This is because we use fs_info->fs_root as our root for
shrinking and such. Technically we can use whatever root we want, but let's
just not
Russell Coker posted on Thu, 23 Oct 2014 18:39:52 +1100 as excerpted:
Also a device replace operation requires that the replacement be the
same size (or maybe larger). While a remove and replace allows the
replacement to be merely large enough to contain all the data. Given the
size variation
On Thu, 2014-10-23 at 16:13 +0800, Anand Jain wrote:
Some of the disks on my system were missing and I was able to hit
this issue.
Check tree block failed, want=12582912, have=0
read block failed check_tree_block
Couldn't read chunk root
warning devid 2 not found
On Wed, 22 Oct 2014 14:40:47 +0200, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th device
to my FS, it took 4 days to move 25GBs of data.
It's
On 2014-10-22 16:08, Robert White wrote:
So the documentation is clear that you can't mount a swap file through
BTRFS (unless you use a loop device).
Why is a NOCOW file that has been fully pre-allocated -- as with
fallocate(1) -- not suitable for swapping?
I found one reference to an
On 2014-10-23 05:19, Miao Xie wrote:
On Wed, 22 Oct 2014 14:40:47 +0200, Piotr Pawłow wrote:
On 22.10.2014 03:43, Chris Murphy wrote:
On Oct 21, 2014, at 4:14 PM, Piotr Pawłow p...@siedziba.pl wrote:
Looks normal to me. Last time I started a balance after adding 6th device to my
FS, it took
btrfs_scan_lblkid() is called by most of the device-related command functions.
btrfs_scan_lblkid() is an expensive function, and it becomes more expensive
as the number of devices in the system increases. Further, some threads call this
function more than once for absolutely no extra benefit, and the
My stap function-profiling script was wrong: I got the number of
times the scan_lblkid function is called per thread wrong; it has
now been corrected as below. Yet calling the system-wide device
scan more than once per thread does not make any sense. There
are quite a number of threads like that, as below.
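One possible sketch of avoiding the repeated scan: run the expensive system-wide walk at most once per process via pthread_once(). The names `scan_devices_once()` and `do_scan()` are mine, not btrfs-progs functions:

```c
#include <pthread.h>

/* Run the expensive device scan at most once, no matter how many
 * threads or call sites request it. */
static pthread_once_t scan_once = PTHREAD_ONCE_INIT;
static int scan_count;			/* how often the scan really ran */

static void do_scan(void)
{
	/* the expensive walk over every block device would go here */
	scan_count++;
}

static void scan_devices_once(void)
{
	pthread_once(&scan_once, do_scan);
}
```

Every caller invokes `scan_devices_once()`, but `do_scan()` executes exactly once, even under concurrent callers.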
On Wed, Oct 22, 2014 at 10:18:09PM -0700, Robert White wrote:
On 10/22/2014 09:30 PM, Chris Murphy wrote:
Sure. So if Btrfs is meant to address scalability, then perhaps at the
moment it's falling short. As it's easy to add large drives and get very
large multiple device volumes, the
Hello Gui,
Oh, it seems that there are btrfs filesystems with missing devs that are
bringing trouble to the @open_ctree_... function.
What do you mean by missing devs? I have no degraded fs.
The time btrfs fi sh spends scanning disks of a filesystem seems to
be proportional to the amount of data
There is no point in re-creating so much of the btrfs kernel's logic in user
space; it's just unnecessary when the kernel is already doing it. Use
some interface to get the info from the kernel after the device is registered
(not necessarily mounted), so progs can be as sleek as possible.
To me it started as
On Thu, Oct 23, 2014 at 02:41:47AM +, Duncan wrote:
Dave posted on Wed, 22 Oct 2014 08:49:46 -0400 as excerpted:
On Tue, Oct 21, 2014 at 10:08 PM, Duncan 1i5t5.dun...@cox.net wrote:
As for the mounted filesystem question, since all it does is flip a
switch so that new metadata writes
Hello,
I was under the impression that the 'Transaction commit:' setting in 'btrfs sub
del' finally allows us to make it not return until all the free space from the
snapshots that are being deleted is completely freed up.
However, that does not seem to be the case at all; deleting 14 snapshots with a
On 22 October 2014 04:08, Duncan 1i5t5.dun...@cox.net wrote:
Since the kernel has code for both fat metadata and skinny-metadata,
they can exist side-by-side and the kernel will use whichever code is
appropriate.
I understand that the fat extent code will probably never be removed
for
On 23.10.2014 16:24, Roman Mamedov wrote:
I was under the impression that the 'Transaction commit:' setting in 'btrfs sub
del' finally allows us to make it not return until all the free space from the
snapshots that are being deleted is completely freed up.
This is not what commit-each or commit-after
On Thu, 23 Oct 2014 17:44:46 +0200
Piotr Pawłow p...@siedziba.pl wrote:
On 23.10.2014 16:24, Roman Mamedov wrote:
I was under the impression that the 'Transaction commit:' setting in 'btrfs sub
del' finally allows us to make it not return until all the free space from the
snapshots that are being
I'll ask again...
Is there any reason it would be Bad™ to allow a snapshot subvolume to be
promoted to a non-snapshot subvolume?
I know that there is precious little difference between the two. But
there _is_ a difference once you start trying to automate system
maintenance.
What I want
Hello,
First, I'd like to thank you for this interesting discussion
and for pointing out efficient snapshotting strategies.
My 5k snapshots actually come from 4 subvolumes. I create 8 snapshots
per hour because I actually create both a read-only and a writable
snapshot for each of my volumes. Yeah
I've got several questions about mount features that I've been unable to
find definitive answers for.
ITEM: So there are some mount options that I'd like to be able to pin
onto a media like compress=lzo on a thumb drive I expect to get crowded.
Is there a feature equivalent to the -o option
I have a 240GB VirtualBox vdi image that is showing heavy fragmentation
(filefrag). The file was created in a dir that was chattr +C'd, the file
was created via fallocate and the contents of the original image were
copied into the file via dd. I verified that the image was +C.
After initial
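To double-check the +C attribute from code rather than lsattr, the standard FS_IOC_GETFLAGS ioctl can be used. This helper is my own sketch, not part of btrfs-progs; it returns whether FS_NOCOW_FL is set:

```c
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

/*
 * Returns 1 if the file carries the NOCOW (+C) attribute, 0 if not,
 * and -1 on error (e.g. missing file, or a filesystem that does not
 * support the flags ioctl).  Linux-only.
 */
static int file_is_nocow(const char *path)
{
	int fd, flags, ret = -1;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;
	if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0)
		ret = (flags & FS_NOCOW_FL) ? 1 : 0;
	close(fd);
	return ret;
}
```

Note that on btrfs the attribute only prevents CoW reliably when set on the empty file (or inherited from the directory) before any data is written, which is why the chattr-the-directory-first procedure above matters.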
I attempted to run btrfs check --repair, but it got stuck spinning
in what appeared to be an infinite loop. strace and ltrace revealed
nothing, and gdb wasn't particularly helpful, so I rebuilt btrfs with
debug symbols and tried again.
Now I get this from btrfs check:
Couldn't map the
Is this related to your 5k snapshot drive and your attempt to go back
kernel revs from 3.17.0 etc?
I see that you are using 3.17.1 kernel. Are you also up to the 3.17
version of the btrfs tools?
You may be in deep error land from the long use of 3.10... that said,
the --init-csum-tree or
Austin S Hemmelgarn posted on Thu, 23 Oct 2014 07:39:28 -0400 as
excerpted:
On 2014-10-23 05:19, Miao Xie wrote:
Now my colleague and I are implementing scrub/replace for RAID5/6,
and I have a plan to reimplement the balance and split it off from the
metadata/file-data process. The main
On Thu, Oct 23, 2014 at 05:28:58PM -0700, Robert White wrote:
Is this related to your 5k snapshot drive and your attempt to go
back kernel revs from 3.17.0 etc?
This filesystem has four subvolumes: a mostly empty root subvolume,
one containing ~13TB of data, and two read-write snapshot
Tobias Geerinckx-Rice posted on Thu, 23 Oct 2014 16:47:19 +0200 as
excerpted:
On 22 October 2014 04:08, Duncan 1i5t5.dun...@cox.net wrote:
Since the kernel has code for both fat metadata and skinny-metadata,
they can exist side-by-side and the kernel will use whichever code is
appropriate.
On Thu, Oct 23, 2014 at 09:24:48PM -0400, Zygo Blaxell wrote:
On Thu, Oct 23, 2014 at 05:28:58PM -0700, Robert White wrote:
You may be in deep error land from the long use of 3.10... that
said, the --init-csum-tree or --init-extent-tree options may be your
friend here. The backtrace shows
Robert White posted on Thu, 23 Oct 2014 15:27:06 -0700 as excerpted:
I've got several questions about mount features that I've been unable to
find definitive answers for.
ITEM: So there are some mount options that I'd like to be able to pin
onto a media like compress=lzo on a thumb drive I
On Fri, Oct 24, 2014 at 01:05:39AM +, Duncan wrote:
Austin S Hemmelgarn posted on Thu, 23 Oct 2014 07:39:28 -0400 as
excerpted:
On 2014-10-23 05:19, Miao Xie wrote:
Now my colleague and I are implementing scrub/replace for RAID5/6,
and I have a plan to reimplement the balance and
I just stumbled across this bug a few hours ago.
It's still in btrfs-progs 3.17.
On Mon, Sep 29, 2014 at 11:20:06AM +0800, Qu Wenruo wrote:
Ping.
No response?
Thanks,
Qu
Original Message
Subject: Re: [bug] btrfs check --subvol-extents segfault
From: Eric Sandeen
On 10/23/2014 07:25 PM, Duncan wrote:
See the discussion above. As for whether conflicting options error out,
get ignored, or update the whole filesystem, there has been some
discussion on the list, but I don't recall the conclusion, as it doesn't
pertain to me since I don't use subvolumes like that,