Nice patch. However, it's better if we do this in the btrfs kernel
function btrfs_scan_one_device(), since a non-canonicalized path
can still sneak through the btrfs-specific mount option device=.
Any comments?
My initial reaction is to avoid playing naming games within the
kernel. But since
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit
On Wed, Sep 24, 2014 at 12:16 PM, Miao Xie mi...@cn.fujitsu.com wrote:
On Wed, 24 Sep 2014 11:28:26 +0100, Filipe Manana wrote:
[SNIP]
int btrfs_wait_marked_extents(struct btrfs_root *root,
+ struct btrfs_trans_handle *trans,
struct
Simone Ferretti posted on Tue, 23 Sep 2014 14:06:41 +0200 as excerpted:
we're testing BTRFS on our Debian server. After a lot of operations
simulating a RAID1 failure, every time I mount my BTRFS RAID1 volume the
kernel logs these messages:
[73894.436173] BTRFS: bdev /dev/etherd/e30.20
GEO posted on Tue, 23 Sep 2014 14:58:06 +0200 as excerpted:
Is that supposed to be that way? Why is readonly not enough to import
data using btrfs send?
This is a known issue. The subvolume itself needs to be set read-only,
and of course that can't be done when the whole filesystem is set
Wed, Sep 24, 2014 at 01:23:32PM +, Duncan wrote:
Simone Ferretti posted on Tue, 23 Sep 2014 14:06:41 +0200 as excerpted:
we're testing BTRFS on our Debian server. After a lot of operations
simulating a RAID1 failure, every time I mount my BTRFS RAID1 volume the
kernel logs these
Hello all seeing something odd with btrfs and lvm thin-provisioning snapshots.
Sent my finding to the lvm list and thought I might post here after some
feedback which I have included. Apologies in advance if this is bad form.
Please let me know. Hopefully the trail below is not too confusing.
I ran 'btrfs check --repair --init-extent-tree' and appear to be in an
infinite loop. It performed heavy IO for about 1.5 hours then the IO
stopped and the CPU stayed at 100%. It's been like that for more than 12
hours now.
I made a hardware change last week that resulted in unstable RAM so I
On Wed, Sep 24, 2014 at 02:34:45PM +, Robb Walker wrote:
Hello all seeing something odd with btrfs and lvm thin-provisioning
snapshots. Sent my finding to the lvm list and thought I might post
here after some feedback which I have included. Apologies in advance
if this is bad form. Please
I noticed the following:
(gdb) print nrscan
$19 = 1680726970
(gdb) print tree->cache_size
$20 = 1073741824
(gdb) print cache_hard_max
$21 = 1073741824
It appears that cache_size cannot shrink below cache_hard_max, so we
never end up breaking out of the loop. The FS in question is 30TB with
~26TB
Thanks very much! btrfs OR lvm thin prov gets me where I want to go, so I can
live with them being mutually exclusive. Though yes, I would have used them
both if I could. ;)
Thanks for the clear explanation; it also matches what the lvm
developers were thinking.
It
On Sun, 21 Sep 2014 11:05:46 Chris Murphy wrote:
On Sep 20, 2014, at 7:39 PM, Russell Coker russ...@coker.com.au wrote:
Anyway the new drive turned out to have some errors, writes failed and
I've
got a heap of errors such as the above.
I'm curious if smartctl -t conveyance reveals any
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit
bi_sector and bi_size moved to bi_iter since commit 4f024f3797c4
("block: Abstract out bvec iterator")
Signed-off-by: Fabian Frederick f...@skynet.be
---
fs/btrfs/volumes.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index
Any idea how to recover? I can't cut-paste but it's
Total devices 1 FS bytes used 176.22GiB
size 233.59GiB used 233.59GiB
Basically it's been data allocation happy, since I haven't deleted
53GB at any point. Unfortunately, none of the chunks are at 0% usage
so a balance -dusage=0 finds nothing
On Fri, 2014-09-19 at 13:10 -0500, Jeb Thomson wrote:
With the advanced features of btrfs, it would be a simple additional
task to make different platters run in parallel.
In this case, say a disk has three platters, and so three seek heads
as well. If we can identify that much, and what
While we have a transaction ongoing, the VM might decide at any time
to call btree_inode->i_mapping->a_ops->writepages(), which will start
writeback of dirty pages belonging to btree nodes/leafs. This call
might return an error or the writeback might finish with an error
before we attempt to commit
On Wed, 24 Sep 2014 16:43:43 -0400, Dan Merillat wrote:
Any idea how to recover? I can't cut-paste but it's
Total devices 1 FS bytes used 176.22GiB
size 233.59GiB used 233.59GiB
The notorious -EBLOAT. But don't despair just yet.
Basically it's been data allocation happy, since I haven't
Simone Ferretti posted on Wed, 24 Sep 2014 16:28:35 +0200 as excerpted:
Wed, Sep 24, 2014 at 01:23:32PM +, Duncan wrote:
Simone Ferretti posted on Tue, 23 Sep 2014 14:06:41 +0200 as excerpted:
we're testing BTRFS on our Debian server. After a lot of operations
simulating a RAID1
Btrfs developers,
Can someone who knows how this is all supposed to work pass an eye
over this patch series to determine the validity of what is being
tested? Given the number of problems this patchset seems to expose,
it looks pretty important to me to get this into all of your regular
testing.
Hi David,
(2014/09/22 21:01), David Sterba wrote:
On Fri, Sep 19, 2014 at 05:52:17PM +0900, Satoru Takeuchi wrote:
@@ -99,7 +99,7 @@ find_prop_handler(const char *name,
return NULL;
}
-static int __btrfs_set_prop(struct btrfs_trans_handle *trans,
+int __btrfs_set_prop(struct
From: Naohiro Aota na...@elisp.net
Fix the following two problems in the compression-related ioctl() code.
a) Updating the compression flags and updating the inode attribute
happen in two separate transactions. So, if something bad happens
after the former, and before the latter, the file system
would