Re: [PATCH 4/4] btrfs: Fix data checksum error caused by replace with io-load.

2015-07-02 Thread Chris Mason
On Tue, Jun 30, 2015 at 10:26:18AM +0800, Qu Wenruo wrote: > To Chris: > > Would you consider merging this patchset for the late 4.2 merge window? > If it's OK to merge it into a late 4.2 rc, we'll start our test and send a pull request after our test, eta this Friday or next Monday. > > I know normal

Re: [PATCH v2 11/18] btrfs: qgroup: Add new qgroup calculation function btrfs_qgroup_account_extents().

2015-07-02 Thread David Sterba
On Wed, Apr 29, 2015 at 10:29:04AM +0800, Qu Wenruo wrote: > The new btrfs_qgroup_account_extents() function should be called in > btrfs_commit_transaction() and it will update all the qgroups according > to delayed_ref_root->dirty_extent_root. > > The new function can handle both normal operation

Re: Any hope of pool recovery?

2015-07-02 Thread Donald Pearson
Hello, At the bottom of this email are the results of the latest chunk-recover. I only included one example of the output that was printed prior to the summary information but it went up to the end of my screen buffer and beyond. So it looks like the command executed properly when none of the dr
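(For reference, a hedged sketch of the kind of chunk-recover run being described; the device name is a placeholder, and teeing the output avoids losing it to the scrollback buffer:)
    btrfs rescue chunk-recover -v /dev/sdX 2>&1 | tee chunk-recover.log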

strange corruptions found during btrfs check

2015-07-02 Thread Christoph Anton Mitterer
Hi. This is on a btrfs created and used with a 4.0 kernel. Not much was done on it, apart from send/receive snapshots from another btrfs (with -p). Some of the older snapshots (that were used as parents before) have been removed in the meantime. Now a btrfs check gives this: # btrfs check /dev/
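(A hedged sketch of how such a check is usually captured for a report; the device name is a placeholder, and btrfs check stays read-only unless --repair is passed explicitly:)
    btrfs check /dev/sdX 2>&1 | tee btrfs-check.log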

Re: Any hope of pool recovery?

2015-07-02 Thread Chris Murphy
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson wrote: > Which is curious because this is device id 2, where previously the > complaint was about device id 1. So can I believe dmesg about which > drive is actually the issue or is the drive that's printed in dmesg > just whichever drive happens to
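(One generic way to tie a kernel device name back to a physical drive, sketched with placeholder names and not taken from the reply itself:)
    ls -l /dev/disk/by-id/     # maps model+serial names to the current sdX nodes
    smartctl -i /dev/sdX       # confirm model, serial and SMART status of the suspect drive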

Re: Any hope of pool recovery?

2015-07-02 Thread Chris Murphy
On Thu, Jul 2, 2015 at 8:49 AM, Donald Pearson wrote: > I do see plenty of complaints about the sdg drive (previously sde) in > /var/log/messages from the 28th which is when I started noticing > issues. Nothing is jumping out at me claiming the btrfs is taking > action but I may not know what to

possible enhancement: failing device converted to a seed device

2015-07-02 Thread Kyle Gates
I'll preface this with the fact that I'm just a user and am only posing a question for a possible enhancement to btrfs. I'm quite sure it isn't currently allowed, but would it be possible to set a failing device as a seed instead of kicking it out of a multi-device filesystem? This would make t
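(For context, a sketch of how the existing seed mechanism works today, with placeholder device names; the enhancement asked about would in effect apply this to a device that is already part of a mounted, failing array, whereas btrfstune can only set the flag on an unmounted filesystem:)
    btrfstune -S 1 /dev/sdX            # mark the filesystem on sdX as a seed
    mount /dev/sdX /mnt                # a seed device mounts read-only
    btrfs device add /dev/sdY /mnt     # sprout a writable filesystem on top of it
    mount -o remount,rw /mnt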

Re: Any hope of pool recovery?

2015-07-02 Thread Donald Pearson
Unfortunately btrfs image fails with "couldn't read chunk tree". btrfs restore complains that every device is missing except the one that you specify when executing the command. Multiple devices as a parameter isn't an option. Specifying /dev/disk/by-uuid/ claims that all devices are missing. I wen
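(A hedged sketch of the invocations being referred to, with placeholder paths; neither tool accepts more than one device on the command line, the remaining members are expected to be found by scanning:)
    btrfs-image -c9 -t4 /dev/sdX /tmp/fs-metadata.img   # metadata dump; fails early if the chunk tree is unreadable
    btrfs restore -v /dev/sdX /mnt/recovery             # copy files out without mounting the filesystem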

Re: Any hope of pool recovery?

2015-07-02 Thread Chris Murphy
On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson wrote: > Unfortunately btrfs image fails with "couldn't read chunk tree". > > btrfs restore complains that every device is missing except the one > that you specify on executing the command. Multiple devices as a > parameter isn't an option. Specif

Re: Any hope of pool recovery?

2015-07-02 Thread Donald Pearson
I think it is. I have another raid5 pool that I've created to test the restore function on, and it worked. On Thu, Jul 2, 2015 at 1:26 PM, Chris Murphy wrote: > On Thu, Jul 2, 2015 at 12:19 PM, Donald Pearson > wrote: >> Unfortunately btrfs image fails with "couldn't read chunk tree". >> >> btr

Re: Any hope of pool recovery?

2015-07-02 Thread Chris Murphy
On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson wrote: > I think it is. I have another raid5 pool that I've created to test > the restore function on, and it worked. So you have all devices for this raid6 available, and yet when you use restore, you get missing device message for all devices exc

Re: Any hope of pool recovery?

2015-07-02 Thread Donald Pearson
That is correct. I'm going to rebalance my raid5 pool as raid6 and re-test just because. On Thu, Jul 2, 2015 at 1:37 PM, Chris Murphy wrote: > On Thu, Jul 2, 2015 at 12:32 PM, Donald Pearson > wrote: >> I think it is. I have another raid5 pool that I've created to test >> the restore function
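(The conversion described here is normally done with balance convert filters; a sketch with a placeholder mount point:)
    btrfs balance start -dconvert=raid6 -mconvert=raid6 /mnt/testpool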

Re: Any hope of pool recovery?

2015-07-02 Thread Donald Pearson
Yes, it works with raid6 as well. [root@san01 btrfs-progs]# ./btrfs fi show Label: 'rockstor_rockstor' uuid: 08d14b6f-18df-4b1b-a91e-4b33e7c90c29 Total devices 1 FS bytes used 19.25GiB devid 1 size 457.40GiB used 457.40GiB path /dev/sdt3 warning, device 4 is missing warning, de

Re: [PATCH 4/4] btrfs: Fix data checksum error caused by replace with io-load.

2015-07-02 Thread Qu Wenruo
Chris Mason wrote on 2015/07/02 08:42 -0400: On Tue, Jun 30, 2015 at 10:26:18AM +0800, Qu Wenruo wrote: To Chris: Would you consider merging this patchset for the late 4.2 merge window? If it's OK to merge it into a late 4.2 rc, we'll start our test and send a pull request after our test, eta this F

Re: [PATCH v2 11/18] btrfs: qgroup: Add new qgroup calculation function btrfs_qgroup_account_extents().

2015-07-02 Thread Qu Wenruo
David Sterba wrote on 2015/07/02 16:43 +0200: On Wed, Apr 29, 2015 at 10:29:04AM +0800, Qu Wenruo wrote: The new btrfs_qgroup_account_extents() function should be called in btrfs_commit_transaction() and it will update all the qgroups according to delayed_ref_root->dirty_extent_root. The new f

[PATCH] btrfs: remove empty header file extent-tree.h

2015-07-02 Thread Qu Wenruo
The empty file was introduced by a careless 'git add'; remove it. Reported-by: David Sterba Signed-off-by: Qu Wenruo --- fs/btrfs/extent-tree.h | 0 1 file changed, 0 insertions(+), 0 deletions(-) delete mode 100644 fs/btrfs/extent-tree.h diff --git a/fs/btrfs/extent-tree.h b/fs/btrfs/extent-

btrfs full, but not full, can't rebalance

2015-07-02 Thread Rich Rauenzahn
Running on CentOS7 ... / got full; I removed the files, but it still thinks it is full. I've tried following the FAQ, even adding a loopback device during the rebalance. # btrfs fi show / Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8-86179d1f3e3a Total devices 2 FS bytes used 24.27GiB
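(The usual first step is to check whether data or metadata chunks are exhausted; a sketch, output omitted, and btrfs fi usage needs a reasonably recent btrfs-progs:)
    btrfs fi show /
    btrfs fi df /
    btrfs fi usage /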

Re: btrfs full, but not full, can't rebalance

2015-07-02 Thread Donald Pearson
Have you seen this article? I think the interesting part for you is the "balance cannot run because the filesystem is full" heading. http://marc.merlins.org/perso/btrfs/post_2014-05-04_Fixing-Btrfs-Filesystem-Full-Problems.html On Fri, Jul 3, 2015 at 12:32 AM, Rich Rauenzahn wrote: > Running on
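(The core trick from that article is balancing with a usage filter so only empty or nearly-empty chunks have to be rewritten; a sketch, the percentages are only illustrative:)
    btrfs balance start -dusage=0 /    # reclaim completely empty data chunks first
    btrfs balance start -dusage=5 /    # then nearly-empty ones, raising the value as space frees up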

Re: btrfs full, but not full, can't rebalance

2015-07-02 Thread Rich Rauenzahn
Yes, I tried that -- and adding the loopback device. # btrfs device add /dev/loop1 / Performing full device TRIM (5.00GiB) ... # btrfs fi show / Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8-86179d1f3e3a Total devices 3 FS bytes used 17.13GiB devid 1 size 111.11GiB used 111.1

Re: btrfs full, but not full, can't rebalance

2015-07-02 Thread Donald Pearson
Because this is raid1, I believe you need another device for that to work. On Fri, Jul 3, 2015 at 12:57 AM, Rich Rauenzahn wrote: > Yes, I tried that -- and adding the loopback device. > > # btrfs device add /dev/loop1 / > Performing full device TRIM (5.00GiB) ... > > # btrfs fi show / > Label: 'centos7'
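(Since raid1 chunk allocation needs unallocated space on two devices at once, the workaround usually means adding two temporary devices and removing them after the balance; a sketch with hypothetical paths:)
    truncate -s 4G /var/tmp/spill1.img /var/tmp/spill2.img
    losetup /dev/loop1 /var/tmp/spill1.img
    losetup /dev/loop2 /var/tmp/spill2.img
    btrfs device add /dev/loop1 /dev/loop2 /
    btrfs balance start -dusage=10 /
    btrfs device delete /dev/loop1 /dev/loop2 /
    losetup -d /dev/loop1 /dev/loop2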

Re: btrfs full, but not full, can't rebalance

2015-07-02 Thread Rich Rauenzahn
Yes -- I just figured that out as well! Now why did it suddenly fill up? (I still get the failure rebalancing ...) # btrfs fi show / Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8-86179d1f3e3a Total devices 4 FS bytes used 17.12GiB devid 1 size 111.11GiB used 111.05GiB path

Re: btrfs full, but not full, can't rebalance

2015-07-02 Thread Donald Pearson
What does btrfs fi df or btrfs fi usage show now? On Fri, Jul 3, 2015 at 1:03 AM, Rich Rauenzahn wrote: > Yes -- I just figured that out as well! > > Now why did it suddenly fill up? (I still get the failure rebalancing ...) > > # btrfs fi show / > Label: 'centos7' uuid: 35f0ce3f-0902-47a3-8ad8

linux 4.1 - memory leak (possibly dedup related)

2015-07-02 Thread Marcel Ritter
Hi, I've been running some btrfs tests (mainly duperemove related) with linux kernel 4.1 for the last few days. Now I noticed by accident (dying processes) that all my memory (128 GB!) is gone. "Gone" meaning there's no user-space process allocating this memory. Digging deeper I found the miss
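(A sketch of the usual places to look when memory is "gone" but no process holds it, i.e. whether it is sitting in kernel slab caches; the commands are generic and not taken from the report:)
    grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo
    slabtop -o | head -n 25             # one-shot list of the largest slab caches
    echo 2 > /proc/sys/vm/drop_caches   # drop reclaimable dentry/inode slabs as a test (needs root)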