At 02/04/2017 09:47 AM, Jorg Bornschein wrote:
February 4, 2017 1:07 AM, "Goldwyn Rodrigues" <rgold...@suse.de> wrote:
On 02/03/2017 06:30 PM, Jorg Bornschein wrote:
February 3, 2017 11:26 PM, "Goldwyn Rodrigues" <rgold...@suse.com> wrote:
Hi,
I'm currently running a balance (without any filters) on a four-drive raid1
filesystem. The array contains three 3TB drives and one 6TB drive; I'm running
the rebalance because the 6TB drive recently replaced a 2TB drive.
I know that balance is not supposed to be a fast operation, but this one has
now been running for ~6 days and has managed to balance ~18% (754 out of about
4250 chunks balanced (755 considered), 82% left) -- so I expect it to take
another ~4 weeks. That seems excessively slow for ~8TiB of data.
Is this expected behavior? If not, is there anything I can do to help debug it?
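In case it's useful for anyone trying to reproduce this, progress and
per-device usage can be watched while the balance runs (the mount point below
is just a placeholder for the actual one):
To check balance progress:
# btrfs balance status <mountpoint>
To check per-device allocation:
# btrfs filesystem usage <mountpoint>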
Do you have quotas enabled?
I might have activated it when playing with "snapper" -- I remember using some
quota command without knowing what it did.
How can I check whether it's active? Shall I just disable it with "btrfs quota
disable"?
To check your quota limits:
# btrfs qgroup show <mountpoint>
To disable
# btrfs quota disable <mountpoint>
Yes, please check if disabling quotas makes a difference in execution
time of btrfs balance.
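The rescan state may also be worth a look, since an unfinished or long-running
rescan would suggest quota accounting is doing a lot of extra work (mount point
is a placeholder):
# btrfs quota rescan -s <mountpoint>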
Quota support was indeed active -- and it warned me that the qgroup data was
inconsistent.
Disabling quotas had an immediate impact on balance throughput -- it's *much*
faster now!
From a quick glance at iostat I would guess it's at least a factor of 100 faster.
Should quota support generally be disabled during balances? Or did I somehow
push my fs into a weird state where it triggered a slow path?
Thanks!
j
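If I want quotas back after the balance finishes, I assume re-enabling them and
waiting for a rescan should rebuild consistent numbers (mount point is a
placeholder):
# btrfs quota enable <mountpoint>
# btrfs quota rescan -w <mountpoint>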
Would you please provide the kernel version?
v4.9 introduced a bad fix for qgroup balance, which not only fails to
completely fix qgroup byte leaking but also hugely slows down the balance
process:
commit 62b99540a1d91e46422f0e04de50fc723812c421
Author: Qu Wenruo <quwen...@cn.fujitsu.com>
Date: Mon Aug 15 10:36:51 2016 +0800
btrfs: relocation: Fix leaking qgroups numbers on data extents
Sorry for that.
In v4.10, a better method was applied to fix the byte-leaking problem; it
should also be a little faster than the previous one.
commit 824d8dff8846533c9f1f9b1eabb0c03959e989ca
Author: Qu Wenruo <quwen...@cn.fujitsu.com>
Date: Tue Oct 18 09:31:29 2016 +0800
btrfs: qgroup: Fix qgroup data leaking by using subtree tracing
However, balance with qgroups enabled is still slower than balance without
them; the real fix requires reworking the current backref iteration.
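If it helps, one rough way to check whether a kernel already contains the
v4.10 fix (assuming a local kernel git checkout) is to ask git which release
tags contain the commit, and compare against the running kernel:
To list tags containing the fix (run inside a kernel source tree):
# git tag --contains 824d8dff8846533c9f1f9b1eabb0c03959e989ca
To show the running kernel version:
# uname -r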
Thanks,
Qu