Did more digging today. Here is where the -ENOSPC is coming from:

btrfs_run_delayed_refs ->          // WARN here
__btrfs_run_delayed_refs ->
btrfs_run_delayed_refs_for_head ->
run_one_delayed_ref ->
run_delayed_data_ref ->
__btrfs_inc_extent_ref ->
insert_extent_backref ->
insert_extent_data_ref ->
btrfs_insert_empty_item ->
btrfs_insert_empty_items ->
btrfs_search_slot ->
split_leaf ->
alloc_tree_block_no_bg_flush ->
btrfs_alloc_tree_block ->
use_block_rsv ->
block_rsv_use_bytes / reserve_metadata_bytes

In use_block_rsv, block_rsv_use_bytes fails first (called on the BTRFS_BLOCK_RSV_DELREFS rsv), then reserve_metadata_bytes fails, and finally the fallback block_rsv_use_bytes call on the global_rsv fails as well.
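
To make sure I'm reading it right, here is a compilable toy sketch of that fallback order. The helper names and signatures are my own stand-ins for the kernel calls (named in the comments), and each body just encodes what the corresponding call returned in my trace:

#include <errno.h>
#include <stdio.h>

static int try_delrefs_rsv(unsigned long long bytes)
{
	/* block_rsv_use_bytes() on the BTRFS_BLOCK_RSV_DELREFS rsv:
	 * fails in my trace, as nothing is reserved there */
	(void)bytes;
	return -ENOSPC;
}

static int try_fresh_reservation(unsigned long long bytes)
{
	/* reserve_metadata_bytes() (with no flushing allowed, I believe):
	 * fails in my trace, the metadata space appears exhausted */
	(void)bytes;
	return -ENOSPC;
}

static int try_global_rsv(unsigned long long bytes)
{
	/* block_rsv_use_bytes() on the global_rsv, as a last resort:
	 * also fails in my trace */
	(void)bytes;
	return -ENOSPC;
}

/* Only when all three sources come up empty does -ENOSPC escape to
 * btrfs_alloc_tree_block and from there back up through split_leaf
 * to btrfs_run_delayed_refs, where the WARN fires. */
static int use_block_rsv_sketch(unsigned long long blocksize)
{
	int ret = try_delrefs_rsv(blocksize);

	if (ret)
		ret = try_fresh_reservation(blocksize);
	if (ret)
		ret = try_global_rsv(blocksize);
	return ret;
}

int main(void)
{
	/* 16384 is just the default nodesize, used as a placeholder */
	printf("use_block_rsv -> %d\n", use_block_rsv_sketch(16384));
	return 0;
}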

My understanding of this in plain English is as follows: btrfs attempted to finalize a transaction and insert the queued backreferences. While doing so, it ran out of room in a B-tree leaf and tried to allocate a new tree block; however, that allocation exceeded the amount of space it had reserved for itself for this operation, so it gave up on the whole thing, and everything went downhill from there. Is this anywhere close to being accurate?

BTW, the DELREFS rsv shows 0 reserved / 7 GB free. So it looks like btrfs didn't expect to allocate the new tree block at all? Perhaps it should be using some other rsv for those allocations?
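
If I read block_rsv_use_bytes correctly, only the rsv's own "reserved" counter is consulted at that first step, so whatever that "free" figure refers to wouldn't help there. Roughly (condensed from my reading, with names shortened; not the actual kernel source):

#include <errno.h>

/* the two counters that seem to matter, condensed from struct btrfs_block_rsv */
struct block_rsv_sketch {
	unsigned long long size;     /* how much the rsv is supposed to hold */
	unsigned long long reserved; /* how much it actually holds right now */
};

/* block_rsv_use_bytes(), roughly: hand out bytes only if the rsv itself
 * already holds them; with reserved == 0 this is an immediate -ENOSPC,
 * regardless of free space elsewhere */
static int block_rsv_use_bytes_sketch(struct block_rsv_sketch *rsv,
				      unsigned long long num_bytes)
{
	if (rsv->reserved >= num_bytes) {
		rsv->reserved -= num_bytes;
		return 0;
	}
	return -ENOSPC;
}

int main(void)
{
	/* a rsv in the state I'm seeing: nothing actually reserved */
	struct block_rsv_sketch delrefs = { .reserved = 0 };

	/* 16384 = default nodesize, again just a placeholder */
	return block_rsv_use_bytes_sketch(&delrefs, 16384) ? 1 : 0;
}

So with reserved at 0, the first attempt failing is expected, and the real question seems to be why the reserve_metadata_bytes / global_rsv fallbacks had nothing left either.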

Am I on the right track, or should I be discussing this elsewhere / with someone else?

On 20/07/2019 10.59, Vladimir Panteleev wrote:
Hi,

I've done a few experiments and here are my findings.

First, I should probably describe the filesystem: it is a snapshot archive containing a lot of snapshots of 4 subvolumes, 2487 subvolumes/snapshots in total. There are also a few files (inside the snapshots) that are probably very fragmented. This is probably what causes the bug.

Observations:

- If I delete all snapshots, the bug disappears (device delete succeeds).
- If I delete all but any single subvolume's snapshots, the bug disappears.
- If I delete all the snapshots of either of two particular subvolumes, the bug disappears, but it stays if I delete all the snapshots of either of the other two subvolumes.

It looks like the data in two specific subvolumes' snapshots participates in causing the bug.

In theory, I guess it would be possible to reduce the filesystem to a minimal one that still causes the bug by iteratively deleting snapshots/files and checking whether the bug still manifests, but that would be extremely time-consuming, probably taking weeks.

Anything else I can do to help diagnose / fix it? Or should I just order more HDDs and clone the RAID10 the right way?

On 06/07/2019 05.51, Qu Wenruo wrote:


On 2019/7/6 1:13 PM, Vladimir Panteleev wrote:
[...]
I'm not sure if it's the degraded mount that causes the problem, as the
enospc_debug output looks like reserved/pinned/over-reserved space has
taken up all the space, while no new chunk gets allocated.

The problem happens after replace-ing the missing device (which succeeds
in full) and then attempting to remove it, i.e. without a degraded mount.

Would you please try to balance metadata to see if the ENOSPC still
happens?

The problem also manifests when attempting to rebalance the metadata.

Have you tried to balance just one or two metadata block groups?
E.g. using -mdevid or -mvrange?

And did the problem always happen at the same block group?

Thanks,
Qu

Thanks!




--
Best regards,
 Vladimir
