On 21.11.18 г. 20:59 ч., Josef Bacik wrote:
> From: Josef Bacik <jba...@fb.com>
>
> We use this number to figure out how many delayed refs to run, but
> __btrfs_run_delayed_refs really only checks every time we need a new
> delayed ref head, so we always run at least one ref head completely no
> matter what the number of items on it. Fix the accounting to only be
> adjusted when we add/remove a ref head.
LGTM:
Reviewed-by: Nikolay Borisov <nbori...@suse.com>

However, what if we kill delayed_ref_updates entirely, since the name is a
bit ambiguous, and instead migrate num_heads_ready from delayed_refs to
trans and use that? Otherwise, as stated previously, num_heads_ready is
currently unused and could be removed.
>
> Signed-off-by: Josef Bacik <jba...@fb.com>
> ---
> fs/btrfs/delayed-ref.c | 3 ---
> 1 file changed, 3 deletions(-)
>
> diff --git a/fs/btrfs/delayed-ref.c b/fs/btrfs/delayed-ref.c
> index b3e4c9fcb664..48725fa757a3 100644
> --- a/fs/btrfs/delayed-ref.c
> +++ b/fs/btrfs/delayed-ref.c
> @@ -251,8 +251,6 @@ static inline void drop_delayed_ref(struct btrfs_trans_handle *trans,
> ref->in_tree = 0;
> btrfs_put_delayed_ref(ref);
> atomic_dec(&delayed_refs->num_entries);
> - if (trans->delayed_ref_updates)
> - trans->delayed_ref_updates--;
> }
>
> static bool merge_ref(struct btrfs_trans_handle *trans,
> @@ -467,7 +465,6 @@ static int insert_delayed_ref(struct btrfs_trans_handle *trans,
> if (ref->action == BTRFS_ADD_DELAYED_REF)
> list_add_tail(&ref->add_list, &href->ref_add_list);
> atomic_inc(&root->num_entries);
> - trans->delayed_ref_updates++;
> spin_unlock(&href->lock);
> return ret;
> }
>