On 6/20/17 12:06 PM, Edmund Nadolski wrote:
> It's been known for a while that the use of multiple lists
> that are periodically merged is an algorithmic problem within
> btrfs.  There are several workloads that don't complete in any
> reasonable amount of time (e.g. btrfs/130) and others that cause
> soft lockups.
> 
> The solution is to use a pair of rbtrees that do insertion merging
> for both indirect and direct refs, with the former converting
> refs into the latter.  As a result, a btrfs/130 workload that
> used to take several hours now takes about half that time.  This
> runtime still isn't acceptable, and a future patch will address it
> by moving the rbtrees higher in the stack so the lookups can be
> shared across multiple calls to find_parent_nodes.
> 
> Signed-off-by: Edmund Nadolski <enadol...@suse.com>
> Signed-off-by: Jeff Mahoney <je...@suse.com>
[...]
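
Just to spell out the scheme the changelog describes: instead of
appending to lists and merging them periodically, each new ref is
inserted into a keyed rbtree and merged on the spot if an equal ref is
already there.  A rough, untested sketch of that insert path follows
(kernel-style C against <linux/rbtree.h>; the comparator keys on the
same fields the old ref_for_same_block() checked, and the count merge
and kfree() are assumptions for illustration, not the patch's actual
helpers):

	struct prelim_ref {
		struct rb_node rbnode;
		u64 root_id;
		struct btrfs_key key_for_search;
		int level;
		int count;
		u64 parent;
		u64 wanted_disk_byte;
		/* ... */
	};

	/* 0 means "same ref", i.e. the merge case below */
	static int prelim_ref_compare(struct prelim_ref *a,
				      struct prelim_ref *b)
	{
		if (a->level != b->level)
			return a->level < b->level ? -1 : 1;
		if (a->root_id != b->root_id)
			return a->root_id < b->root_id ? -1 : 1;
		if (a->key_for_search.type != b->key_for_search.type)
			return a->key_for_search.type <
			       b->key_for_search.type ? -1 : 1;
		if (a->key_for_search.objectid != b->key_for_search.objectid)
			return a->key_for_search.objectid <
			       b->key_for_search.objectid ? -1 : 1;
		if (a->key_for_search.offset != b->key_for_search.offset)
			return a->key_for_search.offset <
			       b->key_for_search.offset ? -1 : 1;
		if (a->parent != b->parent)
			return a->parent < b->parent ? -1 : 1;
		return 0;
	}

	/* insert newref, or fold it into an existing equal ref */
	static void prelim_ref_insert(struct rb_root *root,
				      struct prelim_ref *newref)
	{
		struct rb_node **p = &root->rb_node;
		struct rb_node *parent = NULL;
		struct prelim_ref *ref;
		int cmp;

		while (*p) {
			parent = *p;
			ref = rb_entry(parent, struct prelim_ref, rbnode);
			cmp = prelim_ref_compare(newref, ref);
			if (cmp < 0) {
				p = &(*p)->rb_left;
			} else if (cmp > 0) {
				p = &(*p)->rb_right;
			} else {
				/*
				 * Duplicate: merge into the existing node
				 * (just the count here; the real code would
				 * also have to merge any per-ref lists) and
				 * drop the new one.
				 */
				ref->count += newref->count;
				kfree(newref);	/* assumes kmalloc'd refs */
				return;
			}
		}
		rb_link_node(&newref->rbnode, parent, p);
		rb_insert_color(&newref->rbnode, root);
	}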

> @@ -504,37 +665,22 @@ static int resolve_indirect_refs(struct btrfs_fs_info *fs_info,
>       return ret;
>  }
>  
> -static inline int ref_for_same_block(struct prelim_ref *ref1,
> -                                  struct prelim_ref *ref2)
> -{
> -     if (ref1->level != ref2->level)
> -             return 0;
> -     if (ref1->root_id != ref2->root_id)
> -             return 0;
> -     if (ref1->key_for_search.type != ref2->key_for_search.type)
> -             return 0;
> -     if (ref1->key_for_search.objectid != ref2->key_for_search.objectid)
> -             return 0;
> -     if (ref1->key_for_search.offset != ref2->key_for_search.offset)
> -             return 0;
> -     if (ref1->parent != ref2->parent)
> -             return 0;
> -
> -     return 1;
> -}
> -
>  /*
>   * read tree blocks and add keys where required.
>   */
>  static int add_missing_keys(struct btrfs_fs_info *fs_info,
> -                         struct list_head *head)
> +                         struct preftrees *preftrees)
>  {
>       struct prelim_ref *ref;
>       struct extent_buffer *eb;
> +     struct rb_node *node = rb_first(&preftrees->indirect.root);
> +
> +     while (node) {
> +             ref = rb_entry(node, struct prelim_ref, rbnode);
> +             node = rb_next(&ref->rbnode);
> +             if (WARN(ref->parent, "BUG: direct ref found in indirect tree"))
> +                     return -EINVAL;
>  
> -     list_for_each_entry(ref, head, list) {
> -             if (ref->parent)
> -                     continue;
>               if (ref->key_for_search.type)
>                       continue;
>               BUG_ON(!ref->wanted_disk_byte);

Hi Ed -

I missed this in earlier review, but this can't work.  add_missing_keys()
fills in ref->key_for_search, which is one of the fields the comparator
sorts on, so updating the ref in place would leave the node at the wrong
position in the tree.
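
Something along these lines would keep the tree consistent (untested,
just a sketch reusing the insert/merge helper sketched earlier; the
block-reading part is elided since it doesn't change): take the node
out of the tree before filling in the key, then re-insert it so it
lands where the comparator says it belongs, merging if it now collides
with another ref:

	struct rb_node *node = rb_first(&preftrees->indirect.root);

	while (node) {
		ref = rb_entry(node, struct prelim_ref, rbnode);
		node = rb_next(&ref->rbnode);

		if (WARN(ref->parent, "BUG: direct ref found in indirect tree"))
			return -EINVAL;
		if (ref->key_for_search.type)
			continue;
		BUG_ON(!ref->wanted_disk_byte);

		/*
		 * Detach before touching key_for_search; the comparator
		 * sorts on it, so an in-place update would leave the
		 * node misplaced.
		 */
		rb_erase(&ref->rbnode, &preftrees->indirect.root);

		/* ... read the tree block and copy its first key into
		   ref->key_for_search, exactly as the loop does now ... */

		/* re-insert at the position the new key dictates */
		prelim_ref_insert(&preftrees->indirect.root, ref);
	}

The rb_next() saved before the erase keeps the walk sane, and if the
re-inserted ref happens to land ahead of it we just skip it on the
second visit, since its key type is set by then.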

It's not a fatal flaw and, in fact, leaves us an opening to fix a
separate locking issue.

-Jeff

-- 
Jeff Mahoney
SUSE Labs
