On Wed, Apr 03, 2019 at 03:21:22PM +1100, Tobin C. Harding wrote:
> +void xa_object_migrate(struct xa_node *node, int numa_node)
> +{
> +     struct xarray *xa = READ_ONCE(node->array);
> +     void __rcu **slot;
> +     struct xa_node *new_node;
> +     int i;
> +
> +     /* Freed or not yet in tree then skip */
> +     if (!xa || xa == XA_RCU_FREE)
> +             return;
> +
> +     new_node = kmem_cache_alloc_node(radix_tree_node_cachep,
> +                                      GFP_KERNEL, numa_node);
> +     if (!new_node)
> +             return;
> +
> +     xa_lock_irq(xa);
> +
> +     /* Check again..... */
> +     if (xa != node->array || !list_empty(&node->private_list)) {
> +             node = new_node;
> +             goto unlock;
> +     }
> +
> +     memcpy(new_node, node, sizeof(struct xa_node));
> +
> +     /* Move pointers to new node */
> +     INIT_LIST_HEAD(&new_node->private_list);

Surely we can do something more clever, like ...

        if (xa != node->array) {
...
        if (list_empty(&node->private_list))
                INIT_LIST_HEAD(&new_node->private_list);
        else
                list_replace(&node->private_list, &new_node->private_list);
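
i.e. keep the old node's place on whatever list it is on instead of
refusing to migrate it.  Untested sketch of how the locked section could
look with that change; it reuses the patch's "node = new_node" trick so
the unused allocation is freed on the existing unlock path, which isn't
visible in the hunk quoted above:

        xa_lock_irq(xa);

        /* Re-check under the lock: the node may have been freed or reused. */
        if (xa != node->array) {
                node = new_node;        /* free the unused allocation below */
                goto unlock;
        }

        memcpy(new_node, node, sizeof(struct xa_node));

        /*
         * If the old node is on a list, take over its position on that
         * list rather than skipping the migration altogether.
         */
        if (list_empty(&node->private_list))
                INIT_LIST_HEAD(&new_node->private_list);
        else
                list_replace(&node->private_list, &new_node->private_list);

list_replace() rewrites the neighbours' pointers to point at the new
node, so the old node can be freed afterwards without touching the list
again.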


BTW, the radix tree nodes / xa_nodes share the same slab cache; we need
to finish converting all radix tree & IDR users to the XArray before
this series can go in.
