On Wed, Oct 25, 2023 at 04:09:13PM +0200, Uladzislau Rezki (Sony) wrote:
> +/*
> + * Helper function for rcu_gp_cleanup().
> + */
> +static void rcu_sr_normal_gp_cleanup(void)
> +{
> +     struct llist_node *head, *tail, *pos;
> +     int i = 0;
> +
> +     tail = READ_ONCE(sr.wait_tail);
> +     head = llist_del_all(&sr.wait);

This could start with an llist_empty() check as a quick,
cheap bail-out. And then __llist_del_all() here, because it
appears nothing other than the gp kthread can touch sr.wait.
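
Something like this, i.e. a rough untested sketch reusing the
patch's sr fields:

	if (llist_empty(&sr.wait))
		return;

	tail = READ_ONCE(sr.wait_tail);
	/* Only the gp kthread touches sr.wait, so non-atomic is fine. */
	head = __llist_del_all(&sr.wait);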

> +
> +     llist_for_each_safe(pos, head, head) {

Is passing head twice intended here? The middle argument
should be a separate temporary.
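
FWIW, the usual form keeps a dedicated temporary for the next
pointer, something like (untested):

	struct llist_node *next;

	llist_for_each_safe(pos, next, head)
		rcu_sr_normal_complete(pos);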

> +             rcu_sr_normal_complete(pos);
> +
> +             if (++i == MAX_SR_WAKE_FROM_GP) {
> +                     /* If last, process it also. */
> +                     if (head && !head->next)
> +                             continue;
> +                     break;
> +             }
> +     }
> +
> +     if (head) {
> +             /* Can be not empty. */
> +             llist_add_batch(head, tail, &sr.done);
> +             queue_work(system_highpri_wq, &sr_normal_gp_cleanup);

So you can have:

* Queueing to sr.curr is atomic and fully ordered.
* The check and move from sr.curr to sr.wait is atomic and fully
  ordered.
* The check on sr.wait can be a quick non-atomic, unordered
  llist_empty(). The extraction can then be non-atomic and
  unordered as well.
* If too many, move the remainder to sr.done atomically and
  ordered.
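
Expressed with the llist primitives, roughly (my understanding;
node, curr_tail, head and tail are placeholders, untested):

	/* 1. Queue to sr.curr: atomic, fully ordered (xchg-based). */
	llist_add(node, &sr.curr);

	/* 2. Check and move sr.curr -> sr.wait: atomic, fully ordered. */
	if (!llist_empty(&sr.curr))
		llist_add_batch(llist_del_all(&sr.curr), curr_tail, &sr.wait);

	/* 3. sr.wait is gp-kthread private: plain check and extraction. */
	if (!llist_empty(&sr.wait))
		head = __llist_del_all(&sr.wait);

	/* 4. If too many, push the remainder to sr.done: atomic again. */
	llist_add_batch(head, tail, &sr.done);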

Am I missing something?
