On 9/18/24 16:40, Uladzislau Rezki wrote:
>>
> Thank you for the valuable feedback! Indeed it is hard to follow, even
> though it works correctly.
> I will add the comment and also break the loop on the first queuing,
> as you suggested!
> 
> It does not make sense to loop further because the following iterations
> are never successful and thus never overwrite the "queued" variable
> (they never reach the queue_rcu_work() call).
> 
> <snip>
>          bool queued = false;
>          ...
>          for (i = 0; i < KFREE_N_BATCHES; i++) {
>                 if (need_offload_krc(krcp)) {
>                          queued = queue_rcu_work(system_wq, &krwp->rcu_work);
>          ...
>          return queued;
> <snip>
> 
> Once we have queued, the "if (need_offload_krc())" condition can never
> be true again.
> 
> The refactoring below makes this clear. I will send a patch to address it.

Looks good, AFAICT. Can you send the full patch then? Thanks.

> <snip>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index a60616e69b66..b1f883fcd918 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3607,11 +3607,12 @@ kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
>                         }
> 
>                         // One work is per one batch, so there are three
> -                       // "free channels", the batch can handle. It can
> -                       // be that the work is in the pending state when
> -                       // channels have been detached following by each
> -                       // other.
> +                       // "free channels" the batch can handle. Break
> +                       // the loop since we are done with this CPU, thus
> +                       // queuing the RCU work here _always_ succeeds.
>                         queued = queue_rcu_work(system_unbound_wq,
>                                                 &krwp->rcu_work);
> +                       WARN_ON_ONCE(!queued);
> +                       break;
>                 }
>         }
> <snip>
> 
> Thanks!
> 

