On 9/30/24 13:37, Uladzislau Rezki (Sony) wrote:
> Improve readability of the kvfree_rcu_queue_batch() function
> so that, after the first batch has been queued, the loop breaks
> and a success value is returned to the caller.
> 
> There is no reason to keep looping over further batches, as all
> outstanding objects have already been picked up and attached to
> a batch to complete the offloading.
> 
> Link: https://lore.kernel.org/lkml/ZvWUt2oyXRsvJRNc@pc636/T/
> Suggested-by: Linus Torvalds <torva...@linux-foundation.org>
> Signed-off-by: Uladzislau Rezki (Sony) <ure...@gmail.com>

Applied to slab/for-next, thanks!

> ---
>  kernel/rcu/tree.c | 9 +++++----
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index a60616e69b66..b1f883fcd918 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3607,11 +3607,12 @@ kvfree_rcu_queue_batch(struct kfree_rcu_cpu *krcp)
>                       }
>  
>                       // One work is per one batch, so there are three
> -                     // "free channels", the batch can handle. It can
> -                     // be that the work is in the pending state when
> -                     // channels have been detached following by each
> -                     // other.
> +                     // "free channels" the batch can handle. Break
> +                     // the loop since we are done with this CPU, thus
> +                     // queuing an RCU work here _always_ succeeds.
>                       queued = queue_rcu_work(system_unbound_wq, &krwp->rcu_work);
> +                     WARN_ON_ONCE(!queued);
> +                     break;
>               }
>       }
>  
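For anyone reading the archive who wants the gist without the full tree.c
context, below is a minimal user-space sketch of the control flow this change
produces. It is not kernel code: all names here (struct batch, queue_work_stub(),
queue_batch(), NR_BATCHES) are made up for illustration, standing in for the
real struct kfree_rcu_cpu, queue_rcu_work() and friends.

/*
 * Standalone sketch (illustrative only, not kernel code): hand every
 * outstanding object to the first idle batch, queue its work, and stop.
 * Once all objects have been attached there is nothing left to offload,
 * so checking the remaining batches would be pointless and queuing for
 * an idle batch cannot fail.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_BATCHES 2

struct batch {
	bool busy;		/* work already queued for this batch */
	int nr_objects;		/* objects attached to this batch */
};

/* Stand-in for queue_rcu_work(); always succeeds for an idle batch. */
static bool queue_work_stub(struct batch *b)
{
	b->busy = true;
	return true;
}

/* Stand-in for kvfree_rcu_queue_batch(): returns true if work was queued. */
static bool queue_batch(struct batch *batches, int *pending)
{
	for (int i = 0; i < NR_BATCHES; i++) {
		struct batch *b = &batches[i];

		if (b->busy)
			continue;

		/* Attach every outstanding object to this idle batch. */
		b->nr_objects = *pending;
		*pending = 0;

		/* Nothing is left, so queuing must succeed; break out. */
		if (!queue_work_stub(b))
			fprintf(stderr, "unexpected queue failure\n");
		return true;
	}

	return false;	/* every batch was busy */
}

int main(void)
{
	struct batch batches[NR_BATCHES] = { 0 };
	int pending = 42;

	if (queue_batch(batches, &pending))
		printf("queued %d objects on one batch\n", batches[0].nr_objects);

	return 0;
}

The sketch mirrors the commit message: after the first idle batch has taken
all outstanding objects and had its work queued, breaking out of the loop is
safe, which is why the patch can WARN on a queue_rcu_work() failure instead
of continuing to scan the other channels.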

