On Thu, Dec 19, 2024 at 02:42:00AM +0100, Antonio Quartulli wrote:
> Similarly to kref_put_lock(), decrease the refcount
> and call bh_lock_sock(sk) if it reached 0.
> 
> This kref_put variant comes in handy when there is a
> need to atomically clean up any socket context along
> with setting the refcount to 0.
> 
> Cc: Will Deacon <w...@kernel.org> (maintainer:ATOMIC INFRASTRUCTURE)
> Cc: Peter Zijlstra <pet...@infradead.org> (maintainer:ATOMIC INFRASTRUCTURE)
> Cc: Boqun Feng <boqun.f...@gmail.com> (reviewer:ATOMIC INFRASTRUCTURE)
> Cc: Mark Rutland <mark.rutl...@arm.com> (reviewer:ATOMIC INFRASTRUCTURE)
> Cc: Andrew Morton <a...@linux-foundation.org>
> Signed-off-by: Antonio Quartulli <anto...@openvpn.net>
> ---
>  include/linux/kref.h     | 11 +++++++++++
>  include/linux/refcount.h |  3 +++
>  lib/refcount.c           | 32 ++++++++++++++++++++++++++++++++

[...]

> diff --git a/lib/refcount.c b/lib/refcount.c
> index a207a8f22b3ca35890671e51c480266d89e4d8d6..76a728581aa49a41ef13f5141f3f2e9816d72e75 100644
> --- a/lib/refcount.c
> +++ b/lib/refcount.c
> @@ -7,6 +7,7 @@
>  #include <linux/refcount.h>
>  #include <linux/spinlock.h>
>  #include <linux/bug.h>
> +#include <net/sock.h>
>  
>  #define REFCOUNT_WARN(str)   WARN_ONCE(1, "refcount_t: " str ".\n")
>  
> @@ -156,6 +157,37 @@ bool refcount_dec_and_lock(refcount_t *r, spinlock_t *lock)
>  }
>  EXPORT_SYMBOL(refcount_dec_and_lock);
>  
> +/**
> + * refcount_dec_and_lock_sock - return holding locked sock if able to decrement
> + *                           refcount to 0
> + * @r: the refcount
> + * @sock: the sock to be locked
> + *
> + * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
> + * decrement when saturated at REFCOUNT_SATURATED.
> + *
> + * Provides release memory ordering, such that prior loads and stores are done
> + * before, and provides a control dependency such that free() must come after.
> + * See the comment on top.
> + *
> + * Return: true and hold sock if able to decrement refcount to 0, false
> + *      otherwise
> + */
> +bool refcount_dec_and_lock_sock(refcount_t *r, struct sock *sock)
> +{
> +     if (refcount_dec_not_one(r))
> +             return false;
> +
> +     bh_lock_sock(sock);
> +     if (!refcount_dec_and_test(r)) {
> +             bh_unlock_sock(sock);
> +             return false;
> +     }
> +
> +     return true;
> +}
> +EXPORT_SYMBOL(refcount_dec_and_lock_sock);

It feels a little out-of-place to me having socket-specific functions in
lib/refcount.c. I'd suggest sticking this somewhere else _or_ maybe we
could generate this pattern of code:

#define REFCOUNT_DEC_AND_LOCKNAME(lockname, locktype, lock, unlock)     \
static __always_inline                                                  \
bool refcount_dec_and_lock_##lockname(refcount_t *r, locktype *l)       \
{                                                                       \
        ...

inside a generator macro in refcount.h, like we do for seqlocks in
linux/seqlock.h. The downside of that is the cost of inlining.
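
To make that concrete, the generated body could simply mirror the sock
variant quoted above; a rough, completely untested sketch:

#define REFCOUNT_DEC_AND_LOCKNAME(lockname, locktype, lock, unlock)	\
static __always_inline							\
bool refcount_dec_and_lock_##lockname(refcount_t *r, locktype *l)	\
{									\
	/* If this succeeds, the count did not hit 0: no lock needed. */\
	if (refcount_dec_not_one(r))					\
		return false;						\
									\
	/* Possibly the last reference: take the lock and re-check. */	\
	lock(l);							\
	if (!refcount_dec_and_test(r)) {				\
		unlock(l);						\
		return false;						\
	}								\
									\
	/* Dropped to 0: report success with the lock held. */		\
	return true;							\
}

with the sock flavour then instantiated next to its user (or in a header
where struct sock is visible), e.g.:

REFCOUNT_DEC_AND_LOCKNAME(sock, struct sock, bh_lock_sock, bh_unlock_sock)

which would also keep the net/sock.h include out of lib/refcount.c.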

Will
