On 12.12.2023 10:47, Juergen Gross wrote:
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -458,6 +458,23 @@ void _spin_barrier(spinlock_t *lock)
>      spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
>  }
>  
> +int rspin_is_locked(const rspinlock_t *lock)
> +{
> +    /*
> +     * Recursive locks may be locked by another CPU, yet we return
> +     * "false" here, making this function suitable only for use in
> +     * ASSERT()s and the like.
> +     */
> +    return lock->recurse_cpu == SPINLOCK_NO_CPU
> +           ? spin_is_locked_common(&lock->tickets)
> +           : lock->recurse_cpu == smp_processor_id();
> +}
> +
> +void rspin_barrier(rspinlock_t *lock)
> +{
> +    spin_barrier_common(&lock->tickets, &lock->debug, LOCK_PROFILE_PAR);
> +}

Ah, here we go. Looks all okay to me, but the series needs re-ordering such
that the earlier patch won't transiently introduce a regression.
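
For context, a minimal sketch of the usage the in-code comment is aiming at
(a hypothetical caller, not part of the series; it assumes the
rspin_lock()/rspin_unlock() API introduced by the earlier patches):

/*
 * Sanity checking only: rspin_is_locked() can report "false" while
 * another CPU holds the lock recursively, so its result must not
 * drive locking decisions.
 */
static void holder_check_sketch(rspinlock_t *lock)
{
    rspin_lock(lock);
    ASSERT(rspin_is_locked(lock));   /* fine: debugging aid */
    rspin_unlock(lock);

    /*
     * Not fine: a "false" result doesn't prove the lock is free, as
     * lock->recurse_cpu may name another CPU.
     */
    /* if ( !rspin_is_locked(lock) ) ... */
}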

Jan
