2008/6/3 Nicolas Thery <[EMAIL PROTECTED]>:
> Hello,
>
> While studying the spinlock implementation, I found that
> spin_trylock_wr() does not decrement back
> globaldata.gd_spinlock_wr if it fails to get the lock.
[...]

I think there is another issue in the loop that tries to clear the
cached shared (reader) bits.

As it stands, we fail to get the lock if some CPU has its bit set but
is holding *another* spinlock for reading (or none at all).  It should
presumably fail only if that CPU is still holding the lock we're
trying to get.

Index: src2/sys/kern/kern_spinlock.c
===================================================================
--- src2.orig/sys/kern/kern_spinlock.c  2008-06-03 22:12:33.000000000 +0200
+++ src2/sys/kern/kern_spinlock.c       2008-06-03 23:02:29.000000000 +0200
@@ -133,7 +133,7 @@ spin_trylock_wr_contested(globaldata_t g
        if ((value & SPINLOCK_EXCLUSIVE) == 0) {
                while (value) {
                        bit = bsfl(value);
-                       if (globaldata_find(bit)->gd_spinlock_rd != mtx) {
+                       if (globaldata_find(bit)->gd_spinlock_rd == mtx) {
                                atomic_swap_int(&mtx->lock, value);
                                --gd->gd_spinlocks_wr;
                                return (FALSE);
