On Sun, Jan 16, 2005 at 11:09:22PM -0800, Andrew Morton wrote:

> If you replace the last line with
>
>       BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
>
> does it help?

Paul noticed that too, so I came up with the patch below.

If it makes sense I can do the other architectures (I'm not sure "== 0"
is the right test everywhere).  This is pretty much what I'm running
now without problems: it's either correct or very close to it, and
rwlock_is_write_locked hasn't broken anything so far this boot.
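
For background on the "== 0" test (this is just my reading of the i386
code, names below invented for illustration, not kernel API): the rwlock
is a counter starting at RW_LOCK_BIAS, each reader takes one unit and a
writer takes the whole bias, so only a write holder drives it to zero.
A rough C sketch:

#define RW_LOCK_BIAS	0x01000000

struct demo_rwlock {
	volatile int lock;		/* starts at RW_LOCK_BIAS */
};

static inline void demo_read_lock(struct demo_rwlock *rw)
{
	rw->lock -= 1;			/* each reader takes one unit */
}

static inline void demo_write_lock(struct demo_rwlock *rw)
{
	rw->lock -= RW_LOCK_BIAS;	/* a writer takes the whole bias */
}

static inline int demo_rwlock_is_locked(struct demo_rwlock *rw)
{
	return rw->lock != RW_LOCK_BIAS;  /* any holder moves the count */
}

static inline int demo_rwlock_is_write_locked(struct demo_rwlock *rw)
{
	/* only a writer brings the count to zero (contending paths can
	 * push it negative transiently, which is part of why I'm not
	 * sure about other architectures) */
	return rw->lock == 0;
}

Other architectures encode their rwlocks differently, hence the caveat
above.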


---
Use the right lock-held checks for rwlock_t read and write locks in
BUILD_LOCK_OPS.

Signed-off-by: Chris Wedgwood <[EMAIL PROTECTED]>

===== include/asm-i386/spinlock.h 1.16 vs edited =====
--- 1.16/include/asm-i386/spinlock.h    2005-01-07 21:43:58 -08:00
+++ edited/include/asm-i386/spinlock.h  2005-01-16 23:23:50 -08:00
@@ -187,6 +187,7 @@ typedef struct {
 #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
 
 #define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
+#define rwlock_is_write_locked(x) ((x)->lock == 0)
 
 /*
  * On x86, we implement read-write locks as a 32-bit counter
===== kernel/spinlock.c 1.4 vs edited =====
--- 1.4/kernel/spinlock.c       2005-01-14 16:00:00 -08:00
+++ edited/kernel/spinlock.c    2005-01-16 23:25:11 -08:00
@@ -247,8 +247,8 @@ EXPORT_SYMBOL(_##op##_lock_bh)
  *         _[spin|read|write]_lock_bh()
  */
 BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
-BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
-BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
+BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_write_locked);
+BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
 
 #endif /* CONFIG_PREEMPT */
 
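In case the choice of predicates looks backwards: the third argument to
BUILD_LOCK_OPS is only used to decide how long the preemptible path
spins (with preemption enabled) before retrying the trylock.  A reader
only has to wait out a writer, while a writer has to wait for any
holder.  Roughly what the read side expands to with this patch
(paraphrased from memory and simplified -- the real code also honours
->break_lock):

void __lockfunc _read_lock(rwlock_t *lock)
{
	preempt_disable();
	for (;;) {
		if (likely(_raw_read_trylock(lock)))
			break;
		/*
		 * Couldn't get it; spin with preemption enabled until
		 * the writer goes away, then try again.  Other readers
		 * don't block us, which is why the old rwlock_is_locked()
		 * check was too strong here.
		 */
		preempt_enable();
		while (rwlock_is_write_locked(lock))
			cpu_relax();
		preempt_disable();
	}
}

The write side is the same shape but spins on rwlock_is_locked(), since
a writer has to wait for readers and writers alike.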