Linus,

The change below is causing major problems for me on a dual K7 with
CONFIG_PREEMPT enabled (backing it out with cset -x and rebuilding
makes the machine usable again).

This change was merged a couple of days ago, so I'm surprised nobody
else has reported it.  I tried to find where this patch came from,
but I don't see it on lkml, only in the bk history.

Note, even with this change removed I'm still seeing many
"BUG: using smp_processor_id() in preemptible [00000001] code: xxx"
messages, which I hadn't seen before.  That might be unrelated, but
there are so many of them that I'm sure I would have noticed them
earlier, or something would have broken.

Is this specific to the K7, or do other people also see this?



Thanks,

  --cw

---

# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
#   2005/01/15 09:40:45-08:00 [EMAIL PROTECTED] 
#   [PATCH] Don't busy-lock-loop in preemptable spinlocks
#   
#   Paul Mackerras points out that doing the _raw_spin_trylock each time
#   through the loop will generate tons of unnecessary bus traffic.
#   Instead, after we fail to get the lock we should poll it with simple
#   loads until we see that it is clear and then retry the atomic op. 
#   Assuming a reasonable cache design, the loads won't generate any bus
#   traffic until another cpu writes to the cacheline containing the lock.
#   
#   Agreed.
#   
#   Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
#   Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
# 
# kernel/spinlock.c
#   2005/01/14 16:00:00-08:00 [EMAIL PROTECTED] +8 -6
#   Don't busy-lock-loop in preemptable spinlocks
# 
diff -Nru a/kernel/spinlock.c b/kernel/spinlock.c
--- a/kernel/spinlock.c 2005-01-16 21:43:15 -08:00
+++ b/kernel/spinlock.c 2005-01-16 21:43:15 -08:00
@@ -173,7 +173,7 @@
  * (We do this in a function because inlining it would be excessive.)
  */
 
-#define BUILD_LOCK_OPS(op, locktype)                                   \
+#define BUILD_LOCK_OPS(op, locktype, is_locked_fn)                     \
 void __lockfunc _##op##_lock(locktype *lock)                           \
 {                                                                      \
        preempt_disable();                                              \
@@ -183,7 +183,8 @@
                preempt_enable();                                       \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               cpu_relax();                                            \
+               while (is_locked_fn(lock) && (lock)->break_lock)        \
+                       cpu_relax();                                    \
                preempt_disable();                                      \
        }                                                               \
 }                                                                      \
@@ -204,7 +205,8 @@
                preempt_enable();                                       \
                if (!(lock)->break_lock)                                \
                        (lock)->break_lock = 1;                         \
-               cpu_relax();                                            \
+               while (is_locked_fn(lock) && (lock)->break_lock)        \
+                       cpu_relax();                                    \
                preempt_disable();                                      \
        }                                                               \
        return flags;                                                   \
@@ -244,9 +246,9 @@
  *         _[spin|read|write]_lock_irqsave()
  *         _[spin|read|write]_lock_bh()
  */
-BUILD_LOCK_OPS(spin, spinlock_t);
-BUILD_LOCK_OPS(read, rwlock_t);
-BUILD_LOCK_OPS(write, rwlock_t);
+BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
+BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
+BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
 
 #endif /* CONFIG_PREEMPT */
 