Ingo Molnar wrote:
* Hugh Dickins <[EMAIL PROTECTED]> wrote:


@@ -187,6 +187,8 @@ void __lockfunc _##op##_lock(locktype##_
                        cpu_relax();                                    \
                preempt_disable();                                      \
        }                                                               \
+       if ((lock)->break_lock)                                              \
+               (lock)->break_lock = 0;                                      \


While writing the ->break_lock feature I intentionally avoided overhead
in the spinlock fastpath. A better solution for the bug you noticed is
to clear the break_lock flag in the places that use need_lock_break()
explicitly.
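
Concretely, that alternative would look something like the sketch below
(a hypothetical caller, not a file from the tree; the macro is spelled
need_lockbreak() in include/linux/sched.h, and ->break_lock only exists
with CONFIG_PREEMPT && CONFIG_SMP):

#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Sketch only: the site that polls the flag also consumes it, so the
 * lock slowpath itself stays untouched.  The loop body is made up.
 */
static void scan_work_locked(spinlock_t *lock, int nr_items)
{
	int i;

	spin_lock(lock);
	for (i = 0; i < nr_items; i++) {
		/* ... do one unit of work under the lock ... */
		if (need_lockbreak(lock)) {
			lock->break_lock = 0;	/* clear it here, explicitly */
			spin_unlock(lock);
			cpu_relax();
			spin_lock(lock);
		}
	}
	spin_unlock(lock);
}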


What happens if break_lock gets set by random contention on the lock somewhere (with no need_lock_break or cond_resched_lock)? The next time something does go through a lockbreak check, it will (or may) be a false positive.
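
For reference, the flag gets set in the contended slowpath that the hunk
above patches; paraphrasing the BUILD_LOCK_OPS() macro from
kernel/spinlock.c for the spin_lock case, with the macro parameters
expanded by hand (from memory, so not line-for-line exact):

void __lockfunc _spin_lock(spinlock_t *lock)
{
	preempt_disable();
	for (;;) {
		if (likely(_raw_spin_trylock(lock)))
			break;
		preempt_enable();
		if (!lock->break_lock)
			lock->break_lock = 1;	/* any contender sets the flag */
		while (!spin_can_lock(lock) && lock->break_lock)
			cpu_relax();
		preempt_disable();
	}
	/* without the hunk above, nothing on this path clears it again */
}

So the flag can stay set long after the contention that raised it has gone away.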

I think I'd prefer the additional lock overhead of Hugh's patch if
it gives better behaviour. Any idea what the overhead actually is?

One robust way to do that seems to be to make the need_lock_break() macro
clear the flag if it sees it set, and to make all the other (internal)
users use a __need_lock_break() that doesn't clear the flag. I'll cook up
a patch for this.


If you do this exactly as you describe, then you'll break cond_resched_lock (e.g. for the copy_page_range path), won't you?
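
A sketch of the pattern in question: a hypothetical caller loosely
modeled on the copy_page_range path (not the real code), with
cond_resched_lock() as it roughly looks in current 2.6 kernels. If
need_lock_break() consumed the flag at the first check, the recheck
inside cond_resched_lock() would see it already clear and never actually
release the lock:

#include <linux/sched.h>
#include <linux/spinlock.h>

/*
 * Hypothetical caller: the flag must survive the need_lockbreak() test
 * in the loop so that cond_resched_lock() below still sees it and
 * drops/retakes the lock for the waiter.
 */
static void copy_entries(spinlock_t *ptl, int nr)
{
	int i;

	spin_lock(ptl);
	for (i = 0; i < nr; i++) {
		/* ... copy one entry under the lock ... */
		if (need_resched() || need_lockbreak(ptl))
			break;		/* expect the actual break below */
	}
	cond_resched_lock(ptl);		/* re-checks the flag, drops/retakes */
	/* ... would restart or finish with ptl still held ... */
	spin_unlock(ptl);
}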

