Ingo Molnar wrote:
* Arjan van de Ven <[EMAIL PROTECTED]> wrote:
> as I said, since the cacheline just got dirtied, the write is just
> half a cycle which is so much in the noise that it really doesn't
> matter.
ok - the patch below is a small modification of Hugh's so that we clear
->break_lock unconditionally. [...]
Arjan van de Ven wrote:
> Yes that's the tradeoff. I just feel that the former may be better,
> especially because the latter can be timing dependent (so you may get
> things randomly "happening"), and the former is apparently very low
> overhead compared with the cost of taking the lock. Any numbers,
> anyone?
as I said, since the cacheline just got dirtied, the write is just
half a cycle which is so much in the noise that it really doesn't
matter.
Ingo Molnar wrote:
* Nick Piggin <[EMAIL PROTECTED]> wrote:
> > while writing the ->break_lock feature i intentionally avoided
> > overhead in the spinlock fastpath. A better solution for the bug you
> > noticed is to clear the break_lock flag in places that use
> > need_lock_break() explicitly.
>
> What happens if [...]
Ingo Molnar wrote:
* Hugh Dickins <[EMAIL PROTECTED]> wrote:
> @@ -187,6 +187,8 @@ void __lockfunc _##op##_lock(locktype##_
> 		cpu_relax();					\
> 		preempt_disable();				\
> 	}
On Sat, 2005-03-12 at 23:20 +0000, Hugh Dickins wrote:
> Since cond_resched_lock's spin_lock clears break_lock, no need to clear it
> itself; and use need_lockbreak there too, preferring optimizer to #ifdefs.

FWIW, this patch solves the problems I had in mind (and so should
solve our copy_page[...]
On Sun, 2005-03-13 at 09:35 +0000, Hugh Dickins wrote:
> On Sun, 13 Mar 2005, Arjan van de Ven wrote:
> > >							\
> > > +	if ((lock)->break_lock)				\
> > > +		(lock)->break_lock = 0;			\
> > > }

is it really worth [...]
On Fri, 11 Mar 2005, Andrew Morton wrote:
>
> This patch causes a CONFIG_PREEMPT=y, CONFIG_PREEMPT_BKL=y,
> CONFIG_DEBUG_PREEMPT=y kernel on a ppc64 G5 to hang immediately after
> displaying the penguins, but apparently not before having set the hardware
> clock backwards 101 years.
>
> After hav[...]
Hugh Dickins <[EMAIL PROTECTED]> wrote:
>
> lock->break_lock is set when a lock is contended, but cleared only in
> cond_resched_lock. Users of need_lockbreak (journal_commit_transaction,
> copy_pte_range, unmap_vmas) don't necessarily use cond_resched_lock on it.
>
> So, if the lock has been contended at some time in the past, break_lock
> remains [...]