Excerpts from Peter Zijlstra's message of 2011-03-30 07:52:04 -0400:
> On Wed, 2011-03-30 at 07:46 -0400, Chris Mason wrote:
> >
> > In this case, the only thing we're really missing is a way to mutex_lock
> > without the cond_resched()
>
> So you're trying to explicitly avoid a voluntary preemption point? Seems
> like a bad idea, normally people add those :-)
On Wed, 2011-03-30 at 07:46 -0400, Chris Mason wrote:
>
> In this case, the only thing we're really missing is a way to mutex_lock
> without the cond_resched()
So you're trying to explicitly avoid a voluntary preemption point? Seems
like a bad idea, normally people add those :-)
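For context on the cond_resched() being discussed: with CONFIG_PREEMPT_VOLUNTARY,
the might_sleep() at the top of mutex_lock() expands into a voluntary preemption
point. A simplified sketch of that pattern (not verbatim kernel source):

#ifdef CONFIG_PREEMPT_VOLUNTARY
# define might_resched()	_cond_resched()
#else
# define might_resched()	do { } while (0)
#endif

void __sched mutex_lock(struct mutex *lock)
{
	/* might_sleep() includes might_resched(): the voluntary preemption
	 * point (cond_resched()) that Chris would like a way to skip. */
	might_sleep();
	__mutex_fastpath_lock(&lock->count, __mutex_lock_slowpath);
	mutex_set_owner(lock);
}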
Excerpts from Tejun Heo's message of 2011-03-29 12:37:02 -0400:
> Hello, guys.
>
> I've been running dbench 50 for a few days now and the result is,
> well, I don't know how to call it.
>
> The problem was that the original patch didn't do anything because x86
> fastpath code didn't call into the generic slowpath at all.
On Wed, 2011-03-30 at 10:17 +0200, Tejun Heo wrote:
> Hey, Peter.
>
> On Tue, Mar 29, 2011 at 07:37:33PM +0200, Peter Zijlstra wrote:
> > On Tue, 2011-03-29 at 19:09 +0200, Tejun Heo wrote:
> > > Here's the combined patch I was planning on testing but didn't get to
> > > (yet). It implements two things - hard limit on spin duration and
> > > early break if the owner also is spinning on a mutex.
Hey, Peter.
On Tue, Mar 29, 2011 at 07:37:33PM +0200, Peter Zijlstra wrote:
> On Tue, 2011-03-29 at 19:09 +0200, Tejun Heo wrote:
> > Here's the combined patch I was planning on testing but didn't get to
> > (yet). It implements two things - hard limit on spin duration and
> > early break if the owner also is spinning on a mutex.
On Tue, 2011-03-29 at 19:09 +0200, Tejun Heo wrote:
> Here's the combined patch I was planning on testing but didn't get to
> (yet). It implements two things - hard limit on spin duration and
> early break if the owner also is spinning on a mutex.
This is going to give massive conflicts with
ht
Here's the combined patch I was planning on testing but didn't get to
(yet). It implements two things - hard limit on spin duration and
early break if the owner also is spinning on a mutex.
Thanks.
Index: work1/include/linux/sched.h
===
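Since the excerpt cuts the patch off, here is a rough sketch of the two
mechanisms described above: a hard cap on spin duration and an early break
when the owner is itself spinning. This is not Tejun's actual patch;
MAX_MUTEX_SPIN_NS, the on_cpu check and the per-task spinning_on_mutex flag
are illustrative names only:

#define MAX_MUTEX_SPIN_NS	(100 * NSEC_PER_USEC)	/* made-up cap */

static bool spin_on_owner(struct mutex *lock, struct task_struct *owner)
{
	u64 start = local_clock();

	/* keep spinning only while the same owner holds the lock and is
	 * actually running on another CPU */
	while (ACCESS_ONCE(lock->owner) == owner && owner->on_cpu) {
		if (owner->spinning_on_mutex)	/* early break: owner spins too */
			return false;
		if (local_clock() - start > MAX_MUTEX_SPIN_NS)
			return false;		/* hard limit on spin duration */
		if (need_resched())
			return false;
		cpu_relax();
	}
	return true;
}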
Hello, guys.
I've been running dbench 50 for a few days now and the result is,
well, I don't know how to call it.
The problem was that the original patch didn't do anything because x86
fastpath code didn't call into the generic slowpath at all.
static inline int __mutex_fastpath_trylock(atomic
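For reference, the x86-64 fastpath in question looked roughly like this
(reconstructed from memory, so treat it as a sketch). On a failed cmpxchg it
returns 0 directly instead of calling fail_fn, so the generic
__mutex_trylock_slowpath, and any spinning added there, never runs:

static inline int __mutex_fastpath_trylock(atomic_t *count,
					   int (*fail_fn)(atomic_t *))
{
	if (likely(atomic_cmpxchg(count, 1, 0) == 1))
		return 1;
	/* failure path: return 0 without ever invoking fail_fn() */
	return 0;
}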
* Steven Rostedt wrote:
> On Fri, 2011-03-25 at 16:50 +0300, Andrey Kuzmin wrote:
> > On Fri, Mar 25, 2011 at 4:12 PM, Steven Rostedt wrote:
> > > On Fri, 2011-03-25 at 14:13 +0300, Andrey Kuzmin wrote:
> > >> Turning try_lock into indefinitely spinning one breaks its semantics,
> > >> so deadlock is to be expected. But what's wrong in this scenario if
> > >> try_lock spins a bit before giving up?
On Fri, 2011-03-25 at 16:50 +0300, Andrey Kuzmin wrote:
> On Fri, Mar 25, 2011 at 4:12 PM, Steven Rostedt wrote:
> > On Fri, 2011-03-25 at 14:13 +0300, Andrey Kuzmin wrote:
> >> Turning try_lock into indefinitely spinning one breaks its semantics,
> >> so deadlock is to be expected. But what's wrong in this scenario if
> >> try_lock spins a bit before giving up?
On Fri, Mar 25, 2011 at 4:12 PM, Steven Rostedt wrote:
> On Fri, 2011-03-25 at 14:13 +0300, Andrey Kuzmin wrote:
>> Turning try_lock into indefinitely spinning one breaks its semantics,
>> so deadlock is to be expected. But what's wrong in this scenario if
>> try_lock spins a bit before giving up?
On Fri, 2011-03-25 at 09:10 -0400, Steven Rostedt wrote:
> One solution is to have this be only done on explicit trylocks. Perhaps
> introduce a mutex_trylock_spin()? Then when the developer knows that
> this scenario does not exist, they can convert mutex_trylocks() into
> this spinning version.
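A minimal sketch of what such an opt-in primitive could look like.
mutex_trylock_spin() is hypothetical, not an existing kernel interface, and
spin_on_owner() stands in for an assumed owner-spinning helper:

int mutex_trylock_spin(struct mutex *lock)
{
	struct task_struct *owner;

	if (mutex_trylock(lock))
		return 1;

	/* spin while the current owner is still running on another CPU,
	 * then make one more attempt before giving up */
	owner = ACCESS_ONCE(lock->owner);
	if (owner)
		spin_on_owner(lock, owner);

	return mutex_trylock(lock);
}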
On Fri, 2011-03-25 at 14:13 +0300, Andrey Kuzmin wrote:
> Turning try_lock into indefinitely spinning one breaks its semantics,
> so deadlock is to be expected. But what's wrong in this scenario if
> try_lock spins a bit before giving up?
Because that will cause this scenario to spin that "little
On Fri, 2011-03-25 at 07:53 +0100, Tejun Heo wrote:
> Hello, Steven, Linus.
>
> On Thu, Mar 24, 2011 at 09:38:58PM -0700, Linus Torvalds wrote:
> > On Thu, Mar 24, 2011 at 8:39 PM, Steven Rostedt wrote:
> > >
> > > But now, mutex_trylock(B) becomes a spinner too, and since the B's owner
> > > is running (spinning on A) it will spin as well waiting for A's owner to
> > > release it. Unfortunately, A's owner is also spinning waiting for B to
> > > release it.
On Fri, Mar 25, 2011 at 6:39 AM, Steven Rostedt wrote:
> On Thu, Mar 24, 2011 at 10:41:51AM +0100, Tejun Heo wrote:
>> Adaptive owner spinning used to be applied only to mutex_lock(). This
>> patch applies it also to mutex_trylock().
>>
>> btrfs has developed custom locking to avoid excessive context switches
>> in its btree implementation.
* Tejun Heo wrote:
> Hello,
>
> On Thu, Mar 24, 2011 at 10:41:51AM +0100, Tejun Heo wrote:
> >            USER  SYSTEM  SIRQ    CXTSW  THROUGHPUT
> > SIMPLE    61107  354977   217  8099529  845.100 MB/sec
> > SPIN      63140  364888   214  6840527  879.077 MB/sec
> >
> > On various runs, the adaptive spinning trylock consistently posts higher
> > throughput.
Hello,
On Thu, Mar 24, 2011 at 10:41:51AM +0100, Tejun Heo wrote:
>            USER  SYSTEM  SIRQ    CXTSW  THROUGHPUT
> SIMPLE    61107  354977   217  8099529  845.100 MB/sec
> SPIN      63140  364888   214  6840527  879.077 MB/sec
>
> On various runs, the adaptive spinning trylock consistently posts higher
> throughput.
Hello, Steven, Linus.
On Thu, Mar 24, 2011 at 09:38:58PM -0700, Linus Torvalds wrote:
> On Thu, Mar 24, 2011 at 8:39 PM, Steven Rostedt wrote:
> >
> > But now, mutex_trylock(B) becomes a spinner too, and since the B's owner
> > is running (spinning on A) it will spin as well waiting for A's owner to
> > release it. Unfortunately, A's owner is also spinning waiting for B to
> > release it.
On Thu, Mar 24, 2011 at 8:39 PM, Steven Rostedt wrote:
>
> But now, mutex_trylock(B) becomes a spinner too, and since the B's owner
> is running (spinning on A) it will spin as well waiting for A's owner to
> release it. Unfortunately, A's owner is also spinning waiting for B to
> release it.
>
>
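Spelled out, the scenario being described is roughly the following
(hypothetical code, not taken from the thread):

static DEFINE_MUTEX(A);
static DEFINE_MUTEX(B);

void task1_work(void)		/* ends up holding A */
{
	mutex_lock(&A);
	mutex_lock(&B);		/* owner-spins: B's holder is running */
	/* ... */
	mutex_unlock(&B);
	mutex_unlock(&A);
}

void task2_work(void)		/* ends up holding B */
{
	mutex_lock(&B);
	/* with spinning applied to trylock, this also owner-spins:
	 * A's holder never goes to sleep, it is busy spinning on B,
	 * so neither spin ever terminates */
	if (mutex_trylock(&A)) {
		/* ... */
		mutex_unlock(&A);
	}
	mutex_unlock(&B);
}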
On Thu, Mar 24, 2011 at 10:41:51AM +0100, Tejun Heo wrote:
> Adaptive owner spinning used to be applied only to mutex_lock(). This
> patch applies it also to mutex_trylock().
>
> btrfs has developed custom locking to avoid excessive context switches
> in its btree implementation. Generally, doin
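The general idea, sketched very roughly; this is not the actual patch, it
uses the illustrative spin_on_owner() helper from above and ignores the
wait-list handling the real slowpath has to do:

static noinline int __mutex_trylock_slowpath(atomic_t *lock_count)
{
	struct mutex *lock = container_of(lock_count, struct mutex, count);
	struct task_struct *owner = ACCESS_ONCE(lock->owner);

	/* before giving up, spin while the owner is running elsewhere,
	 * then attempt the acquisition one more time */
	if (owner)
		spin_on_owner(lock, owner);

	return atomic_cmpxchg(&lock->count, 1, 0) == 1;
}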