On Tue, 2013-12-24 at 22:13 -0500, Waiman Long wrote:
> On 12/20/2013 08:36 PM, Davidlohr Bueso wrote:
> > On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> >> On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
> >>> - increment the counter at queue_lock() as we always end up calling
On 12/20/2013 08:36 PM, Davidlohr Bueso wrote:
> On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> > On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
> > > - increment the counter at queue_lock() as we always end up calling
> > > queue_me() which adds the element to the list. Upon any error,
On Fri, 2013-12-20 at 11:54 -0800, Linus Torvalds wrote:
> On Fri, Dec 20, 2013 at 11:30 AM, Davidlohr Bueso wrote:
> >
> > So we'd need the barrier right after the ticket increment (ie: the xadd
> > TICKET_LOCK_INC in x86), and cannot rely on the barrier after the lock
> > is taken as we could miss waiters that are just spinning trying to take it.
On Fri, 2013-12-20 at 18:34 -0800, Darren Hart wrote:
> On Fri, 2013-12-20 at 17:36 -0800, Davidlohr Bueso wrote:
> > On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> > > On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso
> > > wrote:
> > > >
> > > > - increment the counter at queue_lock() as we always end up calling
On Fri, 2013-12-20 at 17:36 -0800, Davidlohr Bueso wrote:
> On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> > On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
> > >
> > > - increment the counter at queue_lock() as we always end up calling
> > > queue_me() which adds the element to the list. Upon any error,
On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
> >
> > - increment the counter at queue_lock() as we always end up calling
> > queue_me() which adds the element to the list. Upon any error,
> > queue_unlock() is called for housekeeping, for which we decrement
On Fri, Dec 20, 2013 at 11:30 AM, Davidlohr Bueso wrote:
>
> So we'd need the barrier right after the ticket increment (ie: the xadd
> TICKET_LOCK_INC in x86), and cannot rely on the barrier after the lock
> is taken as we could miss waiters that are just spinning trying to take
> it. Is this impl
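The ordering constraint being discussed can be sketched as follows. This is
only an illustration of the argument, not the actual patch: the struct is
trimmed down, the helper name hb_waiters_inc() is illustrative, and
smp_mb__after_atomic() is the current spelling of the barrier (the kernel of
that era spelled it smp_mb__after_atomic_inc()).

#include <linux/atomic.h>
#include <linux/plist.h>
#include <linux/spinlock.h>

/* Simplified stand-in for the futex hash bucket. */
struct futex_hash_bucket {
	atomic_t waiters;	/* the proposed waiter counter */
	spinlock_t lock;
	struct plist_head chain;
};

static inline void hb_waiters_inc(struct futex_hash_bucket *hb)
{
	atomic_inc(&hb->waiters);
	/*
	 * Make the increment visible before the waiter goes on to re-read
	 * the futex word.  The ACQUIRE implied by taking hb->lock is not
	 * enough: a waker inspects the counter without holding the lock,
	 * so a waiter that is still only spinning on the lock must
	 * already be counted.
	 */
	smp_mb__after_atomic();
}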
On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
>
> So I still think this can be done without that new counter field, or
> any new atomics.
>
> hb_waiters_pending() could be something like
>
> spin_contended(hb->lock) || !list_empty(&hb->chain)
>
> which considering where you increment
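Spelled out, the check proposed above would look roughly like the sketch
below, reusing the simplified struct from the earlier snippet. The in-tree
predicate is spin_is_contended(), which takes a pointer, and the bucket chain
is a plist, so the emptiness test is plist_head_empty(); hb_waiters_pending()
is just the name used in this discussion.

static inline bool hb_waiters_pending(struct futex_hash_bucket *hb)
{
	/*
	 * Either somebody is already queued on the chain, or somebody is
	 * spinning on hb->lock on their way to queueing themselves.  Both
	 * cases may need a wakeup; only a bucket that is idle on both
	 * counts can be skipped.
	 */
	return spin_is_contended(&hb->lock) || !plist_head_empty(&hb->chain);
}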
* Peter Zijlstra wrote:
> On Thu, Dec 19, 2013 at 02:35:17PM -0800, Joe Perches wrote:
> > On Thu, 2013-12-19 at 20:42 +0100, Ingo Molnar wrote:
> > > * Davidlohr Bueso wrote:
> > >
> > > > On Thu, 2013-12-19 at 20:25 +0100, Ingo Molnar wrote:
> > > > > * Davidlohr Bueso wrote:
> > > > >
> >
On Thu, Dec 19, 2013 at 02:35:17PM -0800, Joe Perches wrote:
> On Thu, 2013-12-19 at 20:42 +0100, Ingo Molnar wrote:
> > * Davidlohr Bueso wrote:
> >
> > > On Thu, 2013-12-19 at 20:25 +0100, Ingo Molnar wrote:
> > > > * Davidlohr Bueso wrote:
> > > >
> > > > > Signed-off-by: Davidlohr Bueso
>
On Thu, Dec 19, 2013 at 07:13:09PM -0800, Linus Torvalds wrote:
> I think we might have to order the two reads with an smp_rmb - making
> sure that we check the lock state first - but I think it should be
> otherwise pretty solid.
> Yeah, I said "spin_contended()" myself initially, but it needs to
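With that ordering concern folded in, the check grows an explicit read
barrier between the two loads so the lock state is sampled first; again only
a sketch of the idea being discussed:

static inline bool hb_waiters_pending(struct futex_hash_bucket *hb)
{
	if (spin_is_contended(&hb->lock))
		return true;
	/*
	 * A waiter contends on hb->lock before it is linked into the
	 * chain, and is linked before it drops the lock again.  Reading
	 * the lock state first and the chain second therefore catches a
	 * waiter at either stage; letting the chain load be reordered
	 * ahead of the lock load would reopen the window in which a
	 * just-queued waiter is missed.
	 */
	smp_rmb();
	return !plist_head_empty(&hb->chain);
}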
On Thu, 2013-12-19 at 19:13 -0800, Linus Torvalds wrote:
> On Thu, Dec 19, 2013 at 6:22 PM, Davidlohr Bueso wrote:
> >
> > Ok so when I replied I was thinking about the plist really and not the
> > hb->lock ticket counter. Yeah, what you propose does make sense for
> > waiters. However in wake paths we have to decrement the counter nwake
On Thu, Dec 19, 2013 at 6:22 PM, Davidlohr Bueso wrote:
>
> Ok so when I replied I was thinking about the plist really and not the
> hb->lock ticket counter. Yeah, what you propose does make sense for
> waiters. However in wake paths we have to decrement the counter nwake
> times (per each call t
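The wake-side bookkeeping being described would then look something like
this, with the counter given back once per task taken off the chain
(hb_waiters_dec() is again an illustrative name):

static inline void hb_waiters_dec(struct futex_hash_bucket *hb)
{
	/*
	 * Called once for every futex_q removed from hb->chain, e.g. from
	 * the unqueue path, so waking N tasks performs N decrements and
	 * mirrors the N increments those waiters did when queueing.
	 */
	atomic_dec(&hb->waiters);
}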
On Thu, 2013-12-19 at 15:53 -0800, Linus Torvalds wrote:
> On Thu, Dec 19, 2013 at 3:42 PM, Davidlohr Bueso wrote:
> >
> >> And it can look at
> >> the wait-list for that - but you want to close the race between the
> >> entry actually getting added to the list using this counter. But the
> >> place you increment the new counter is the same place as you take the
On Thu, 2013-12-19 at 16:04 -0800, Linus Torvalds wrote:
> On Thu, Dec 19, 2013 at 3:53 PM, Linus Torvalds
> wrote:
> >
> > - in queue_lock(), immediately before getting the spinlock (which
> > will do the SAME ATOMIC INCREMENT, except it's just doing it on a
> > different member of the structure, namely the spinlock head)
On Thu, Dec 19, 2013 at 3:53 PM, Linus Torvalds
wrote:
>
> So how could we miss this? Explain to me what the separate counter
> does that isn't done by the spinlock head counter.
Hmm. Trying to answer this myself by looking at the code. And I just
realized that I described things wrong ("spin_contended()"
On Thu, Dec 19, 2013 at 3:53 PM, Linus Torvalds
wrote:
>
> - in queue_lock(), immediately before getting the spinlock (which
> will do the SAME ATOMIC INCREMENT, except it's just doing it on a
> different member of the structure, namely the spinlock head)
Ok, so there's the "q->lock_ptr = &hb->lock"
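Putting the pieces together, the queueing side being walked through here
would look roughly like the sketch below. struct futex_q is reduced to the
fields that matter, and the hash-bucket lookup is assumed to have happened
already; the counter bump sits immediately before the spin_lock(), i.e. at
the same spot where the lock's own atomic increment happens.

struct futex_q {
	struct plist_node list;
	spinlock_t *lock_ptr;
	/* key, task, ... omitted */
};

static struct futex_hash_bucket *queue_lock(struct futex_q *q,
					    struct futex_hash_bucket *hb)
{
	q->lock_ptr = &hb->lock;

	hb_waiters_inc(hb);	/* advertise the waiter first...          */
	spin_lock(&hb->lock);	/* ...right before the lock's own atomic  */
	return hb;
}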
On Thu, Dec 19, 2013 at 3:42 PM, Davidlohr Bueso wrote:
>
>> And it can look at
>> the wait-list for that - but you want to close the race between the
>> entry actually getting added to the list using this counter. But the
>> place you increment the new counter is the same place as you take the
>>
On Thu, 2013-12-19 at 15:14 -0800, Linus Torvalds wrote:
> On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
> >
> > - increment the counter at queue_lock() as we always end up calling
> > queue_me() which adds the element to the list. Upon any error,
> > queue_unlock() is called for housekeeping, for which we decrement
On Thu, Dec 19, 2013 at 10:45 AM, Davidlohr Bueso wrote:
>
> - increment the counter at queue_lock() as we always end up calling
> queue_me() which adds the element to the list. Upon any error,
> queue_unlock() is called for housekeeping, for which we decrement
> to match the increment done in queue_lock().
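And the matching housekeeping on the error path, with queue_unlock() undoing
the increment done in queue_lock(); a sketch with the illustrative helpers
from above:

static void queue_unlock(struct futex_hash_bucket *hb)
{
	spin_unlock(&hb->lock);
	/*
	 * The caller bailed out before queue_me() ever linked its futex_q
	 * into hb->chain, so drop the count taken in queue_lock() and keep
	 * the waiter counter in step with the chain contents.
	 */
	hb_waiters_dec(hb);
}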
On Thu, 2013-12-19 at 20:42 +0100, Ingo Molnar wrote:
> * Davidlohr Bueso wrote:
>
> > On Thu, 2013-12-19 at 20:25 +0100, Ingo Molnar wrote:
> > > * Davidlohr Bueso wrote:
> > >
> > > > Signed-off-by: Davidlohr Bueso
> > > > Signed-off-by: Waiman Long
> > > > Signed-off-by: Jason Low
> > >
* Davidlohr Bueso wrote:
> On Thu, 2013-12-19 at 20:25 +0100, Ingo Molnar wrote:
> > * Davidlohr Bueso wrote:
> >
> > > Signed-off-by: Davidlohr Bueso
> > > Signed-off-by: Waiman Long
> > > Signed-off-by: Jason Low
> >
> > So, that's not a valid SOB sequence either, the person sending me a
> > patch should be the last person in the SOB chain - if you want to
On Thu, 2013-12-19 at 20:25 +0100, Ingo Molnar wrote:
> * Davidlohr Bueso wrote:
>
> > Signed-off-by: Davidlohr Bueso
> > Signed-off-by: Waiman Long
> > Signed-off-by: Jason Low
>
> So, that's not a valid SOB sequence either, the person sending me a
> patch should be the last person in the SOB chain - if you want to
* Davidlohr Bueso wrote:
> Signed-off-by: Davidlohr Bueso
> Signed-off-by: Waiman Long
> Signed-off-by: Jason Low
So, that's not a valid SOB sequence either, the person sending me a
patch should be the last person in the SOB chain - if you want to
credit them please add a Reviewed-by or ad
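For what it's worth, one reading of the rule being stated here is: the
submitter's Signed-off-by goes last, and contributors who did not handle the
patch are credited through review tags rather than extra SOBs, e.g.:

	Reviewed-by: Waiman Long
	Reviewed-by: Jason Low
	Signed-off-by: Davidlohr Bueso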
In futex_wake() there is clearly no point in taking the hb->lock if
we know beforehand that there are no tasks to be woken. This comes
at the smaller cost of doing some atomic operations to keep track of
the list's size. Specifically, increment the counter when an element is
added to the list, and decrement it when it is removed.
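The wake-side shortcut the changelog describes amounts to something like the
following. This is a sketch only: the key lookup and the actual wake loop are
elided, and the real function handles bitset matching and error paths not
shown here.

static int futex_wake(u32 __user *uaddr, unsigned int flags, int nr_wake,
		      u32 bitset)
{
	struct futex_hash_bucket *hb;
	union futex_key key = FUTEX_KEY_INIT;
	int ret = 0;

	/* ... get_futex_key() lookup elided ... */
	hb = hash_futex(&key);

	/*
	 * Cheap check first: if nobody is queued on this bucket and nobody
	 * is on the way in (the counter / lock-contention check discussed
	 * in this thread), there is nothing to wake and no reason to take
	 * and bounce hb->lock around at all.
	 */
	if (!hb_waiters_pending(hb))
		return ret;

	spin_lock(&hb->lock);
	/* ... walk hb->chain and wake up to nr_wake matching waiters ... */
	spin_unlock(&hb->lock);
	return ret;
}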