On 06/14/2013 09:26 PM, Benjamin Herrenschmidt wrote:
On Fri, 2013-06-14 at 14:17 -0400, Waiman Long wrote:
With some minor changes, the current patch can be modified to support
a debugging lock for 32-bit systems. For 64-bit systems, we can apply a
similar concept for debugging locks with cmpxchg_double
On Fri, 2013-06-14 at 14:17 -0400, Waiman Long wrote:
>
> With some minor changes, the current patch can be modified to support
> a debugging lock for 32-bit systems. For 64-bit systems, we can apply a
> similar concept for debugging locks with cmpxchg_double. However, for
> architecture that does n
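For illustration, a minimal userspace sketch of that idea: keep a
(lock, count) pair that is updated as a single unit, with C11 atomics
standing in for the kernel's cmpxchg_double (the names and the layout
here are hypothetical, not the patch's code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

struct lock_count {
	uint64_t lock;		/* 0 = unlocked */
	uint64_t count;		/* reference count */
};

/*
 * Bump the count only while the lock word reads as unlocked, atomically
 * across both words (a 16-byte CAS, e.g. cmpxchg16b on x86-64).
 */
static bool get_ref_if_unlocked(_Atomic struct lock_count *lc)
{
	struct lock_count old = atomic_load(lc);

	for (;;) {
		if (old.lock != 0)
			return false;	/* locked: take the slow path */
		struct lock_count new = { .lock = 0, .count = old.count + 1 };

		if (atomic_compare_exchange_weak(lc, &old, new))
			return true;	/* count bumped, lock untouched */
	}
}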
On 06/14/2013 11:37 AM, Linus Torvalds wrote:
On Fri, Jun 14, 2013 at 8:00 AM, Waiman Long wrote:
On 06/12/2013 08:59 PM, Linus Torvalds wrote:
Ho humm.. interesting. I was talking about wanting to mix atomics and
spinlocks earlier in this thread due to space constraints, and it
strikes me tha
On Fri, Jun 14, 2013 at 8:00 AM, Waiman Long wrote:
> On 06/12/2013 08:59 PM, Linus Torvalds wrote:
>>
>> Ho humm.. interesting. I was talking about wanting to mix atomics and
>> spinlocks earlier in this thread due to space constraints, and it
>> strikes me that that would actually help this case
On 06/12/2013 08:59 PM, Linus Torvalds wrote:
On Wed, Jun 12, 2013 at 5:49 PM, Al Viro wrote:
On Wed, Jun 12, 2013 at 05:38:13PM -0700, Linus Torvalds wrote:
For the particular case of dget_parent() maybe dget_parent() should
just double-check the original dentry->d_parent pointer after gettin
On Wed, Jun 12, 2013 at 5:49 PM, Al Viro wrote:
> On Wed, Jun 12, 2013 at 05:38:13PM -0700, Linus Torvalds wrote:
>>
>> For the particular case of dget_parent() maybe dget_parent() should
>> just double-check the original dentry->d_parent pointer after getting
>> the refcount on it (and if the par
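A minimal userspace sketch of that double-check, assuming a lockless
refcount that refuses to resurrect zero (hypothetical types and helpers,
not the actual fs/dcache.c code):

#include <stdatomic.h>
#include <stdbool.h>

struct dentry {
	_Atomic(struct dentry *) d_parent;
	atomic_int d_count;
};

/* Take a reference only if the count is already nonzero. */
static bool try_get_ref(struct dentry *d)
{
	int c = atomic_load(&d->d_count);

	while (c > 0)
		if (atomic_compare_exchange_weak(&d->d_count, &c, c + 1))
			return true;
	return false;	/* count hit zero: the real code takes d_lock instead */
}

struct dentry *dget_parent_sketch(struct dentry *child)
{
	for (;;) {
		struct dentry *parent = atomic_load(&child->d_parent);

		if (!try_get_ref(parent))
			continue;	/* sketch only; real code falls back to locking */
		/* Double-check: did d_parent change while we took the ref? */
		if (atomic_load(&child->d_parent) == parent)
			return parent;
		atomic_fetch_sub(&parent->d_count, 1);	/* wrong parent: undo */
	}
}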
On Wed, Jun 12, 2013 at 05:38:13PM -0700, Linus Torvalds wrote:
> On Wed, Jun 12, 2013 at 5:20 PM, Al Viro wrote:
> >
> > Actually, dget_parent() change might be broken; the thing is, the
> > assumptions
> > are more subtle than "zero -> non-zero only happens under ->d_lock". It's
> > actually "
On Wed, Jun 12, 2013 at 5:20 PM, Al Viro wrote:
>
> Actually, dget_parent() change might be broken; the thing is, the assumptions
> are more subtle than "zero -> non-zero only happens under ->d_lock". It's
> actually "new references are grabbed by somebody who's either already holding
> one on th
On Wed, Jun 12, 2013 at 05:01:19PM -0700, Linus Torvalds wrote:
> I'd actually suggest we do *not* remove any existing d_lock usage
> outside of the particular special cases we want to optimize, which at
> least from Davidlohr's profile is just dput() (which has shown up a
> lot before) and dget_pa
On Wed, Jun 12, 2013 at 4:32 PM, Al Viro wrote:
> On Wed, Jun 12, 2013 at 01:26:25PM -0700, Linus Torvalds wrote:
>>
>> However, one optimization missing from your patch is obvious in the
>> profile. "dget_parent()" also needs to be optimized - you still have
>> that as 99% of the spin-lock case.
On Wed, Jun 12, 2013 at 01:26:25PM -0700, Linus Torvalds wrote:
> For similar reasons, I think you need to still maintain the d_lock in
> d_prune_aliases etc. That's a slow-path, so the fact that we add an
> atomic sequence there doesn't much matter.
>
> However, one optimization missing from you
On Wed, 2013-06-12 at 13:26 -0700, Linus Torvalds wrote:
> On Wed, Jun 12, 2013 at 1:03 PM, Davidlohr Bueso
> wrote:
> >
> > According to him:
> >
> > "the short workload calls security functions like getpwnam(),
> > getpwuid(), getgrgid() a couple of times. These functions open
> > the /etc/pass
On Wed, Jun 12, 2013 at 1:03 PM, Davidlohr Bueso wrote:
>
> Waiman's dcache patchset was actually an attempt to address these exact
> issues: http://lkml.org/lkml/2013/5/22/716
Ok, looking at that patch-set, I think it has the same race with not
atomically getting the d_lock spinlock and d_count
On Wed, Jun 12, 2013 at 1:03 PM, Davidlohr Bueso wrote:
>
> According to him:
>
> "the short workload calls security functions like getpwnam(),
> getpwuid(), getgrgid() a couple of times. These functions open
> the /etc/passwd or /etc/group files, read their content and close the
> files.
Ahh, ok
On Wed, 2013-06-12 at 11:15 -0700, Linus Torvalds wrote:
> On Wed, Jun 12, 2013 at 10:50 AM, Davidlohr Bueso
> wrote:
> >
> > * short: is the big winner for this patch, +69% throughput improvement
> > with 100-2000 users. This makes a lot of sense since the workload spends
> > a ridiculous amount
On Wed, 2013-06-12 at 10:50 -0700, Davidlohr Bueso wrote:
> On Tue, 2013-06-11 at 14:10 -0400, Steven Rostedt wrote:
> > Perhaps short work loads have a cold cache, and the impact on cache is
> > not as drastic?
> >
> > It would be interesting to see what perf reports on these runs.
>
> After run
On Wed, Jun 12, 2013 at 10:50 AM, Davidlohr Bueso
wrote:
>
> * short: is the big winner for this patch, +69% throughput improvement
> with 100-2000 users. This makes a lot of sense since the workload spends
> a ridiculous amount of time trying to acquire the d_lock:
>
> 84.86%  1569902
On Tue, 2013-06-11 at 14:10 -0400, Steven Rostedt wrote:
> Perhaps short work loads have a cold cache, and the impact on cache is
> not as drastic?
>
> It would be interesting to see what perf reports on these runs.
After running the aim7 workloads on Paul's v3 patch (same 80 core, 8
socket box -
On Wed, Jun 12, 2013 at 10:15:49PM +0800, Lai Jiangshan wrote:
> Hi, Paul
>
> I have some questions about smp_mb(). (Searching for smp_mb() will find
> all my questions.)
>
> Thanks,
> Lai
>
> On Wed, Jun 12, 2013 at 3:49 AM, Paul E. McKenney
> wrote:
> > On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman
On Wed, Jun 12, 2013 at 07:06:53PM +0800, Lai Jiangshan wrote:
> On Wed, Jun 12, 2013 at 9:58 AM, Steven Rostedt wrote:
> > On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
> >
> >> > +
> >> > +/*
> >> > + * Hand the lock off to the first CPU on the queue.
> >> > + */
> >> > +void tkt_q_do_
Hi, Paul
I have some questions about smp_mb(). (Searching for smp_mb() will find
all my questions.)
Thanks,
Lai
On Wed, Jun 12, 2013 at 3:49 AM, Paul E. McKenney
wrote:
> On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman Long wrote:
>> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
>> >
>> >>I am a bit
On Wed, Jun 12, 2013 at 9:58 AM, Steven Rostedt wrote:
> On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
>
>> > +
>> > +/*
>> > + * Hand the lock off to the first CPU on the queue.
>> > + */
>> > +void tkt_q_do_wake(arch_spinlock_t *lock)
>> > +{
>> > + struct tkt_q_head *tqhp;
>> >
On Tue, Jun 11, 2013 at 09:58:08PM -0400, Steven Rostedt wrote:
> On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
>
> > > +
> > > +/*
> > > + * Hand the lock off to the first CPU on the queue.
> > > + */
> > > +void tkt_q_do_wake(arch_spinlock_t *lock)
> > > +{
> > > + struct tkt_q_h
On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
> > +
> > +/*
> > + * Hand the lock off to the first CPU on the queue.
> > + */
> > +void tkt_q_do_wake(arch_spinlock_t *lock)
> > +{
> > + struct tkt_q_head *tqhp;
> > + struct tkt_q *tqp;
> > +
> > + /* If the queue is sti
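A much-simplified sketch of that hand-off, ignoring the in-transition
races the comment above is concerned with (the types are cut down from
the excerpts quoted in this thread, and the wake protocol is assumed):

#include <stdatomic.h>
#include <stddef.h>

struct tkt_q {
	atomic_int queued;		/* waiter spins while this is 1 */
	_Atomic(struct tkt_q *) next;
};

struct tkt_q_head {
	_Atomic(struct tkt_q *) spin;	/* first CPU waiting in line */
};

/* Hand the lock off to the first CPU on the queue, if there is one. */
static void tkt_q_do_wake_sketch(struct tkt_q_head *tqhp)
{
	struct tkt_q *tqp = atomic_load(&tqhp->spin);

	if (tqp == NULL)
		return;		/* queue empty, or still being set up */
	atomic_store(&tqhp->spin, atomic_load(&tqp->next));
	atomic_store(&tqp->queued, 0);	/* releases the spinning waiter */
}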
On Wed, Jun 12, 2013 at 3:49 AM, Paul E. McKenney
wrote:
> On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman Long wrote:
>> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
>> >
>> >>I am a bit concerned about the size of the head queue table itself.
>> >>RHEL6, for example, had defined CONFIG_NR_CPUS
On Tue, Jun 11, 2013 at 04:56:50PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
>
> > +config TICKET_LOCK_QUEUED
> > + bool "Dynamically switch between ticket and queued locking"
> > + depends on SMP
> > + default n
> > + ---help---
> > + En
On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
> +config TICKET_LOCK_QUEUED
> + bool "Dynamically switch between ticket and queued locking"
> + depends on SMP
> + default n
> + ---help---
> + Enable dynamic switching between ticketlock and queued locking
> +
On Tue, 2013-06-11 at 13:32 -0700, Paul E. McKenney wrote:
> /*
>  * This lock has lots of spinners, but no queue. Go create
>  * a queue to spin on.
>  *
>  * In the common case, only the single task that
>  * se
On Tue, Jun 11, 2013 at 01:25:15PM -0700, Jason Low wrote:
> On Tue, Jun 11, 2013 at 12:49 PM, Paul E. McKenney
> wrote:
> > On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman Long wrote:
> >> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
> >> >
> >> >>I am a bit concerned about the size of the head
On Tue, Jun 11, 2013 at 04:09:56PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
>
> > +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
> > +{
> > + if (unlikely(inc.head & 0x1)) {
> > +
> > + /* This lock has a queue, so go
On Tue, Jun 11, 2013 at 12:49 PM, Paul E. McKenney
wrote:
> On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman Long wrote:
>> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
>> >
>> >>I am a bit concerned about the size of the head queue table itself.
>> >>RHEL6, for example, had defined CONFIG_NR_CPU
On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
> +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
> +{
> + if (unlikely(inc.head & 0x1)) {
> +
> + /* This lock has a queue, so go spin on the queue. */
> + if (tkt_q_do_spin(ap, inc))
> +
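The test quoted above hinges on stealing the low bit of the ticket head
to mean "this lock has a queue". A sketch of the two predicates involved
(simplified types; the advance-by-2 convention is an assumption that
follows from reserving the bit):

#include <stdbool.h>
#include <stdint.h>

struct raw_tickets_sketch {
	uint16_t head;	/* low bit doubles as the "queued" flag */
	uint16_t tail;
};

/* Has some CPU already converted this lock to queued mode? */
static bool lock_is_queued(struct raw_tickets_sketch inc)
{
	return inc.head & 0x1;
}

/* Ordinary ticket-lock turn check, masking off the flag bit. */
static bool our_ticket_turn(struct raw_tickets_sketch inc)
{
	/* Tickets advance by 2 so the flag bit is never disturbed. */
	return (inc.head & ~0x1) == inc.tail;
}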
On Tue, Jun 11, 2013 at 02:41:59PM -0400, Waiman Long wrote:
> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
> >
> >>I am a bit concerned about the size of the head queue table itself.
> >>RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096 which means
> >>a table size of 256. Maybe it is bett
On Tue, 2013-06-11 at 14:41 -0400, Waiman Long wrote:
> On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
> >
> >> I am a bit concerned about the size of the head queue table itself.
> >> RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096 which means
> >> a table size of 256. Maybe it is better t
On Tue, Jun 11, 2013 at 02:10:31PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 10:53 -0700, Davidlohr Bueso wrote:
>
> > I hate to be the bearer of bad news but I got some pretty bad aim7
> > performance numbers with this patch on an 8-socket (80 core) 256 Gb
> > memory DL980 box against
On 06/11/2013 12:36 PM, Paul E. McKenney wrote:
I am a bit concerned about the size of the head queue table itself.
RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096 which means
a table size of 256. Maybe it is better to dynamically allocate the
table at init time depending on the actual n
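For scale, a sketch reconstructed from the numbers quoted (CONFIG_NR_CPUS
of 4096 giving a 256-entry table, i.e. one queue head per 16 CPUs; the
divisor and the field are assumptions, not the patch's exact code):

#define NR_CPUS		4096
#define TKT_Q_NQUEUES	(NR_CPUS / 16)	/* 256 queue heads */

struct tkt_q_head {
	unsigned long ref;	/* lock this head currently serves, 0 if free */
};

static struct tkt_q_head tkt_q_heads[TKT_Q_NQUEUES];

Sized statically like this, the table tracks the compile-time
CONFIG_NR_CPUS rather than the CPUs actually present, which is the waste
the dynamic-allocation suggestion above would avoid.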
On Tue, 2013-06-11 at 14:10 -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 10:53 -0700, Davidlohr Bueso wrote:
>
> > I hate to be the bearer of bad news but I got some pretty bad aim7
> > performance numbers with this patch on an 8-socket (80 core) 256 Gb
> > memory DL980 box against a vanil
On Tue, 2013-06-11 at 10:53 -0700, Davidlohr Bueso wrote:
> I hate to be the bearer of bad news but I got some pretty bad aim7
> performance numbers with this patch on an 8-socket (80 core) 256 Gb
> memory DL980 box against a vanilla 3.10-rc4 kernel:
This doesn't surprise me as the spin lock now
On Tue, Jun 11, 2013 at 10:53:06AM -0700, Davidlohr Bueso wrote:
> On Mon, 2013-06-10 at 17:51 -0700, Linus Torvalds wrote:
> > On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
> > >
> > > OK, I haven't found a issue here yet, but youss are beiing trickssy! We
> > > don't like trickssy, and
On Mon, 2013-06-10 at 17:51 -0700, Linus Torvalds wrote:
> On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
> >
> > OK, I haven't found a issue here yet, but youss are beiing trickssy! We
> > don't like trickssy, and we must find preiouss!!!
>
> .. and I personally have my usual reserva
On Tue, Jun 11, 2013 at 01:13:53PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 09:43 -0700, Paul E. McKenney wrote:
>
> > > > I am a bit concerned about the size of the head queue table itself.
> > > > RHEL6,
> > > > for example, had defined CONFIG_NR_CPUS to be 4096 which means a table
>
On 06/11/2013 12:20 PM, Steven Rostedt wrote:
diff --git a/arch/x86/include/asm/spinlock_types.h
b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..cdaefdd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -7,12 +7,18 @@
#include <linux/types.h>
-#if (CON
On Tue, Jun 11, 2013 at 10:17:52AM -0700, Linus Torvalds wrote:
> On Tue, Jun 11, 2013 at 9:48 AM, Paul E. McKenney
> wrote:
> >
> > Another approach is to permanently associate queues with each lock,
> > but that increases the size of the lock -- something that has raised
> > concerns in the past
On Tue, Jun 11, 2013 at 01:01:55PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 09:36 -0700, Paul E. McKenney wrote:
>
> > > I am a bit concerned about the size of the head queue table itself.
> > > RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096 which means
> > > a table size of 25
On Tue, Jun 11, 2013 at 9:48 AM, Paul E. McKenney
wrote:
>
> Another approach is to permanently associate queues with each lock,
> but that increases the size of the lock -- something that has raised
> concerns in the past. But if adding 32 bytes to each ticketlock was OK,
> this simplifies thing
On Tue, 2013-06-11 at 09:43 -0700, Paul E. McKenney wrote:
> > > I am a bit concerned about the size of the head queue table itself. RHEL6,
> > > for example, had defined CONFIG_NR_CPUS to be 4096 which means a table
> > > size of 256. Maybe it is better to dynamically allocate the table at
> > >
On Tue, 2013-06-11 at 09:36 -0700, Paul E. McKenney wrote:
> > I am a bit concerned about the size of the head queue table itself.
> > RHEL6, for example, had defined CONFIG_NR_CPUS to be 4096 which means
> > a table size of 256. Maybe it is better to dynamically allocate the
> > table at init time d
On Tue, Jun 11, 2013 at 11:10:30PM +0800, Lai Jiangshan wrote:
> On Tue, Jun 11, 2013 at 10:48 PM, Lai Jiangshan wrote:
> > On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
> > wrote:
> >> Breaking up locks is better than implementing high-contention locks, but
> >> if we must have high-contenti
On Tue, Jun 11, 2013 at 11:22:45AM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 03:14 -0700, Paul E. McKenney wrote:
> >
> > > Off-topic, although I am in this community for several years,
> > > I am not exactly clear with this problem.
> > >
> > > 1) In general case, which lock is the m
On Tue, Jun 11, 2013 at 11:57:14AM -0400, Waiman Long wrote:
> On 06/09/2013 03:36 PM, Paul E. McKenney wrote:
> >Breaking up locks is better than implementing high-contention locks, but
> >if we must have high-contention locks, why not make them automatically
> >switch between light-weight ticket
On Tue, Jun 11, 2013 at 12:20:32PM -0400, Steven Rostedt wrote:
> On Tue, 2013-06-11 at 11:57 -0400, Waiman Long wrote:
>
> > This is an interesting patch and I think it is useful for workloads that
> > run on systems with a large number of CPUs.
>
> I would say it is definitely a fun academic p
On Tue, Jun 11, 2013 at 10:48:17PM +0800, Lai Jiangshan wrote:
> On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
> wrote:
> > Breaking up locks is better than implementing high-contention locks, but
> > if we must have high-contention locks, why not make them automatically
> > switch between lig
On Tue, 2013-06-11 at 11:57 -0400, Waiman Long wrote:
> This is an interesting patch and I think it is useful for workloads that
> run on systems with a large number of CPUs.
I would say it is definitely a fun academic patch, now if it is
something for a production environment remains to be seen
On 06/09/2013 03:36 PM, Paul E. McKenney wrote:
Breaking up locks is better than implementing high-contention locks, but
if we must have high-contention locks, why not make them automatically
switch between light-weight ticket locks at low contention and queued
locks at high contention?
This com
On Tue, 2013-06-11 at 03:14 -0700, Paul E. McKenney wrote:
>
> > Off-topic, although I am in this community for several years,
> > I am not exactly clear with this problem.
> >
> > 1) In general case, which lock is the most competitive in the kernel? what
> > it protects for?
> > 2) In which sp
On Tue, Jun 11, 2013 at 02:56:55AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 10, 2013 at 08:44:40PM -0400, Steven Rostedt wrote:
> > On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
[ . . . ]
> > > +bool tkt_q_do_spin(arch_spinlock_t *asp, struct __raw_tickets inc)
> > > +{
> > > + s
On Tue, Jun 11, 2013 at 10:48 PM, Lai Jiangshan wrote:
> On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
> wrote:
>> Breaking up locks is better than implementing high-contention locks, but
>> if we must have high-contention locks, why not make them automatically
>> switch between light-weight
On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
wrote:
> Breaking up locks is better than implementing high-contention locks, but
> if we must have high-contention locks, why not make them automatically
> switch between light-weight ticket locks at low contention and queued
> locks at high conte
On Tue, Jun 11, 2013 at 03:53:17PM +0800, Lai Jiangshan wrote:
> On 06/11/2013 08:51 AM, Linus Torvalds wrote:
> > On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
> >>
> >> OK, I haven't found a issue here yet, but youss are beiing trickssy! We
> >> don't like trickssy, and we must find pre
On Mon, Jun 10, 2013 at 05:51:14PM -0700, Linus Torvalds wrote:
> On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
> >
> > OK, I haven't found a issue here yet, but youss are beiing trickssy! We
> > don't like trickssy, and we must find preiouss!!!
Heh! You should see what it looks lik
On Mon, Jun 10, 2013 at 08:44:40PM -0400, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
>
> > +#else /* #ifndef CONFIG_TICKET_LOCK_QUEUED */
> > +
> > +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> > +
> > +static __always_inline void __t
On Mon, Jun 10, 2013 at 09:04:09PM -0400, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > Breaking up locks is better than implementing high-contention locks, but
> > if we must have high-contention locks, why not make them automatically
> > switch between lig
On 06/11/2013 08:51 AM, Linus Torvalds wrote:
> On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
>>
>> OK, I haven't found a issue here yet, but youss are beiing trickssy! We
>> don't like trickssy, and we must find preiouss!!!
>
> .. and I personally have my usual reservations. I absol
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> Breaking up locks is better than implementing high-contention locks, but
> if we must have high-contention locks, why not make them automatically
> switch between light-weight ticket locks at low contention and queued
> locks at high cont
On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
>
> OK, I haven't found a issue here yet, but youss are beiing trickssy! We
> don't like trickssy, and we must find preiouss!!!
.. and I personally have my usual reservations. I absolutely hate
papering over scalability issues, and histor
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> +#else /* #ifndef CONFIG_TICKET_LOCK_QUEUED */
> +
> +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> +
> +static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +{
> + register struct __raw_tick
On Mon, Jun 10, 2013 at 07:02:56PM -0400, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
>
> > +/*
> > + * Return a pointer to the queue header associated with the specified lock,
> > + * or return NULL if there is no queue for the lock or if the lock's queue
>
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> +/*
> + * Return a pointer to the queue header associated with the specified lock,
> + * or return NULL if there is no queue for the lock or if the lock's queue
> + * is in transition.
> + */
> +static struct tkt_q_head *tkt_q_find_head(
On Mon, Jun 10, 2013 at 02:35:06PM -0700, Eric Dumazet wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > Breaking up locks is better than implementing high-contention locks, but
> > if we must have high-contention locks, why not make them automatically
> > switch between light
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> Breaking up locks is better than implementing high-contention locks, but
> if we must have high-contention locks, why not make them automatically
> switch between light-weight ticket locks at low contention and queued
> locks at high cont
On Mon, Jun 10, 2013 at 05:08:25PM -0400, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> >
> > +#else /* #ifndef CONFIG_TICKET_LOCK_QUEUED */
> > +
> > +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> > +
> > +static __always_inline void
On Mon, Jun 10, 2013 at 11:01:50PM +0200, Thomas Gleixner wrote:
> On Mon, 10 Jun 2013, Steven Rostedt wrote:
> > On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > > +#ifdef __BIG_ENDIAN__
> >
> > Is there such a thing as a BIG_ENDIAN x86 box? This is in
> > arch/x86/include/asm/spinl
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
>
> +#else /* #ifndef CONFIG_TICKET_LOCK_QUEUED */
> +
> +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> +
> +static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +{
> + register struct __raw_t
On Mon, 10 Jun 2013, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > +#ifdef __BIG_ENDIAN__
>
> Is there such a thing as a BIG_ENDIAN x86 box? This is in
> arch/x86/include/asm/spinlock_types.h
That's just a habit for people who have been forced to deal wit
On Mon, Jun 10, 2013 at 04:47:58PM -0400, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
>
> > --- a/arch/x86/include/asm/spinlock_types.h
> > +++ b/arch/x86/include/asm/spinlock_types.h
> > @@ -7,12 +7,18 @@
> >
> > #include <linux/types.h>
> >
> > -#if (CONFIG_NR_CPUS <
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -7,12 +7,18 @@
>
> #include <linux/types.h>
>
> -#if (CONFIG_NR_CPUS < 256)
> +#if (CONFIG_NR_CPUS < 128)
> typedef u8 __ticket_t;
> typedef u16 __ticketpair_t;
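The halved threshold follows from the queued flag: once one bit of the
ticket head is reserved to mean "this lock has a queue", a u8 ticket can
only distinguish 128 CPUs. A hedged reading of the resulting types (the
u16/u32 branch is the stock header's, assumed unchanged by the patch):

#if (CONFIG_NR_CPUS < 128)
typedef u8  __ticket_t;		/* 7 ticket bits + 1 queued-flag bit */
typedef u16 __ticketpair_t;
#else
typedef u16 __ticket_t;		/* 15 ticket bits + 1 queued-flag bit */
typedef u32 __ticketpair_t;
#endif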