Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-19 Thread Grant Grundler
On Thu, Jan 20, 2005 at 08:43:30AM +1100, Paul Mackerras wrote:
> I suggest read_poll(), write_poll(), spin_poll(),...

Erm...those names sound way too much like existing interfaces.
Perhaps check_read_lock()/check_write_lock() ?

If they don't get used too much, the longer name should be ok.

grant


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-19 Thread Paul Mackerras
Ingo Molnar writes:

> * Peter Chubb <[EMAIL PROTECTED]> wrote:
> 
> > >> Here's a patch that adds the missing read_is_locked() and
> > >> write_is_locked() macros for IA64.  When combined with Ingo's
> > >> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
> > >> 
> > >> However, I feel these macros are misnamed: read_is_locked() returns
> > >> true if the lock is held for writing; write_is_locked() returns
> > >> true if the lock is held for reading or writing.
> > 
> > Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"
> > 
> > Fail, surely?
> 
> yeah ... and with that i proved beyond doubt that the naming is indeed
> unintuitive :-)

Yes.  Intuitively read_is_locked() is true when someone has done a
read_lock and write_is_locked() is true when someone has done a write
lock.

I suggest read_poll(), write_poll(), spin_poll(), which are like
{read,write,spin}_trylock but don't do the atomic op to get the lock,
that is, they don't change the lock value but return true if the
trylock would succeed, assuming no other cpu takes the lock in the
meantime.
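
A minimal sketch of what such poll primitives might look like, assuming
the i386 conventions quoted elsewhere in this thread (spinlock byte
positive when free; rwlock counter starting at RW_LOCK_BIAS, readers
subtracting 1 and writers subtracting RW_LOCK_BIAS) - the names are
just the ones proposed above, not an actual implementation:

	#define spin_poll(x)	(*(volatile signed char *)(&(x)->slock) > 0)
	#define read_poll(x)	((signed int)(x)->lock > 0)
	#define write_poll(x)	((x)->lock == RW_LOCK_BIAS)

each is a plain load plus compare, so polling generates no bus traffic
until the lock's cacheline changes.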

Regards,
Paul.


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-19 Thread Ingo Molnar

* Peter Chubb <[EMAIL PROTECTED]> wrote:

> >> Here's a patch that adds the missing read_is_locked() and
> >> write_is_locked() macros for IA64.  When combined with Ingo's
> >> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
> >> 
> >> However, I feel these macros are misnamed: read_is_locked() returns
> >> true if the lock is held for writing; write_is_locked() returns
> >> true if the lock is held for reading or writing.
> 
> Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"
> 
> Fail, surely?

yeah ... and with that i proved beyond doubt that the naming is indeed
unintuitive :-)

Ingo


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-19 Thread Peter Chubb
> "Ingo" == Ingo Molnar <[EMAIL PROTECTED]> writes:

Ingo> * Peter Chubb <[EMAIL PROTECTED]> wrote:

>> Here's a patch that adds the missing read_is_locked() and
>> write_is_locked() macros for IA64.  When combined with Ingo's
>> patch, I can boot an SMP kernel with CONFIG_PREEMPT on.
>> 
>> However, I feel these macros are misnamed: read_is_locked() returns
>> true if the lock is held for writing; write_is_locked() returns
>> true if the lock is held for reading or writing.

Ingo> well, 'read_is_locked()' means: "will a read_lock() succeed"

Fail, surely?

-- 
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
The technical we do immediately,  the political takes *forever*


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-19 Thread Ingo Molnar

* Peter Chubb <[EMAIL PROTECTED]> wrote:

> Here's a patch that adds the missing read_is_locked() and
> write_is_locked() macros for IA64.  When combined with Ingo's patch, I
> can boot an SMP kernel with CONFIG_PREEMPT on.
> 
> However, I feel these macros are misnamed: read_is_locked() returns
> true if the lock is held for writing; write_is_locked() returns true
> if the lock is held for reading or writing.

well, 'read_is_locked()' means: "will a read_lock() succeed" [assuming
no races]. Should name it read_trylock_test()/write_trylock_test()
perhaps?

Ingo


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-18 Thread Peter Chubb


Here's a patch that adds the missing read_is_locked() and
write_is_locked() macros for IA64.  When combined with Ingo's patch, I
can boot an SMP kernel with CONFIG_PREEMPT on.

However, I feel these macros are misnamed: read_is_locked() returns true if
the lock is held for writing; write_is_locked() returns true if the
lock is held for reading or writing.
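
For reference, a sketch of the ia64 rwlock layout these tests rely on
(from my reading of asm-ia64/spinlock.h of this vintage - treat it as
illustrative): readers count in the low 31 bits and the write lock is
the sign bit,

	typedef struct {
		volatile unsigned int read_counter	: 31;
		volatile unsigned int write_lock	:  1;
	} rwlock_t;

so "< 0" below is true exactly when the write bit is set, and "!= 0"
is true when any reader or writer holds the lock.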

Signed-off-by: Peter Chubb <[EMAIL PROTECTED]>

Index: linux-2.6-bklock/include/asm-ia64/spinlock.h
===================================================================
--- linux-2.6-bklock.orig/include/asm-ia64/spinlock.h	2005-01-18 13:46:08.138077857 +1100
+++ linux-2.6-bklock/include/asm-ia64/spinlock.h	2005-01-19 08:58:59.303821753 +1100
@@ -126,8 +126,20 @@
 #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
 
 #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
+
 #define rwlock_is_locked(x)	(*(volatile int *) (x) != 0)
 
+/* read_is_locked - would read_trylock() fail?
+ * @lock: the rwlock in question.
+ */
+#define read_is_locked(x)   (*(volatile int *) (x) < 0)
+
+/**
+ * write_is_locked - would write_trylock() fail?
+ * @lock: the rwlock in question.
+ */
+#define write_is_locked(x) (*(volatile int *) (x) != 0)
+
 #define _raw_read_lock(rw)							\
 do {										\
	rwlock_t *__read_lock_ptr = (rw);					\

-- 
Dr Peter Chubb  http://www.gelato.unsw.edu.au  peterc AT gelato.unsw.edu.au
The technical we do immediately,  the political takes *forever*


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-17 Thread Chris Wedgwood
On Tue, Jan 18, 2005 at 03:28:58PM +1100, Darren Williams wrote:

> On top of Ingo's patch I attempt a solution that failed:

> +#define read_is_locked(x)(*(volatile int *) (x) > 0)
> +#define write_is_locked(x)   (*(volatile int *) (x) < 0)

how about something like:

#define read_is_locked(x)(*(volatile int *) (x) != 0)
#define write_is_locked(x)   (*(volatile int *) (x) & (1<<31))

I'm not masking the write-bit for read_is_locked here; I'm not sure if
we should?
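
For what it's worth, a masked variant might look like this (sketch
only, untested):

#define read_is_locked(x)	((*(volatile int *) (x) & ~(1<<31)) != 0)

which would answer "are readers holding it" rather than "is anyone
holding it" - which of those BUILD_LOCK_OPS actually wants is the
naming question from earlier in the thread.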


   --cw


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-17 Thread Darren Williams

On Tue, 18 Jan 2005, Darren Williams wrote:

> Hi Ingo
> 
> On Mon, 17 Jan 2005, Ingo Molnar wrote:
> 
> > 
> > * Andrew Morton <[EMAIL PROTECTED]> wrote:
> > 
> > > > +BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
> > > > +BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
> > > > +BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
> > > 
> > > If you replace the last line with
> > > 
> > >   BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
> > > 
> > > does it help?
> > 
> > this is not enough - the proper solution should be the patch below,
> > which i sent earlier today as a reply to Paul Mackerras' comments.
> > 
> > Ingo
> > 
> > --
> > the first fix is that there was no compiler warning on x86 because it
> > uses macros - i fixed this by changing the spinlock field to be
> > '->slock'. (we could also use inline functions to get type protection, i
> > chose this solution because it was the easiest to do.)
> > 
> > the second fix is to split rwlock_is_locked() into two functions:
> > 
> >  +/**
> >  + * read_is_locked - would read_trylock() fail?
> >  + * @lock: the rwlock in question.
> >  + */
> >  +#define read_is_locked(x) (atomic_read((atomic_t *)&(x)->lock) <= 0)
> >  +
> >  +/**
> >  + * write_is_locked - would write_trylock() fail?
> >  + * @lock: the rwlock in question.
> >  + */
> >  +#define write_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
> > 
> > this canonical naming of them also enabled the elimination of the newly
> > added 'is_locked_fn' argument to the BUILD_LOCK_OPS macro.
> > 
> > the third change was to change the other user of rwlock_is_locked(), and
> > to put a migration helper there: architectures that dont have
> > read/write_is_locked defined yet will get a #warning message but the
> > build will succeed. (except if PREEMPT is enabled - there we really
> > need them.)
> > 
> > compile and boot-tested on x86, on SMP and UP, PREEMPT and !PREEMPT. 
> > Non-x86 architectures should work fine, except PREEMPT+SMP builds which
> > will need the read_is_locked()/write_is_locked() definitions.
> > !PREEMPT+SMP builds will work fine and will produce a #warning.
> > 
> > Ingo
> This may fix some archs however ia64 is still broken, with:
> 
> kernel/built-in.o(.spinlock.text+0x8b2): In function `sched_init_smp':
> kernel/sched.c:650: undefined reference to `read_is_locked'
> kernel/built-in.o(.spinlock.text+0xa52): In function `sched_init':
> kernel/sched.c:687: undefined reference to `read_is_locked'
> kernel/built-in.o(.spinlock.text+0xcb2): In function `schedule':
> include/asm/bitops.h:279: undefined reference to `write_is_locked'
> kernel/built-in.o(.spinlock.text+0xe72): In function `schedule':
> include/linux/sched.h:1122: undefined reference to `write_is_locked'
> make: *** [.tmp_vmlinux1] Error 1
> 
> include/asm-ia64/spinflock.h needs to define:
>  read_is_locked(x)
>  write_is_locked(x)
> 
> someone who knows the locking code will need to fill in
> the blanks.
> 

On top of Ingo's patch I attempt a solution that failed:
include/asm-ia64/spinlock.h: 1.23 1.24 dsw 05/01/18 10:22:35 (modified, needs delta)

@@ -126,7 +126,8 @@
 #define RW_LOCK_UNLOCKED (rwlock_t) { 0, 0 }
 
 #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
-#define rwlock_is_locked(x)	(*(volatile int *) (x) != 0)
+#define read_is_locked(x)  (*(volatile int *) (x) > 0)
+#define write_is_locked(x) (*(volatile int *) (x) < 0)
 
 #define _raw_read_lock(rw)							\
 do {


However this breaks on the simulator with:

Freeing unused kernel memory: 192kB freed
INIT: version 2.78 booting
kernel BUG at kernel/exit.c:870


> Darren
> 
> > 
> > Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> > 
> > --- linux/kernel/spinlock.c.orig
> > +++ linux/kernel/spinlock.c
> > @@ -173,7 +173,7 @@ EXPORT_SYMBOL(_write_lock);
> >   * (We do this in a function because inlining it would be excessive.)
> >   */
> >  
> > -#define BUILD_LOCK_OPS(op, locktype, is_locked_fn) \
> > +#define BUILD_LOCK_OPS(op, locktype)				\
> >  void __lockfunc _##op##_lock(locktype *lock)			\
> >  {									\
> > preempt_disable();  \
> > @@ -183,7 +183,7 @@ void __lockfunc _##op##_lock(locktype *l
> > preempt_enable();   \
> > if (!(lock)->break_lock)\
> > (lock)->break_lock = 1; \
> > -   while (is_locked_fn(lock) && (lock)->break_lock)\
> > +   while (op##_is_locked(lock) && (lock)->break_lock)  \
> > cpu_relax();\
> > preempt_disable();  

Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-17 Thread Darren Williams
Hi Ingo

On Mon, 17 Jan 2005, Ingo Molnar wrote:

> 
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
> 
> > > +BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
> > > +BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
> > > +BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
> > 
> > If you replace the last line with
> > 
> > BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
> > 
> > does it help?
> 
> this is not enough - the proper solution should be the patch below,
> which i sent earlier today as a reply to Paul Mackerras' comments.
> 
>   Ingo
> 
> --
> the first fix is that there was no compiler warning on x86 because it
> uses macros - i fixed this by changing the spinlock field to be
> '->slock'. (we could also use inline functions to get type protection, i
> chose this solution because it was the easiest to do.)
> 
> the second fix is to split rwlock_is_locked() into two functions:
> 
>  +/**
>  + * read_is_locked - would read_trylock() fail?
>  + * @lock: the rwlock in question.
>  + */
>  +#define read_is_locked(x) (atomic_read((atomic_t *)&(x)->lock) <= 0)
>  +
>  +/**
>  + * write_is_locked - would write_trylock() fail?
>  + * @lock: the rwlock in question.
>  + */
>  +#define write_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
> 
> this canonical naming of them also enabled the elimination of the newly
> added 'is_locked_fn' argument to the BUILD_LOCK_OPS macro.
> 
> the third change was to change the other user of rwlock_is_locked(), and
> to put a migration helper there: architectures that dont have
> read/write_is_locked defined yet will get a #warning message but the
> build will succeed. (except if PREEMPT is enabled - there we really
> need them.)
> 
> compile and boot-tested on x86, on SMP and UP, PREEMPT and !PREEMPT. 
> Non-x86 architectures should work fine, except PREEMPT+SMP builds which
> will need the read_is_locked()/write_is_locked() definitions.
> !PREEMPT+SMP builds will work fine and will produce a #warning.
> 
>   Ingo
This may fix some archs however ia64 is still broken, with:

kernel/built-in.o(.spinlock.text+0x8b2): In function `sched_init_smp':
kernel/sched.c:650: undefined reference to `read_is_locked'
kernel/built-in.o(.spinlock.text+0xa52): In function `sched_init':
kernel/sched.c:687: undefined reference to `read_is_locked'
kernel/built-in.o(.spinlock.text+0xcb2): In function `schedule':
include/asm/bitops.h:279: undefined reference to `write_is_locked'
kernel/built-in.o(.spinlock.text+0xe72): In function `schedule':
include/linux/sched.h:1122: undefined reference to `write_is_locked'
make: *** [.tmp_vmlinux1] Error 1

include/asm-ia64/spinflock.h needs to define:
 read_is_locked(x)
 write_is_locked(x)

someone who knows the locking code will need to fill in
the blanks.

Darren

> 
> Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
> 
> --- linux/kernel/spinlock.c.orig
> +++ linux/kernel/spinlock.c
> @@ -173,7 +173,7 @@ EXPORT_SYMBOL(_write_lock);
>   * (We do this in a function because inlining it would be excessive.)
>   */
>  
> -#define BUILD_LOCK_OPS(op, locktype, is_locked_fn)   \
> +#define BUILD_LOCK_OPS(op, locktype) \
>  void __lockfunc _##op##_lock(locktype *lock) \
>  {\
>   preempt_disable();  \
> @@ -183,7 +183,7 @@ void __lockfunc _##op##_lock(locktype *l
>   preempt_enable();   \
>   if (!(lock)->break_lock)\
>   (lock)->break_lock = 1; \
> - while (is_locked_fn(lock) && (lock)->break_lock)\
> + while (op##_is_locked(lock) && (lock)->break_lock)  \
>   cpu_relax();\
>   preempt_disable();  \
>   }   \
> @@ -205,7 +205,7 @@ unsigned long __lockfunc _##op##_lock_ir
>   preempt_enable();   \
>   if (!(lock)->break_lock)\
>   (lock)->break_lock = 1; \
> - while (is_locked_fn(lock) && (lock)->break_lock)\
> + while (op##_is_locked(lock) && (lock)->break_lock)  \
>   cpu_relax();\
>   preempt_disable();  \
>   }   \
> @@ -246,9 +246,9 @@ EXPORT_SYMBOL(_##op##_lock_bh)
>   * _[spin|read|write]_lock_irqsave()
>   * _[spin|read|write]_lock_bh()
>   */
> -BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
> -BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
> -BUILD_LOCK_OPS(write, rwlock_t, 

Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-17 Thread Ingo Molnar

* Andrew Morton <[EMAIL PROTECTED]> wrote:

> > +BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
> > +BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
> > +BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
> 
> If you replace the last line with
> 
>   BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
> 
> does it help?

this is not enough - the proper solution should be the patch below,
which i sent earlier today as a reply to Paul Mackerras' comments.

Ingo

--
the first fix is that there was no compiler warning on x86 because it
uses macros - i fixed this by changing the spinlock field to be
'->slock'. (we could also use inline functions to get type protection, i
chose this solution because it was the easiest to do.)

the second fix is to split rwlock_is_locked() into two functions:

 +/**
 + * read_is_locked - would read_trylock() fail?
 + * @lock: the rwlock in question.
 + */
 +#define read_is_locked(x) (atomic_read((atomic_t *)&(x)->lock) <= 0)
 +
 +/**
 + * write_is_locked - would write_trylock() fail?
 + * @lock: the rwlock in question.
 + */
 +#define write_is_locked(x) ((x)->lock != RW_LOCK_BIAS)

this canonical naming of them also enabled the elimination of the newly
added 'is_locked_fn' argument to the BUILD_LOCK_OPS macro.
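
Hand-expanded for op=read, the generated function now looks roughly
like this (my reconstruction - the middle of the body is elided in the
diff below, so take it as a sketch):

	void __lockfunc _read_lock(rwlock_t *lock)
	{
		preempt_disable();
		for (;;) {
			if (likely(_raw_read_trylock(lock)))
				break;
			preempt_enable();
			if (!(lock)->break_lock)
				(lock)->break_lock = 1;
			while (read_is_locked(lock) && (lock)->break_lock)
				cpu_relax();
			preempt_disable();
		}
	}

i.e. after a failed trylock the lock is polled with plain loads until
it looks free, and only then is the atomic op retried.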

the third change was to change the other user of rwlock_is_locked(), and
to put a migration helper there: architectures that dont have
read/write_is_locked defined yet will get a #warning message but the
build will succeed. (except if PREEMPT is enabled - there we really
need them.)

compile and boot-tested on x86, on SMP and UP, PREEMPT and !PREEMPT. 
Non-x86 architectures should work fine, except PREEMPT+SMP builds which
will need the read_is_locked()/write_is_locked() definitions.
!PREEMPT+SMP builds will work fine and will produce a #warning.

Ingo

Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>

--- linux/kernel/spinlock.c.orig
+++ linux/kernel/spinlock.c
@@ -173,7 +173,7 @@ EXPORT_SYMBOL(_write_lock);
  * (We do this in a function because inlining it would be excessive.)
  */
 
-#define BUILD_LOCK_OPS(op, locktype, is_locked_fn) \
+#define BUILD_LOCK_OPS(op, locktype)   \
 void __lockfunc _##op##_lock(locktype *lock)   \
 {  \
preempt_disable();  \
@@ -183,7 +183,7 @@ void __lockfunc _##op##_lock(locktype *l
preempt_enable();   \
if (!(lock)->break_lock)\
(lock)->break_lock = 1; \
-   while (is_locked_fn(lock) && (lock)->break_lock)\
+   while (op##_is_locked(lock) && (lock)->break_lock)  \
cpu_relax();\
preempt_disable();  \
}   \
@@ -205,7 +205,7 @@ unsigned long __lockfunc _##op##_lock_ir
preempt_enable();   \
if (!(lock)->break_lock)\
(lock)->break_lock = 1; \
-   while (is_locked_fn(lock) && (lock)->break_lock)\
+   while (op##_is_locked(lock) && (lock)->break_lock)  \
cpu_relax();\
preempt_disable();  \
}   \
@@ -246,9 +246,9 @@ EXPORT_SYMBOL(_##op##_lock_bh)
  * _[spin|read|write]_lock_irqsave()
  * _[spin|read|write]_lock_bh()
  */
-BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
-BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
-BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
+BUILD_LOCK_OPS(spin, spinlock_t);
+BUILD_LOCK_OPS(read, rwlock_t);
+BUILD_LOCK_OPS(write, rwlock_t);
 
 #endif /* CONFIG_PREEMPT */
 
--- linux/include/asm-i386/spinlock.h.orig
+++ linux/include/asm-i386/spinlock.h
@@ -15,7 +15,7 @@ asmlinkage int printk(const char * fmt, 
  */
 
 typedef struct {
-   volatile unsigned int lock;
+   volatile unsigned int slock;
 #ifdef CONFIG_DEBUG_SPINLOCK
unsigned magic;
 #endif
@@ -43,7 +43,7 @@ typedef struct {
  * We make no fairness assumptions. They have a cost.
  */
 
-#define spin_is_locked(x)  (*(volatile signed char *)(&(x)->lock) <= 0)
+#define spin_is_locked(x)  (*(volatile signed char *)(&(x)->slock) <= 0)
 #define spin_unlock_wait(x)do { barrier(); } while(spin_is_locked(x))
 
 #define spin_lock_string \
@@ -83,7 +83,7 @@ typedef struct {
 
 #define spin_unlock_string \
"movb $1,%0" \
-   :"=m" (lock->lock) : : "memory"
+   :"=m" 

Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-17 Thread Chris Wedgwood
On Mon, Jan 17, 2005 at 06:50:57PM +1100, Paul Mackerras wrote:

> AFAICS on i386 the lock word, although it goes to 0 when write-locked,
> can then go negative temporarily when another cpu tries to get a read
> or write lock.  So I think this should be
> 
> ((signed int)(x)->lock <= 0)

I think you're right, objdump -d shows[1] _write_lock looks
something like:

  lock subl $0x100,(%ebx)
  sete   %al
  test   %al,%al
  jne    out;
  lock addl $0x100,(%ebx)
  out:

so I guess if 2+ CPUs grab a write-lock at once it would indeed be
less than zero (the initial value is RW_LOCK_BIAS which is 0x100
in this case).
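
Annotating that (my gloss):

  lock subl $0x100,(%ebx)	# subtract the write bias from the lock word
  sete   %al			# al = 1 iff the result was zero (we got the lock)
  test   %al,%al
  jne    out			# success: leave with the bias held
  lock addl $0x100,(%ebx)	# failure: add the bias back
  out: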

> (or the equivalent using atomic_read).

on x86 aligned reads will always be atomic AFAIK?


[1] Yes, I'm stupid, trying to grok the twisty-turny-maze of headers
and what not made my head hurt and objdump -d seemed easier.



Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-16 Thread Paul Mackerras
Chris Wedgwood writes:

> +#define rwlock_is_write_locked(x) ((x)->lock == 0)

AFAICS on i386 the lock word, although it goes to 0 when write-locked,
can then go negative temporarily when another cpu tries to get a read
or write lock.  So I think this should be

((signed int)(x)->lock <= 0)

(or the equivalent using atomic_read).
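
As a worked example (using the 0x100 bias quoted elsewhere in this
thread; illustration only):

	unlocked:				lock == 0x100
	write-locked:				lock == 0x000
	write-locked + a second cpu's
	read attempt in flight:			lock == -1, briefly

so "== 0" misses the transient state while "<= 0" catches both.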

Paul.


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-16 Thread Chris Wedgwood
On Sun, Jan 16, 2005 at 11:09:22PM -0800, Andrew Morton wrote:

> If you replace the last line with
>
>   BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
>
> does it help?

Paul noticed that too so I came up with the patch below.

If it makes sense I can do the other architectures (I'm not sure == 0
is correct everywhere).  This is pretty much what I'm using now
without problems (it's either correct or it's almost correct and the
rwlock_is_write_locked hasn't thus far stuffed anything this boot).
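
For comparison, a sketch of the analogous write-locked test on ia64,
where the write lock is the sign bit of the word rather than a biased
count (so "== 0" would indeed be wrong there):

#define rwlock_is_write_locked(x)	(*(volatile int *) (x) < 0)

each architecture's lock-word layout needs its own predicate.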


---
Fix how we check for read and write rwlock_t locks.

Signed-off-by: Chris Wedgwood <[EMAIL PROTECTED]>

= include/asm-i386/spinlock.h 1.16 vs edited =
--- 1.16/include/asm-i386/spinlock.h	2005-01-07 21:43:58 -08:00
+++ edited/include/asm-i386/spinlock.h  2005-01-16 23:23:50 -08:00
@@ -187,6 +187,7 @@ typedef struct {
 #define rwlock_init(x) do { *(x) = RW_LOCK_UNLOCKED; } while(0)
 
 #define rwlock_is_locked(x) ((x)->lock != RW_LOCK_BIAS)
+#define rwlock_is_write_locked(x) ((x)->lock == 0)
 
 /*
  * On x86, we implement read-write locks as a 32-bit counter
= kernel/spinlock.c 1.4 vs edited =
--- 1.4/kernel/spinlock.c   2005-01-14 16:00:00 -08:00
+++ edited/kernel/spinlock.c	2005-01-16 23:25:11 -08:00
@@ -247,8 +247,8 @@ EXPORT_SYMBOL(_##op##_lock_bh)
  * _[spin|read|write]_lock_bh()
  */
 BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
-BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
-BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
+BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_write_locked);
+BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);
 
 #endif /* CONFIG_PREEMPT */
 


Re: Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-16 Thread Andrew Morton
Chris Wedgwood <[EMAIL PROTECTED]> wrote:
>
> Linus,
> 
> The change below is causing major problems for me on a dual K7 with
> CONFIG_PREEMPT enabled (cset -x and rebuilding makes the machine
> usable again).
> 
> ...
> +BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
> +BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
> +BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);

If you replace the last line with

BUILD_LOCK_OPS(write, rwlock_t, rwlock_is_locked);

does it help?


Horrible regression with -CURRENT from "Don't busy-lock-loop in preemptable spinlocks" patch

2005-01-16 Thread Chris Wedgwood
Linus,

The change below is causing major problems for me on a dual K7 with
CONFIG_PREEMPT enabled (cset -x and rebuilding makes the machine
usable again).

This change was merged a couple of days ago so I'm surprised nobody
else has reported this.  I tried to find where this patch came from
but I don't see it on lkml only the bk history.

Note, even with this removed I'm still seeing a few (many actually)
"BUG: using smp_processor_id() in preemptible [0001] code: xxx"
messages which I've not seen before --- that might be unrelated but I
do see *many* such messages so I'm sure I would have noticed this
before or it would have broken something earlier.

Is this specific to the k7 or do other people also see this?



Thanks,

  --cw

---

# This is a BitKeeper generated diff -Nru style patch.
#
# ChangeSet
#   2005/01/15 09:40:45-08:00 [EMAIL PROTECTED] 
#   [PATCH] Don't busy-lock-loop in preemptable spinlocks
#   
#   Paul Mackerras points out that doing the _raw_spin_trylock each time
#   through the loop will generate tons of unnecessary bus traffic.
#   Instead, after we fail to get the lock we should poll it with simple
#   loads until we see that it is clear and then retry the atomic op. 
#   Assuming a reasonable cache design, the loads won't generate any bus
#   traffic until another cpu writes to the cacheline containing the lock.
#   
#   Agreed.
#   
#   Signed-off-by: Ingo Molnar <[EMAIL PROTECTED]>
#   Signed-off-by: Linus Torvalds <[EMAIL PROTECTED]>
# 
# kernel/spinlock.c
#   2005/01/14 16:00:00-08:00 [EMAIL PROTECTED] +8 -6
#   Don't busy-lock-loop in preemptable spinlocks
# 
diff -Nru a/kernel/spinlock.c b/kernel/spinlock.c
--- a/kernel/spinlock.c 2005-01-16 21:43:15 -08:00
+++ b/kernel/spinlock.c 2005-01-16 21:43:15 -08:00
@@ -173,7 +173,7 @@
  * (We do this in a function because inlining it would be excessive.)
  */
 
-#define BUILD_LOCK_OPS(op, locktype)   \
+#define BUILD_LOCK_OPS(op, locktype, is_locked_fn) \
 void __lockfunc _##op##_lock(locktype *lock)   \
 {  \
preempt_disable();  \
@@ -183,7 +183,8 @@
preempt_enable();   \
if (!(lock)->break_lock)\
(lock)->break_lock = 1; \
-   cpu_relax();\
+   while (is_locked_fn(lock) && (lock)->break_lock)\
+   cpu_relax();\
preempt_disable();  \
}   \
 }  \
@@ -204,7 +205,8 @@
preempt_enable();   \
if (!(lock)->break_lock)\
(lock)->break_lock = 1; \
-   cpu_relax();\
+   while (is_locked_fn(lock) && (lock)->break_lock)\
+   cpu_relax();\
preempt_disable();  \
}   \
return flags;   \
@@ -244,9 +246,9 @@
  * _[spin|read|write]_lock_irqsave()
  * _[spin|read|write]_lock_bh()
  */
-BUILD_LOCK_OPS(spin, spinlock_t);
-BUILD_LOCK_OPS(read, rwlock_t);
-BUILD_LOCK_OPS(write, rwlock_t);
+BUILD_LOCK_OPS(spin, spinlock_t, spin_is_locked);
+BUILD_LOCK_OPS(read, rwlock_t, rwlock_is_locked);
+BUILD_LOCK_OPS(write, rwlock_t, spin_is_locked);
 
 #endif /* CONFIG_PREEMPT */
 