On Tue, Jun 11, 2013 at 12:20:32PM -0400, Steven Rostedt wrote:
On Tue, 2013-06-11 at 11:57 -0400, Waiman Long wrote:
This is an interesting patch and I think it is useful for workloads that
run on systems with a large number of CPUs.
I would say it is definitely a fun academic patch,
On Tue, Jun 11, 2013 at 11:57:14AM -0400, Waiman Long wrote:
On 06/09/2013 03:36 PM, Paul E. McKenney wrote:
Breaking up locks is better than implementing high-contention locks, but
if we must have high-contention locks, why not make them automatically
switch between light-weight ticket locks
On Tue, Jun 11, 2013 at 11:22:45AM -0400, Steven Rostedt wrote:
On Tue, 2013-06-11 at 03:14 -0700, Paul E. McKenney wrote:
Off-topic: although I have been in this community for several years,
I am not exactly clear on this problem.
1) In the general case, which lock is the most
On Tue, Jun 11, 2013 at 11:10:30PM +0800, Lai Jiangshan wrote:
On Tue, Jun 11, 2013 at 10:48 PM, Lai Jiangshan eag0...@gmail.com wrote:
On Mon, Jun 10, 2013 at 3:36 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
Breaking up locks is better than implementing high-contention locks,
On Tue, 2013-06-11 at 09:36 -0700, Paul E. McKenney wrote:
I am a bit concerned about the size of the head queue table itself.
RHEL6, for example, defined CONFIG_NR_CPUS to be 4096, which means
a table size of 256. Maybe it is better to dynamically allocate the
table at init time
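Waiman's sizing concern above (4096 CPUs implying a 256-entry table) suggests computing the table size at boot instead of compiling it in. Below is a minimal sketch of such an init-time sizing rule; the function name and the 16-CPUs-per-queue ratio are illustrative assumptions inferred from the 4096 to 256 example, not taken from the patch:

```c
/*
 * Illustrative sketch of init-time sizing for the queue-head table:
 * one queue head per 16 possible CPUs, with a floor of one entry.
 * The /16 ratio is an assumption inferred from the 4096 -> 256
 * example in the discussion above.
 */
static unsigned int tkt_q_table_size(unsigned int nr_cpu_ids)
{
	unsigned int n = (nr_cpu_ids + 15) / 16;	/* round up */

	return n < 1 ? 1 : n;	/* never size the table to zero */
}
```

At init time the kernel could then allocate `tkt_q_table_size(nr_cpu_ids)` entries instead of a fixed NR_CPUS-derived array; the exact allocation call is left open here.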
On Tue, Jun 11, 2013 at 9:48 AM, Paul E. McKenney
paul...@linux.vnet.ibm.com wrote:
Another approach is to permanently associate queues with each lock,
but that increases the size of the lock -- something that has raised
concerns in the past. But if adding 32 bytes to each ticketlock was OK,
On 06/11/2013 12:20 PM, Steven Rostedt wrote:
diff --git a/arch/x86/include/asm/spinlock_types.h b/arch/x86/include/asm/spinlock_types.h
index ad0ad07..cdaefdd 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -7,12 +7,18 @@
On Tue, 2013-06-11 at 10:53 -0700, Davidlohr Bueso wrote:
I hate to be the bearer of bad news but I got some pretty bad aim7
performance numbers with this patch on an 8-socket (80 core) 256 Gb
memory DL980 box against a vanilla 3.10-rc4 kernel:
This doesn't surprise me as the spin lock now
On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
+bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc)
+{
+	if (unlikely(inc.head & 0x1)) {
+
+		/* This lock has a queue, so go spin on the queue. */
+		if (tkt_q_do_spin(ap, inc))
+
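The `inc.head & 0x1` test above works because tickets advance in steps of two, leaving the low-order bit of `->head` free as a "this lock has a queue" flag. Here is a standalone sketch of that bit-stealing encoding; the helper names and the queue-index encoding are illustrative assumptions, not the patch's actual layout:

```c
#include <stdbool.h>

/*
 * Sketch of the flag encoding behind the inc.head & 0x1 test above:
 * tickets advance in steps of 2, so bit 0 of ->head is free.  When
 * bit 0 is set, the remaining bits no longer hold a ticket but (in
 * this illustrative scheme) identify the queue spinners should use.
 */
#define TKT_Q_FLAG	0x1	/* low-order "lock has a queue" bit */

static bool head_has_queue(unsigned short head)
{
	return head & TKT_Q_FLAG;	/* queued mode? */
}

static unsigned short head_encode_queue(unsigned short qidx)
{
	return (unsigned short)((qidx << 1) | TKT_Q_FLAG);
}

static unsigned short head_decode_queue(unsigned short head)
{
	return head >> 1;	/* recover the queue index */
}
```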
On Tue, 2013-06-11 at 13:32 -0700, Paul E. McKenney wrote:
/*
 * This lock has lots of spinners, but no queue.  Go create
 * a queue to spin on.
 *
 * In the common case, only the single task that
 * sees
On Tue, 2013-06-11 at 12:49 -0700, Paul E. McKenney wrote:
+config TICKET_LOCK_QUEUED
+	bool "Dynamically switch between ticket and queued locking"
+	depends on SMP
+	default n
+	---help---
+	  Enable dynamic switching between ticketlock and queued locking
+	  on a
On Wed, 2013-06-12 at 09:19 +0800, Lai Jiangshan wrote:
+
+/*
+ * Hand the lock off to the first CPU on the queue.
+ */
+void tkt_q_do_wake(arch_spinlock_t *lock)
+{
+	struct tkt_q_head *tqhp;
+	struct tkt_q *tqp;
+
+	/* If the queue is still being set up,
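The handoff in `tkt_q_do_wake()` above signals exactly one waiter on the queue. A simplified user-space model of that idea, where each waiter spins on its own flag so the wake touches only that waiter's cache line (the types and names are illustrative, not the kernel's):

```c
#include <stddef.h>

/*
 * Minimal model of handing a lock to the first waiter on a queue.
 * Each waiter spins on its own ->ready flag, so the handoff writes
 * only one waiter's cache line.  Illustrative names, not the patch's.
 */
struct waiter {
	struct waiter *next;
	volatile int ready;	/* the waiter's private spin target */
};

struct wait_queue {
	struct waiter *head;
};

/* Dequeue the first waiter and signal it; return it (NULL if none). */
static struct waiter *queue_wake_first(struct wait_queue *q)
{
	struct waiter *w = q->head;

	if (w != NULL) {
		q->head = w->next;
		w->ready = 1;	/* the waiter's spin loop now exits */
	}
	return w;
}
```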
On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt wrote:
>
> OK, I haven't found an issue here yet, but youss are beiing trickssy! We
> don't like trickssy, and we must find preiouss!!!
.. and I personally have my usual reservations. I absolutely hate
papering over scalability issues, and
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> +#else /* #ifndef CONFIG_TICKET_LOCK_QUEUED */
> +
> +bool tkt_spin_pass(arch_spinlock_t *ap, struct __raw_tickets inc);
> +
> +static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
> +{
> + register struct
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> +/*
> + * Return a pointer to the queue header associated with the specified lock,
> + * or return NULL if there is no queue for the lock or if the lock's queue
> + * is in transition.
> + */
> +static struct tkt_q_head
On Mon, 10 Jun 2013, Steven Rostedt wrote:
> On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> > +#ifdef __BIG_ENDIAN__
>
> Is there such a thing as a BIG_ENDIAN x86 box? This is in
> arch/x86/include/asm/spinlock_types.h
That's just a habit for people who have been forced to deal
On Sun, 2013-06-09 at 12:36 -0700, Paul E. McKenney wrote:
> --- a/arch/x86/include/asm/spinlock_types.h
> +++ b/arch/x86/include/asm/spinlock_types.h
> @@ -7,12 +7,18 @@
>
> #include <linux/types.h>
>
> -#if (CONFIG_NR_CPUS < 256)
> +#if (CONFIG_NR_CPUS < 128)
> typedef u8 __ticket_t;
> typedef u16
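The cutoff halving from 256 to 128 in the diff above follows from the stolen flag bit: with one bit of each ticket reserved, an 8-bit `__ticket_t` can distinguish only 2^8 / 2 = 128 waiters. A quick check of that arithmetic (the helper is illustrative, not from the patch):

```c
/*
 * Why the NR_CPUS cutoff drops from 256 to 128: reserving flag_bits
 * low-order bits of a ticket_bits-wide ticket halves (per bit) the
 * number of distinct waiters the ticket counter can represent.
 */
static unsigned int max_waiters(unsigned int ticket_bits,
				unsigned int flag_bits)
{
	return (1u << ticket_bits) >> flag_bits;
}
```

With `ticket_bits = 8` and one flag bit this yields 128, matching the new `CONFIG_NR_CPUS < 128` test for the u8 `__ticket_t` case.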
Breaking up locks is better than implementing high-contention locks, but
if we must have high-contention locks, why not make them automatically
switch between light-weight ticket locks at low contention and queued
locks at high contention?
This commit therefore allows ticket locks to
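The switching policy described above can be modeled in a few lines: treat the distance between tail and head (in steps of two, since the low bit is reserved) as the waiter count, and convert to queued mode once it crosses a threshold. The threshold value and helper names below are illustrative assumptions, not the patch's actual heuristics:

```c
#include <stdbool.h>

/* Illustrative: number of waiters before converting to queued mode. */
#define TKT_Q_SWITCH_THRESHOLD	8

/*
 * Waiters implied by a ticket pair.  Tickets step by 2 (bit 0 is the
 * queued flag), and the unsigned-short subtraction handles wraparound.
 */
static unsigned int tkt_waiter_count(unsigned short head,
				     unsigned short tail)
{
	return (unsigned short)(tail - head) >> 1;
}

/* Should an arriving spinner convert this lock to queued mode? */
static bool tkt_should_queue(unsigned short head, unsigned short tail)
{
	return tkt_waiter_count(head, tail) >= TKT_Q_SWITCH_THRESHOLD;
}
```

Under this model a lightly contended lock never pays the queue-setup cost, while a convoyed one switches over as soon as enough tickets are outstanding.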