>>> On Tue, Feb 26, 2008 at 1:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> On Tue 2008-02-26 08:03:43, Gregory Haskins wrote:
>> >>> On Mon, Feb 25, 2008 at 5:03 PM, in message
>> <[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>>> On Mon, Feb 25, 2008 at 5:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>
> I believe you have _way_ too many config variables. If this can be set
> at runtime, does it need a config option, too?
Generally speaking, I think until this algorithm has an
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>> +static inline void
>> +prepare_adaptive_wait(struct rt_mutex *lock, struct adaptive_waiter
> *adaptive)
> ...
>> +#define prepare_adaptive_wait(lock, busy) {}
>
> This is evil. Use empty inline
>>> On Mon, Feb 25, 2008 at 5:57 PM, in message
<[EMAIL PROTECTED]>, Sven-Thorsten Dietrich
<[EMAIL PROTECTED]> wrote:
>
> But Greg may need to enforce it on his git tree that he mails these from
> - are you referring to anything specific in this patch?
>
That's what I don't get. I *did* checkpatch all
>>> On Mon, Feb 25, 2008 at 5:09 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> From: Peter W. Morreale <[EMAIL PROTECTED]>
>>
>> This patch adds the adaptive spin lock busywait to rtmutexes. It adds
>> a new tunable: rtmutex_timeout, which is the companion to the
>> rtlock_timeout tunable.
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> +/*
>> + * Adaptive-rtlocks will busywait when possible, and sleep only if
>> + * necessary. Note that the busyloop looks racy, and it is, but we do
>> + * not care. If we lose any races it
>>> On Mon, Feb 25, 2008 at 4:54 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> @@ -720,7 +728,8 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
>> * saved_state accordingly. If we did not get a real wakeup
>> * then we return with the saved state.
>> */
From: Peter W. Morreale <[EMAIL PROTECTED]>
Remove the redundant attempt to get the lock. While it is true that the
exit path with this patch adds an unnecessary xchg (in the event the
lock is granted without further traversal in the loop), experimentation
shows that we almost never encounter this
From: Peter W. Morreale <[EMAIL PROTECTED]>
This patch adds the adaptive spin lock busywait to rtmutexes. It adds
a new tunable: rtmutex_timeout, which is the companion to the
rtlock_timeout tunable.
Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt| 37
From: Peter W. Morreale <[EMAIL PROTECTED]>
In wakeup_next_waiter(), we take the pi_lock, and then find out whether
we have another waiter to add to the pending owner. We can reduce
contention on the pi_lock for the pending owner if we first obtain the
pointer to the next waiter outside of the
, and sleep when necessary (to avoid deadlock, etc).
This significantly improves many areas of the performance of the -rt
kernel.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt
From: Sven Dietrich <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt| 11 +++
kernel/rtmutex.c |4
kernel/rtmutex_adaptive.h | 11 +--
kernel/sysctl.c | 12
4 files changed, 36
ior
with or without the adaptive features that are added later in the series.
We add it here as a separate patch for greater review clarity on smaller
changes.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 20 +++-
1 files changed, 15 insertions(+), 5 deletions
It is redundant to wake the grantee task if it is already running.
Credit goes to Peter for the general idea.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 45 --
From: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
Add /proc/sys/kernel/lateral_steal, to allow switching on and off
equal-priority mutex stealing between threads.
Signed-off-by: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |7 ++-
kernel/sysctl.c | 14
. tasks that the
scheduler picked to run first have a logically higher priority among tasks
of the same prio). This helps to keep the system "primed" with tasks doing
useful work, and the end result is higher throughput.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
You can download this series here:
ftp://ftp.novell.com/dev/ghaskins/adaptive-locks-v2.tar.bz2
Changes since v1:
*) Rebased from 24-rt1 to 24.2-rt2
*) Dropped controversial (and likely unnecessary) printk patch
*) Dropped (internally) controversial PREEMPT_SPINLOCK_WAITERS config options
*)
Bill Huey (hui) wrote:
The might_sleep is an annotation as well as a conditional preemption
point for the regular kernel. You might want to do a schedule check
there, but it's the wrong function if memory serves me correctly. It's
reserved for things that actually are designed to sleep.
Note that
Pavel Machek wrote:
Hi!
Decorate the printk path with an "unlikely()"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 122f143..ebdaa17 100644
Paul E. McKenney wrote:
Governing the timeout by context-switch overhead sounds even better to me.
Really easy to calibrate, and short critical sections are of much shorter
duration than are a context-switch pair.
Yeah, fully agree. This is on my research "todo" list. My theory is
that the
Gregory Haskins wrote:
@@ -732,14 +741,15 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
debug_rt_mutex_print_deadlock(waiter);
- schedule_rt_mutex(lock);
+ update_current(TASK_UNINTERRUPTIBLE, &saved_state);
I have a question for everyone out there about this particular part
>>> On Thu, Feb 21, 2008 at 4:42 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
>
>> I came to the original conclusion that it wasn't originally worth it,
>> but the dbench number published say otherwise. [...]
>
> dbench is a
>>> On Thu, Feb 21, 2008 at 4:24 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> hm. Why is the ticket spinlock patch included in this patchset? It just
> skews your performance results unnecessarily. Ticket spinlocks are
> independent conceptually, they are already
>>> On Thu, Feb 21, 2008 at 11:41 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
>> +config RTLOCK_DELAY
>> +int "Default delay (in loops) for adaptive rtlocks"
>> +range 0 10
>> +depends on ADAPTIVE_RTLOCK
>
> I must say I'm not a big fan of putting such subtle
>>> On Thu, Feb 21, 2008 at 11:36 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Thursday 21 February 2008 16:27:22 Gregory Haskins wrote:
>
>> @@ -660,12 +660,12 @@ rt_spin_lock_fastlock(struct rt_mutex *lock,
>> void fastcall (*slowfn)(struct rt_mutex *lock
>>> On Thu, Feb 21, 2008 at 10:26 AM, in message
<[EMAIL PROTECTED]>, Gregory Haskins
<[EMAIL PROTECTED]> wrote:
> We have put together some data from different types of benchmarks for
> this patch series, which you can find here:
>
> ftp://ftp.novell.com/dev/ghaskins/adaptive-locks.pdf
For convenience
From: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
Add /proc/sys/kernel/lateral_steal, to allow switching on and off
equal-priority mutex stealing between threads.
Signed-off-by: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8 ++--
kernel/sysctl.c | 14
Decorate the printk path with an "unlikely()"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 122f143..ebdaa17 100644
--- a/kernel/rtmutex.c
+++ b/kernel
It is redundant to wake the grantee task if it is already running.
Credit goes to Peter for the general idea.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 23 ++-
1 files changed, 18 insertions
From: Nick Piggin <[EMAIL PROTECTED]>
Introduce ticket lock spinlocks for x86 which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec based locks on modern x86 CPUs,
however the difference is quite small
Preemptible spinlock waiters effectively bypass the benefits of a FIFO
spinlock. Since we now have FIFO spinlocks for x86 enabled, disable the
preemption feature on x86.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Nick Piggin <[EMAIL PROTECTED]>
---
arch/x86/Kconfig |1 +
We introduce a configuration variable for the feature to make it easier for
various architectures and/or configs to enable or disable it based on their
requirements.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt |9 +
kernel/spinlock.c
The Real Time patches to the Linux kernel convert the architecture
specific SMP-synchronization primitives commonly referred to as
"spinlocks" to an "RT mutex" implementation that supports a priority
inheritance protocol, and priority-ordered wait queues. The RT mutex
implementation allows tasks that
The logic is currently broken so that PREEMPT_RT disables preemptible
spinlock waiters, which is counterintuitive.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/spinlock.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/spinlock.c b/
Peter Zijlstra wrote:
On Fri, 2008-02-15 at 11:46 -0500, Gregory Haskins wrote:
but perhaps you can convince me that it is not needed?
(i.e. I am still not understanding how the timer guarantees the stability).
ok, let me try again.
So we take rq->lock, at this point we know
<[EMAIL PROTECTED]>
CC: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 106
kernel/sched_fair.c |2
2 files changed, 59 insertions(+), 49 deletions(-)
Index: linux-2.6/kernel/sched.c
>>> On Thu, Feb 14, 2008 at 1:15 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Peter wrote of:
>> the lack of rd->load_balance.
>
> Could you explain to me a bit what that means?
>
> Does this mean that the existing code would, by default (default being
> a single sched domain,
>>> On Thu, Feb 14, 2008 at 10:57 AM, in message
<[EMAIL PROTECTED]>, Peter Zijlstra <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> Here are the current patches that rework load_balance_monitor.
>
> The main reason for doing this is to eliminate the wakeups the thing
> generates,
> esp. on an idle system. The bonus is
>>> On Tue, Feb 12, 2008 at 2:22 PM, in message
<[EMAIL PROTECTED]>, Steven Rostedt
<[EMAIL PROTECTED]> wrote:
> On Tue, 12 Feb 2008, Gregory Haskins wrote:
>
>> This patch adds a new critical-section primitive pair:
>>
>> "migration_disable() and migration_enable()"
cept will be used later in the series.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
include/linux/init_task.h |1 +
include/linux/sched.h |8 +
kernel/fork.c |1 +
kernel/sched.c| 70 -
kernel/sched_rt.c |6 +++-
5 files
Hi Ingo, Steven,
I had been working on some ideas related to saving context switches in the
bottom-half mechanisms on -rt. So far, the ideas have been a flop, but a few
peripheral technologies did come out of it. This series is one such
idea that I thought might have some merit on its own. The
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/kthread.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index dcfe724..b193b47 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -170,6 +170,7 @@ void kthread_bind
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Can you send along your cpuinfo?
It happened on more than one machine, one
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Pavel
Hi Pavel,
Can you send