>>> On Tue, Feb 26, 2008 at 1:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> On Tue 2008-02-26 08:03:43, Gregory Haskins wrote:
>> >>> On Mon, Feb 25, 2008 at 5:03 PM, in message
>> <[EMAIL PROTECTED]>, Pavel M
>>> On Mon, Feb 25, 2008 at 5:06 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>
> I believe you have _way_ too many config variables. If this can be set
> at runtime, does it need a config option, too?
Generally speaking, I think until this algorithm has an adapti
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
>> +static inline void
>> +prepare_adaptive_wait(struct rt_mutex *lock, struct adaptive_waiter
> *adaptive)
> ...
>> +#define prepare_adaptive_wait(lock, busy) {}
>
> This is evil. Use
>>> On Mon, Feb 25, 2008 at 5:57 PM, in message
<[EMAIL PROTECTED]>, Sven-Thorsten Dietrich
<[EMAIL PROTECTED]> wrote:
>
> But Greg may need to enforce it on his git tree that he mails these from
> - are you referring to anything specific in this patch?
>
That's what I don't get. I *did* checkp
>>> On Mon, Feb 25, 2008 at 5:09 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> From: Peter W. Morreale <[EMAIL PROTECTED]>
>>
>> This patch adds the adaptive spin lock busywait to rtmutexes. It adds
>> a new tunable: rtmutex_timeout, which is the compani
>>> On Mon, Feb 25, 2008 at 5:03 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> +/*
>> +/*
>> + * Adaptive-rtlocks will busywait when possible, and sleep only if
>> + * necessary. Note that the busyloop looks racy, and it is, but we do
>> + * not care. If we lo
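The comment quoted above describes the core of adaptive rtlocks: spin while there is a realistic chance of getting the lock quickly, and fall back to a real sleep otherwise. A minimal userspace sketch of that shape, with hypothetical names (the actual code in kernel/rtmutex.c additionally stops spinning once the lock owner is no longer running on a CPU):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical sketch of an adaptive busywait: spin for up to
 * 'timeout' iterations hoping the current owner releases the lock
 * soon, and tell the caller to sleep if it does not. */
static atomic_int lock_owner = 0;      /* 0 == unlocked */

bool adaptive_trylock(int me, long timeout)
{
    for (long i = 0; i < timeout; i++) {
        int expected = 0;
        /* weak CAS may fail spuriously; the loop simply retries */
        if (atomic_compare_exchange_weak(&lock_owner, &expected, me))
            return true;               /* got the lock while spinning */
    }
    return false;                      /* timed out: caller should sleep */
}

void adaptive_unlock(void)
{
    atomic_store(&lock_owner, 0);
}
```

The timeout plays the role of the rtlock_timeout/rtmutex_timeout tunables discussed in this thread.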
>>> On Mon, Feb 25, 2008 at 4:54 PM, in message
<[EMAIL PROTECTED]>, Pavel Machek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>> @@ -720,7 +728,8 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
>> * saved_state accordingly. If we did not get a real wakeup
>> * then we return with the saved st
From: Peter W. Morreale <[EMAIL PROTECTED]>
Remove the redundant attempt to get the lock. While it is true that the
exit path with this patch adds an unnecessary xchg (in the event the
lock is granted without further traversal in the loop), experimentation
shows that we almost never encounter thi
From: Peter W. Morreale <[EMAIL PROTECTED]>
This patch adds the adaptive spin lock busywait to rtmutexes. It adds
a new tunable: rtmutex_timeout, which is the companion to the
rtlock_timeout tunable.
Signed-off-by: Peter W. Morreale <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt| 37 +
From: Peter W. Morreale <[EMAIL PROTECTED]>
In wakeup_next_waiter(), we take the pi_lock, and then find out whether
we have another waiter to add to the pending owner. We can reduce
contention on the pi_lock for the pending owner if we first obtain the
pointer to the next waiter outside of the pi_
sleep when necessary (to avoid deadlock, etc).
This significantly improves many areas of the performance of the -rt
kernel.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED
From: Sven Dietrich <[EMAIL PROTECTED]>
Signed-off-by: Sven Dietrich <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt| 11 +++
kernel/rtmutex.c |4
kernel/rtmutex_adaptive.h | 11 +--
kernel/sysctl.c | 12
4 files changed, 36 inser
or without the adaptive features that are added later in the series.
We add it here as a separate patch for greater review clarity on smaller
changes.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 20 +++-
1 files changed, 15 insertions(+), 5
It is redundant to wake the grantee task if it is already running.
Credit goes to Peter for the general idea.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Peter Morreale <[EMAIL PROTECTED]>
---
kernel/rtmutex.c | 45 --
From: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
Add /proc/sys/kernel/lateral_steal, to allow switching on and off
equal-priority mutex stealing between threads.
Signed-off-by: Sven-Thorsten Dietrich <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |7 ++-
kernel/sysctl.c | 14 ++
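The lateral-steal toggle above controls one policy decision: whether an equal-priority task may steal a lock from the pending owner. A hypothetical userspace sketch of that check (names invented; in the kernel the switch is the /proc/sys/kernel/lateral_steal sysctl, and lower numbers mean higher priority here):

```c
/* Sketch of the "lateral steal" policy: a contender may steal the
 * lock from the pending owner if it has strictly higher priority,
 * or equal priority when lateral stealing is enabled. */
static int lateral_steal_enabled = 1;

int can_steal(int thief_prio, int owner_prio)
{
    if (thief_prio < owner_prio)       /* strictly higher priority */
        return 1;
    if (thief_prio == owner_prio && lateral_steal_enabled)
        return 1;                      /* lateral (equal-prio) steal */
    return 0;
}
```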
. tasks that the
scheduler picked to run first have a logically higher priority among tasks
of the same prio). This helps to keep the system "primed" with tasks doing
useful work, and the end result is higher throughput.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]&
You can download this series here:
ftp://ftp.novell.com/dev/ghaskins/adaptive-locks-v2.tar.bz2
Changes since v1:
*) Rebased from 24-rt1 to 24.2-rt2
*) Dropped controversial (and likely unnecessary) printk patch
*) Dropped (internally) controversial PREEMPT_SPINLOCK_WAITERS config options
*) Incor
Bill Huey (hui) wrote:
The might_sleep is an annotation as well as a conditional preemption
point for the regular kernel. You might want to do a schedule check
there, but it's the wrong function if memory serves me correctly. It's
reserved for things that actually are designed to sleep.
Note that
Pavel Machek wrote:
Hi!
Decorate the printk path with an "unlikely()"
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/rtmutex.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/rtmutex.c b/kernel/rtmutex.c
index 122f143
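The unlikely() annotation this patch sprinkles on the printk path expands to GCC's __builtin_expect(); it only steers branch layout toward the common case and never changes results. A small self-contained illustration (hypothetical function, not taken from the patch):

```c
/* likely()/unlikely() compile down to __builtin_expect(); they are
 * pure optimization hints for branch placement. */
#define unlikely(x) __builtin_expect(!!(x), 0)

/* Divide a by b; the error path is annotated as the cold branch. */
int checked_div(int a, int b, int *out)
{
    if (unlikely(b == 0))
        return -1;          /* rare error path, kept off the hot path */
    *out = a / b;
    return 0;
}
```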
Paul E. McKenney wrote:
Governing the timeout by context-switch overhead sounds even better to me.
Really easy to calibrate, and short critical sections are of much shorter
duration than a context-switch pair.
Yeah, fully agree. This is on my research "todo" list. My theory is
that the u
Gregory Haskins wrote:
@@ -732,14 +741,15 @@ rt_spin_lock_slowlock(struct rt_mutex *lock)
debug_rt_mutex_print_deadlock(&waiter);
- schedule_rt_mutex(lock);
+ update_current(TASK_UNINTERRUPTIBLE, &saved_state);
I have a question for everyone out there ab
>>> On Thu, Feb 21, 2008 at 4:42 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Bill Huey (hui) <[EMAIL PROTECTED]> wrote:
>
>> I came to the original conclusion that it wasn't originally worth it,
>> but the dbench number published say otherwise. [...]
>
> dbe
>>> On Thu, Feb 21, 2008 at 4:24 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> hm. Why is the ticket spinlock patch included in this patchset? It just
> skews your performance results unnecessarily. Ticket spinlocks are
> independent conceptually, they are alread
>>> On Thu, Feb 21, 2008 at 11:41 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
>> +config RTLOCK_DELAY
>> +int "Default delay (in loops) for adaptive rtlocks"
>> +range 0 10
>> +depends on ADAPTIVE_RTLOCK
>
> I must say I'm not a big fan of puttin
>>> On Thu, Feb 21, 2008 at 11:36 AM, in message <[EMAIL PROTECTED]>,
Andi Kleen <[EMAIL PROTECTED]> wrote:
> On Thursday 21 February 2008 16:27:22 Gregory Haskins wrote:
>
>> @@ -660,12 +660,12 @@ rt_spin_lock_fastlock(struct rt_mutex *lock,
>>
>>> On Thu, Feb 21, 2008 at 10:26 AM, in message
<[EMAIL PROTECTED]>, Gregory Haskins
<[EMAIL PROTECTED]> wrote:
> We have put together some data from different types of benchmarks for
> this patch series, which you can find here:
>
> ftp://ftp.novell.com/dev
From: Nick Piggin <[EMAIL PROTECTED]>
Introduce ticket lock spinlocks for x86 which are FIFO. The implementation
is described in the comments. The straight-line lock/unlock instruction
sequence is slightly slower than the dec based locks on modern x86 CPUs,
however the difference is quite small on
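The FIFO property of ticket locks comes from handing out a ticket on lock and serving tickets strictly in arrival order on unlock. A minimal single-threaded C11 sketch of the idea (not the x86 assembly implementation from the patch):

```c
#include <stdatomic.h>

/* Minimal sketch of a FIFO ticket lock: each locker takes the next
 * ticket ("head"), then waits until "serving" reaches it.  Unlock
 * advances "serving", handing the lock to the next ticket holder. */
struct ticket_lock {
    atomic_uint head;     /* next ticket to hand out */
    atomic_uint serving;  /* ticket currently allowed in */
};

void ticket_lock(struct ticket_lock *l)
{
    unsigned me = atomic_fetch_add(&l->head, 1);
    while (atomic_load(&l->serving) != me)
        ;                 /* spin until it is our turn */
}

void ticket_unlock(struct ticket_lock *l)
{
    atomic_fetch_add(&l->serving, 1);
}
```

Because tickets are granted in order, a late arrival can never overtake an earlier spinner, unlike the dec-based locks mentioned above.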
Preemptible spinlock waiting effectively bypasses the benefits of a FIFO
spinlock. Since we now have FIFO spinlocks for x86 enabled, disable the
preemption feature on x86.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Nick Piggin <[EMAIL PROTECTED]>
---
arch/x86/Kconfig
We introduce a configuration variable for the feature to make it easier for
various architectures and/or configs to enable or disable it based on their
requirements.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/Kconfig.preempt |9 +
kernel/spinlock.c
The Real Time patches to the Linux kernel convert the architecture-specific
SMP synchronization primitives commonly referred to as
"spinlocks" to an "RT mutex" implementation that supports a priority
inheritance protocol and priority-ordered wait queues. The RT mutex
implementation allows tasks t
The logic is currently broken so that PREEMPT_RT disables preemptible
spinlock waiters, which is counterintuitive.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/spinlock.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/spinlock.c b/
Peter Zijlstra wrote:
On Fri, 2008-02-15 at 11:46 -0500, Gregory Haskins wrote:
but perhaps you can convince me that it is not needed?
(i.e. I am still not understanding how the timer guarantees the stability).
ok, let me try again.
So we take rq->lock, at this point we know rd
lstra <[EMAIL PROTECTED]>
CC: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c | 106
kernel/sched_fair.c |2
2 files changed, 59 insertions(+), 49 deletions(-)
Index: linux-2
>>> On Thu, Feb 14, 2008 at 1:15 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Peter wrote of:
>> the lack of rd->load_balance.
>
> Could you explain to me a bit what that means?
>
> Does this mean that the existing code would, by default (default being
> a singl
>>> On Thu, Feb 14, 2008 at 10:57 AM, in message
<[EMAIL PROTECTED]>, Peter Zijlstra <[EMAIL PROTECTED]>
wrote:
> Hi,
>
> Here the current patches that rework load_balance_monitor.
>
> The main reason for doing this is to eliminate the wakeups the thing
> generates,
> esp. on an idle system. Th
>>> On Tue, Feb 12, 2008 at 2:22 PM, in message
<[EMAIL PROTECTED]>, Steven Rostedt
<[EMAIL PROTECTED]> wrote:
> On Tue, 12 Feb 2008, Gregory Haskins wrote:
>
>> This patch adds a new critical-section primitive pair:
>>
>> "migration_disable()
ll be used later in the series.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
include/linux/init_task.h |1 +
include/linux/sched.h |8 +
kernel/fork.c |1 +
kernel/sched.c| 70 -
kernel/sche
Hi Ingo, Steven,
I had been working on some ideas related to saving context switches in the
bottom-half mechanisms on -rt. So far, the ideas have been a flop, but a few
peripheral technologies did come out of it. This series is one such
idea that I thought might have some merit on its own. The
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/kthread.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/kthread.c b/kernel/kthread.c
index dcfe724..b193b47 100644
--- a/kernel/kthread.c
+++ b/kernel/kthread.c
@@ -170,6 +170,7 @@ void kthrea
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Can you send along your cpuinfo?
It happened on more than one machine, one cpui
Pavel Machek wrote:
Hi!
Are there any recent changes in cpu hotplug? I have suspend (random)
problems, nosmp seems to fix it, and last messages in the "it hangs"
case are from cpu hotplug...
Pavel
Hi Pavel,
Can you send
>>> On Tue, Feb 5, 2008 at 4:58 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Tue, Feb 05, 2008 at 11:25:18AM -0700, Gregory Haskins wrote:
>> @@ -6241,7 +6242,7 @@ static void rq_attach_root(struct rq
; protection of the run queue spinlock .. So you could just move the kfree
> down below the spin_unlock_irqrestore() ..
Here is a new version to address your observation:
---
we cannot kfree while in_atomic()
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
diff --git
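The fix being discussed is the classic "detach under the lock, free after the unlock" pattern, since kfree() may not be called from atomic context such as under the runqueue spinlock. A hypothetical userspace sketch of the same shape (a spinning atomic flag stands in for the runqueue lock; names invented):

```c
#include <stdatomic.h>
#include <stdlib.h>

static atomic_flag box_lock = ATOMIC_FLAG_INIT;
static void *payload;                /* object protected by box_lock */

/* Detach the object while holding the lock, but free it only after
 * the lock is dropped -- the userspace analog of moving kfree()
 * below spin_unlock_irqrestore(). */
void box_clear(void)
{
    void *old;

    while (atomic_flag_test_and_set(&box_lock))
        ;                            /* acquire */
    old = payload;                   /* detach under the lock */
    payload = NULL;
    atomic_flag_clear(&box_lock);    /* release */

    free(old);                       /* safe: lock no longer held */
}
```

The RCU variant discussed later in the thread defers the free further still, via call_rcu(), so readers traversing the old pointer remain safe.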
>>> On Tue, Feb 5, 2008 at 11:59 AM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 10:02:12PM -0700, Gregory Haskins wrote:
>> >>> On Mon, Feb 4, 2008 at 9:51 PM, in message
>> <[EMAIL PROTECT
>>> On Mon, Feb 4, 2008 at 9:51 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 03:35:13PM -0800, Max Krasnyanskiy wrote:
[snip]
>>
>> Also the first thing I tried was to bring CPU1 off-line. Thats the fastest
>> way to get irqs, soft-irqs
:1 [0001], irqs_disabled():1
Hi Daniel,
Can you try this patch and let me know if it fixes your problem?
---
use rcu for root-domain kfree
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
diff --git a/kernel/sched.c b/kernel/sched.c
index e6ad493..77e86c1 100644
Hi Daniel,
See inline...
>>> On Mon, Feb 4, 2008 at 9:51 PM, in message
<[EMAIL PROTECTED]>, Daniel Walker
<[EMAIL PROTECTED]> wrote:
> On Mon, Feb 04, 2008 at 03:35:13PM -0800, Max Krasnyanskiy wrote:
>> This is just an FYI. As part of the "Isolated CPU extensions" thread Daniel
> suggest f
>>> On Tue, Jan 29, 2008 at 4:02 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> > ... (1) turning off
>> > sched_load_balance in any overlapping cpusets, including all
>> > encompassing parent cpusets, (2) leaving sched_load_balance on in the
>> >
>>> On Tue, Jan 29, 2008 at 3:56 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> By moving it into the root_domain structure, there is now an instance
>> per (um, for lack of a better, more up to date word) "exclusive"
>> cpuset. That way, dispara
>>> On Tue, Jan 29, 2008 at 2:04 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> IMHO it works well the way it is: The user selects the class for a
>> particular task using sched_setscheduler(), and they select the cpuset
>> (or inherit it) that de
>>> On Tue, Jan 29, 2008 at 2:37 PM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> > 1) What are 'per-domain' variables?
>>
>> s/per-domain/per-root-domain
>
> Oh dear - now I've got more questions, not fewer.
>
> 1) "variables" ... what variable
>>> On Tue, Jan 29, 2008 at 11:51 AM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> This is correct. We have the balance policy polymorphically associated
>> with each sched_class, and the CFS load-balancer and RT "load" (really,
>> priority) balancer
>>> On Tue, Jan 29, 2008 at 11:28 AM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Gregory wrote:
>> I am a bit confused as to why you disable load-balancing in the
>> RT cpuset? It shouldn't be strictly necessary in order for the
>> RT scheduler to do its job (
>>> On Tue, Jan 29, 2008 at 7:12 AM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Peter, replying to Paul:
>> > 3) you turn off sched_load_balance in that realtime cpuset.
>>
>> Ah, I don't think 3 is needed. Quite to the contrary, there is quite a
>> large body of
>>> On Tue, Jan 29, 2008 at 6:30 AM, in message
<[EMAIL PROTECTED]>, Paul Jackson <[EMAIL PROTECTED]> wrote:
> Peter wrote, in reply to Peter ;):
>> > [ It looks to me it balances a group over the largest SD the current cpu
>> > has access to, even though that might be larger than the SD associ
>>> On Tue, Jan 29, 2008 at 6:50 AM, in message
<[EMAIL PROTECTED]>, Peter Zijlstra <[EMAIL PROTECTED]>
wrote:
> On Tue, 2008-01-29 at 05:30 -0600, Paul Jackson wrote:
>> Peter wrote, in reply to Peter ;):
>> > > [ It looks to me it balances a group over the largest SD the current cpu
>> > > h
Mark Hansen wrote:
Hello,
Firstly, may I apologise as I am not a member of the LKML, and ask that
I be CC'd in any responses that may be forthcoming.
My question concerns the following patch which was incorporated into the
2.6.22 kernel (quoted from that change log):
Today, all threads wai
Gregory Haskins wrote:
(*) I have no information on whether the futex-plist implementation was
pulled from the tree to cause your regression. It is possible that the
changes between 22 and 23 are just tickling your environment enough to
bring out this RT-preempt issue.
Hmm...seems I
[EMAIL PROTECTED] wrote:
Hello,
I have some strange behavior in one of my systems.
I have a real-time kernel thread under SCHED_FIFO which is running every
10ms.
It is blocking on a semaphore and released by a timer interrupt every 10ms.
Generally this works really well.
However, there is a mod
>>> On Tue, Jan 15, 2008 at 4:28 AM, in message
<[EMAIL PROTECTED]>, Mike Galbraith <[EMAIL PROTECTED]>
wrote:
> debug resume trace
>
> static inline int pick_optimal_cpu(int this_cpu, cpumask_t *mask)
> {
> int first;
>
> /* "this_cpu" is cheaper to preempt than a remote processor
>>> On Mon, Jan 14, 2008 at 3:27 AM, in message
<[EMAIL PROTECTED]>, Mike Galbraith <[EMAIL PROTECTED]>
wrote:
> On Sun, 2008-01-13 at 15:54 -0500, Steven Rostedt wrote:
>
>> OK, -rt2 will take a bit more beating from me before I release it, so it
>> might take some time to get it out (expect i
.
Regards,
-Greg
-
The baseline code statically builds the span maps when the domain is formed.
Previous attempts at dynamically updating the maps caused a suspend-to-ram
regression, which should now be fixed.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: G
>> The problem has been identified and a fix patch was provided.
>>
>
> Here we go...
>
> From: Andrew Morton <[EMAIL PROTECTED]>
>
> Revert from git-sched:
>
> commit 9e76ad89f4fa93a789326bc0f4548cd2fbca8d8e
> Author: Gregory Haskin
>>> On Thu, Dec 13, 2007 at 7:06 PM, in message
<[EMAIL PROTECTED]>, Steven Rostedt
<[EMAIL PROTECTED]> wrote:
>
> This is from Gregory Haskins' patch. He forgot to compile check for
> warnings on UP again ;-)
Doh!
>
> Greg,
>
> Can you mer
>>> On Sun, Dec 9, 2007 at 9:53 PM, in message
<[EMAIL PROTECTED]>, Gregory Haskins
<[EMAIL PROTECTED]> wrote:
> + * I have no doubt that this is the proper thing to do to make
> + * sure RT tasks are properly balanced. What I cannot wrap
This patch should button up those conditions.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Dmitry Adamushko <[EMAIL PROTECTED]>
---
kernel/sched.c|8
kernel/sched_rt.c | 46 +-
2 files changed, 53 insertions(+),
Hi Dmitry,
>>> On Sun, Dec 9, 2007 at 12:16 PM, in message
<[EMAIL PROTECTED]>, "Dmitry
Adamushko" <[EMAIL PROTECTED]> wrote:
> [ cc'ed lkml ]
>
> I guess, one possible load-balancing point is out of consideration --
> sched_setscheduler()
> (also rt_mutex_setprio()).
>
> (1) NORMAL --> RT, wh
We had support for overlapping cpuset based rto logic in early prototypes that
is no longer used, so clean it up.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_rt.c | 32
1 files changed, 0 insertions(+), 32 deletions(-)
diff -
Hi Ingo,
Here are a few more small patches for consideration in sched-devel.
The second patch should be Ack'd by Steven before accepting to make sure I
didn't misunderstand here...but I believe that logic is now defunct since he
moved away from the overlapped cpuset work some time ago.
Regards
getting out of sync.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_rt.c |8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 4cbde83..53cd9e8 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sche
hat RQ has left the domain.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 05a9a81..02f04bc 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5843,6 +5843,
spans if that RQ has left the domain.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 05a9a81..33f8b0c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -
>>> On Wed, Dec 5, 2007 at 6:44 AM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>
>> However, that said, Steven's testing work on the mainline port of our
>> series sums
>>> On Wed, Dec 5, 2007 at 4:34 AM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>
>> The current code use a linear algorithm which causes scaling issues on
>> larger SMP
The current code uses a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Christoph Lameter <[EMAIL
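The 2-dimensional bitmap replaces a per-CPU linear scan with a scan over priority levels, each level holding a bitmap of the CPUs currently running at that priority. A much-simplified, hypothetical sketch of the lookup (the real code is the cpupri structure; the level count, CPU count, and helper names here are invented):

```c
/* One bitmap row per priority level; one bit per CPU.  Finding a CPU
 * running at the lowest priority is a scan over levels followed by a
 * single bit scan -- O(levels), independent of the CPU count. */
#define NR_PRIO 4
#define NR_CPU  8

static unsigned cpus_at_prio[NR_PRIO];   /* row index 0 == lowest prio */

void set_cpu_prio(int cpu, int prio, int old_prio)
{
    if (old_prio >= 0)
        cpus_at_prio[old_prio] &= ~(1u << cpu);
    cpus_at_prio[prio] |= 1u << cpu;
}

int find_lowest_prio_cpu(void)
{
    for (int p = 0; p < NR_PRIO; p++)    /* lowest priority first */
        if (cpus_at_prio[p])
            return __builtin_ctz(cpus_at_prio[p]); /* first set bit */
    return -1;                           /* no CPU registered */
}
```

A wakeup then only needs to push the task toward whichever CPU the lookup returns, rather than comparing priorities across every runqueue.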
->cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC:
This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
CC: Christoph Lameter <[EMAIL PROTECTED]>
CC: Paul Jackson <[EMAIL PROTECTED]>
CC: Simon Derr <[EMAIL PROTECTED]>
---
include/linu
>>> On Tue, Dec 4, 2007 at 4:27 PM, in message <[EMAIL PROTECTED]>,
Ingo Molnar <[EMAIL PROTECTED]> wrote:
> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>
>> Ingo,
>>
>> This series applies on GIT commit
>> 2254c2e0184c603f92fc9b8
Ingo Molnar wrote:
> * Gregory Haskins <[EMAIL PROTECTED]> wrote:
>
>> Ingo,
>>
>> This series applies on GIT commit
>> 2254c2e0184c603f92fc9b81016ff4bb53da622d (2.6.24-rc4 (ish) git HEAD)
>
> please post patches against sched-devel.git - it has part of
From: Steven Rostedt <[EMAIL PROTECTED]>
Run the RT balancing code on wake up to an RT task.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
d
eue
is cleared.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_rt.c | 49 -
1 files changed, 36 insertions(+), 13 deletions(-)
diff --git a/kernel/sched_rt.c b/kerne
We can cheaply track the number of bits set in the cpumask for the lowest
priority CPUs. Therefore, compute the mask's weight and use it to skip
the optimal domain search logic when there is only one CPU available.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_
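The shortcut described above works because a mask of weight 1 already identifies the answer, so the optimal-domain search can be skipped entirely. A hypothetical sketch using a plain unsigned int as the cpumask (names invented):

```c
/* Pick a CPU from 'mask'.  If exactly one bit is set, return it
 * directly and report that the full search was skipped; otherwise
 * fall through to the (elided) domain search, represented here by
 * simply taking the first set bit. */
int pick_cpu(unsigned mask, int *searched)
{
    *searched = 0;
    if (mask == 0)
        return -1;                    /* no CPU available */
    if ((mask & (mask - 1)) == 0)     /* weight == 1: shortcut */
        return __builtin_ctz(mask);
    *searched = 1;                    /* full search would run here */
    return __builtin_ctz(mask);
}
```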
have a hot cache to wake up to. So pushing off a lower
RT task is just killing its cache for no good reason.
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
---
kernel/sched_rt.c | 20
1 files changed, 1
We don't need to bother searching if the task cannot be migrated
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
---
kernel/sched_rt.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sc
We have logic to detect whether the system has migratable tasks, but we are
not using it when deciding whether to push tasks away. So we add support
for considering this new information.
Signed-off-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Steven Rostedt <[EMAIL
-by: Gregory Haskins <[EMAIL PROTECTED]>
Signed-off-by: Steven Rostedt <[EMAIL PROTECTED]>
---
kernel/sched.c|1 +
kernel/sched_rt.c | 100 +++--
2 files changed, 89 insertions(+), 12 deletions(-)
diff --git a/kernel/sche