On Mon, Jan 14, 2008 at 3:27 AM, in message
[EMAIL PROTECTED], Mike Galbraith [EMAIL PROTECTED]
wrote:
On Sun, 2008-01-13 at 15:54 -0500, Steven Rostedt wrote:
OK, -rt2 will take a bit more beating from me before I release it, so it
might take some time to get it out (expect it out on
Avi Kivity wrote:
Gregory Haskins wrote:
PCI means that you can reuse all of the platform's infrastructure for
irq allocation, discovery, device hotplug, and management.
It's tempting to use, yes. However, most of that infrastructure is
completely inappropriate for a PV implementation
From: Gregory Haskins [EMAIL PROTECTED]
The current wake-up code path tries to determine if it can optimize the
wake-up to this_cpu by performing load calculations. The problem is that
these calculations are only relevant to CFS tasks where load is king. For RT
tasks, priority is king. So
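The load-versus-priority distinction above can be sketched outside the kernel. Below is a toy model, not kernel code: all struct and function names are hypothetical illustrations, not actual sched.c symbols. As in the kernel, lower priority values mean higher priority.

```c
/* Toy model: contrast a CFS-style, load-based wakeup target choice
 * with an RT-style, priority-based one. */
struct cpu_state {
    int load;          /* CFS view: aggregate runqueue load */
    int highest_prio;  /* RT view: best priority queued on this CPU */
};

/* CFS-style heuristic: wake on this_cpu only if it is less loaded.
 * Returns 0 for this_cpu, 1 for the task's previous CPU. */
int cfs_pick(const struct cpu_state *this_cpu, const struct cpu_state *task_cpu)
{
    return this_cpu->load < task_cpu->load ? 0 : 1;
}

/* RT-style heuristic: load is irrelevant; only relative priority matters.
 * Returns 0 or 1 as above, or -1 when the waking task is outranked on
 * both CPUs and a wider search is needed. */
int rt_pick(const struct cpu_state *this_cpu, const struct cpu_state *task_cpu,
            int task_prio)
{
    if (task_prio < this_cpu->highest_prio)
        return 0;
    if (task_prio < task_cpu->highest_prio)
        return 1;
    return -1;
}
```

Note how a heavily loaded CPU can still be the right RT target when the waking task outranks everything queued there, which is exactly where the load calculation gives the wrong answer.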
On Sat, Nov 17, 2007 at 1:21 AM, in message
[EMAIL PROTECTED], Steven Rostedt [EMAIL PROTECTED]
wrote:
Gregory Haskins RT balancing broke sched domains.
Doh! (though you mean s/domains/stats ;)
This is a fix to allow it to still work.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED
On Sat, Nov 17, 2007 at 1:33 AM, in message
[EMAIL PROTECTED], Steven Rostedt [EMAIL PROTECTED]
wrote:
Sorry! I forgot to put in a prologue for this patch.
Here it is.
This patch changes the search for a runqueue by a waking RT task
to try to pick another runqueue if the
On Sat, Nov 17, 2007 at 1:33 AM, in message
[EMAIL PROTECTED], Steven Rostedt [EMAIL PROTECTED]
wrote:
- if ((p->prio >= rq->rt.highest_prio) &&
-     (p->nr_cpus_allowed > 1)) {
+ if (unlikely(rt_task(rq->curr))) {
int cpu = find_lowest_rq(p);
return
(This applies to the end of Steven's v3 series
http://lkml.org/lkml/2007/11/17/10)
--
We don't need to bother searching if the task cannot be migrated
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions
update the root-domain span/online maps
The baseline code statically builds the span maps when the domain is formed.
Previous attempts at dynamically updating the maps caused a suspend-to-ram
regression, which should now be fixed.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Gautham R
.
Regards,
-Greg
-
The baseline code statically builds the span maps when the domain is formed.
Previous attempts at dynamically updating the maps caused a suspend-to-ram
regression, which should now be fixed.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Gautham R
-cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph
The current code uses a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED
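The 2-dimensional bitmap idea can be sketched as follows: one CPU mask per priority level, plus a bitmap recording which levels are occupied, so finding the lowest-priority CPUs takes two bit searches rather than a scan over every CPU. Sizes and names below are toy stand-ins for the kernel's structures, and the bit helpers are GCC/Clang builtins.

```c
#include <stdint.h>

#define NR_PRIOS 8   /* toy priority range; lower index = lower priority */

struct cpu_prio_map {
    uint32_t prio_present;        /* bit p set iff some CPU sits at level p */
    uint32_t cpus_at[NR_PRIOS];   /* per-level CPU mask */
    int      cpu_prio[32];        /* current level of each CPU (init: 0) */
};

void set_cpu_prio(struct cpu_prio_map *m, int cpu, int prio)
{
    int old = m->cpu_prio[cpu];

    m->cpus_at[old] &= ~(1u << cpu);
    if (!m->cpus_at[old])
        m->prio_present &= ~(1u << old);   /* level emptied */
    m->cpus_at[prio] |= 1u << cpu;
    m->prio_present |= 1u << prio;
    m->cpu_prio[cpu] = prio;
}

/* First bit search finds the lowest occupied level; the second search
 * (left to the caller) picks any CPU out of the returned mask. */
uint32_t lowest_prio_cpus(const struct cpu_prio_map *m)
{
    if (!m->prio_present)
        return 0;  /* no CPUs tracked yet */
    return m->cpus_at[__builtin_ctz(m->prio_present)];
}
```

The update path is O(1) per priority change, which is why the wake-up path no longer pays for the number of online CPUs.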
there may be other regressions as well. We make it easier on people
to select which method they want by making the algorithm a config option,
with the default being the current behavior.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/Kconfig.preempt | 31
. This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED]
CC: Paul Jackson [EMAIL PROTECTED]
CC: Simon Derr [EMAIL PROTECTED]
---
include/linux/sched.h |3 +
kernel/sched.c| 121
These patches apply to the end of the rt-balance-patches v6 announced here:
http://lkml.org/lkml/2007/11/20/613
These replace the v6a patches announced here:
http://lkml.org/lkml/2007/11/21/226
Changes since v6a:
*) made features tunable via config options
*) fixed a bug related to setting a
/kernel/projects/rt/
or in prebuilt form here from opensuse-factory:
http://download.opensuse.org/distribution/SL-OSS-factory/inst-source/suse/x86_64/kernel-rt-2.6.24_rc3_git1-3.x86_64.rpm
Please consider for inclusion in the next convenient merge window.
Regards,
-Steven Rostedt, and Gregory
From: Steven Rostedt [EMAIL PROTECTED]
This patch adds accounting to keep track of the number of RT tasks running
on a runqueue. This information will be used in later patches.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c
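The accounting described above can be sketched minimally: bump a per-runqueue counter on enqueue, drop it on dequeue. The real patch hooks this into the enqueue/dequeue paths of sched_rt.c; the struct and function names here are simplified stand-ins.

```c
struct rt_rq {
    unsigned long rt_nr_running;  /* number of RT tasks currently queued */
};

void inc_rt_tasks(struct rt_rq *rq) { rq->rt_nr_running++; }
void dec_rt_tasks(struct rt_rq *rq) { rq->rt_nr_running--; }

/* One later use of the count: with more than one RT task queued, at
 * least one is waiting, so the runqueue becomes a candidate for pushing
 * tasks away to other CPUs. */
int rt_overloaded(const struct rt_rq *rq)
{
    return rq->rt_nr_running > 1;
}
```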
will be used for later patches.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|3 +++
kernel/sched_rt.c | 18 ++
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel
does not address this issue.
Note: checkpatch reveals two lines over 80 characters. I'm not sure
that breaking them up will help visually, so I left them as is.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|8
-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 36
1 files changed, 36 insertions(+), 0 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index b5ef4b8..b8c758a 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -3,6 +3,38
holds a RT task that is of higher
prio than the highest prio task on the target runqueue is found it is pulled
to the target runqueue.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|2 +
kernel/sched_rt.c | 187
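The pull operation described above can be sketched with toy runqueues: scan overloaded peers for a waiting RT task that outranks everything queued locally. Plain arrays stand in for runqueues, locking is omitted, and all names are illustrative rather than the kernel's.

```c
#define NR_RQS 4

struct toy_rq {
    int highest_prio;  /* best queued RT prio; lower value = higher prio */
    int pushable;      /* does this rq hold a second, waiting RT task? */
};

/* Returns the index of the best peer rq to pull from, or -1 if none:
 * a peer qualifies if it has a waiting RT task of higher priority
 * (numerically lower) than everything queued on this_cpu's rq. */
int pick_pull_source(const struct toy_rq rqs[NR_RQS], int this_cpu)
{
    int best = -1;

    for (int cpu = 0; cpu < NR_RQS; cpu++) {
        if (cpu == this_cpu || !rqs[cpu].pushable)
            continue;
        if (rqs[cpu].highest_prio < rqs[this_cpu].highest_prio &&
            (best < 0 || rqs[cpu].highest_prio < rqs[best].highest_prio))
            best = cpu;
    }
    return best;
}
```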
-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|3 +++
kernel/sched_rt.c | 10 ++
2 files changed, 13 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index a30147e..ebd114b 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -22,6 +22,8
[EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 95 ++---
1 files changed, 4 insertions(+), 91 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index a0b05ff..ea07ffa 100644
--- a/kernel
to scheduling frequency
in the fast path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
include/linux/init_task.h |1 +
include/linux/sched.h |2 ++
kernel/fork.c |1 +
kernel/sched.c|9 +++-
kernel
this_rq is normally used to denote the RQ on the current cpu
(i.e. cpu_rq(this_cpu)). So clean up the usage of this_rq to be
more consistent with the rest of the code.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched_rt.c | 22
don't want to
modify it)
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
include/linux/sched.h |1
kernel/sched.c | 167 ---
kernel/sched_fair.c | 148
Isolate the search logic into a function so that it can be used later
in places other than find_locked_lowest_rq().
(Checkpatch error is inherited from moved code)
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched_rt.c | 66
It doesn't hurt if we allow the current CPU to be included in the
search. We will just simply skip it later if the current CPU turns out
to be the lowest.
We will use this later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED
inaccuracies caused by a condition of priority mistargeting
introduced by the lightweight lookup. Most of the
time, the pre-routing should work and yield lower overhead. In the cases
where it doesn't, the post-router will bat cleanup.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off
-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched.c|1 +
kernel/sched_rt.c | 100 +++--
2 files changed, 89 insertions(+), 12 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
We have logic to detect whether the system has migratable tasks, but we are
not using it when deciding whether to push tasks away. So we add support
for considering this new information.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel
cache to wake up to. So pushing off a lower
RT task is just killing its cache for no good reason.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 20
1 files changed, 16 insertions(+), 4 deletions
We don't need to bother searching if the task cannot be migrated
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched_rt.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel
We can cheaply track the number of bits set in the cpumask for the lowest
priority CPUs. Therefore, compute the mask's weight and use it to skip
the optimal domain search logic when there is only one CPU available.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 25
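The weight short-circuit above can be sketched by modeling the lowest-priority cpumask as a plain word: when its popcount is 1, skip the domain-aware search entirely and return the single candidate. The function name is illustrative, and the bit helpers are GCC/Clang builtins.

```c
#include <stdint.h>

/* Returns the chosen CPU, or -1 when the mask is empty. */
int pick_lowest_rq(uint32_t lowest_mask)
{
    int weight = __builtin_popcount(lowest_mask);

    if (weight == 0)
        return -1;                          /* no candidates at all */
    if (weight == 1)
        return __builtin_ctz(lowest_mask);  /* only one choice: done */

    /* weight > 1: this is where the costlier domain-aware search would
     * run; as a stand-in we just take the lowest-numbered CPU. */
    return __builtin_ctz(lowest_mask);
}
```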
is cleared.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 49 -
1 files changed, 36 insertions(+), 13 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 0514b27
From: Steven Rostedt [EMAIL PROTECTED]
Run the RT balancing code on wake up to an RT task.
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c
. This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED]
CC: Paul Jackson [EMAIL PROTECTED]
CC: Simon Derr [EMAIL PROTECTED]
---
include/linux/sched.h |3 +
kernel/sched.c| 121
-cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph
The current code uses a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED
Ingo Molnar wrote:
* Gregory Haskins [EMAIL PROTECTED] wrote:
Ingo,
This series applies on GIT commit
2254c2e0184c603f92fc9b81016ff4bb53da622d (2.6.24-rc4 (ish) git HEAD)
please post patches against sched-devel.git - it has part of your
previous patches included already, plus some
The current code uses a linear algorithm which causes scaling issues
on larger SMP machines. This patch replaces that algorithm with a
2-dimensional bitmap to reduce latencies in the wake-up path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED
-cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph
On Tue, Dec 4, 2007 at 4:27 PM, in message [EMAIL PROTECTED],
Ingo Molnar [EMAIL PROTECTED] wrote:
* Gregory Haskins [EMAIL PROTECTED] wrote:
Ingo,
This series applies on GIT commit
2254c2e0184c603f92fc9b81016ff4bb53da622d (2.6.24-rc4 (ish) git HEAD)
please post patches against
. This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED]
CC: Paul Jackson [EMAIL PROTECTED]
CC: Simon Derr [EMAIL PROTECTED]
---
include/linux/sched.h |3 +
kernel/sched.c| 121
On Wed, Dec 5, 2007 at 6:44 AM, in message [EMAIL PROTECTED],
Ingo Molnar [EMAIL PROTECTED] wrote:
* Gregory Haskins [EMAIL PROTECTED] wrote:
However, that said, Steven's testing work on the mainline port of our
series sums it up very nicely, so I will present that in lieu of
digging
in the spans if that RQ has left the domain.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |4
1 files changed, 4 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 05a9a81..33f8b0c 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5845,6
the domain.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 05a9a81..02f04bc 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -5843,6 +5843,9 @@ static void
On Wed, Dec 5, 2007 at 4:34 AM, in message [EMAIL PROTECTED],
Ingo Molnar [EMAIL PROTECTED] wrote:
* Gregory Haskins [EMAIL PROTECTED] wrote:
The current code uses a linear algorithm which causes scaling issues on
larger SMP machines. This patch replaces that algorithm with a
2
Hi Ingo,
Here are a few more small patches for consideration in sched-devel.
The second patch should be Ack'd by Steven before accepting to make sure I
didn't misunderstand here...but I believe that logic is now defunct since he
moved away from the overlapped cpuset work some time ago.
getting out of sync.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c |8 +---
1 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 4cbde83..53cd9e8 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -34,9
We had support for overlapping cpuset based rto logic in early prototypes that
is no longer used, so clean it up.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 32
1 files changed, 0 insertions(+), 32 deletions(-)
diff --git
I spied a few more issues from http://lkml.org/lkml/2007/11/20/590.
Patch is below..
Regards,
-Greg
-
Include cpu 0 in the search, and eliminate the redundant cpu_set since
the bit should already be set in the mask.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel
These are patches that apply to the end of the v5 series announced here:
http://lkml.org/lkml/2007/11/20/558
Steven,
These are patches that I could not finish in time to get in with the v4
release.
Ingo,
If you accept the prior work submitted by Steven and myself, please also
Include cpu 0 in the search, and eliminate the redundant cpu_set since
the bit should already be set in the mask.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c |7 +++
1 files changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel
. This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED]
CC: Paul Jackson [EMAIL PROTECTED]
CC: Simon Derr [EMAIL PROTECTED]
---
include/linux/sched.h |3 ++
kernel/sched.c| 89
-cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph
raised by Steve Rostedt et al.
Therefore, I include this patch in the hopes that it is useful to
someone, but with the understanding that it is not likely to be accepted
without further demonstration of its benefits.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL
On Tue, Nov 20, 2007 at 11:26 PM, in message
[EMAIL PROTECTED], Steven Rostedt [EMAIL PROTECTED]
wrote:
On Tue, Nov 20, 2007 at 11:15:48PM -0500, Steven Rostedt wrote:
Gregory Haskins wrote:
I spied a few more issues from http://lkml.org/lkml/2007/11/20/590.
Patch is below..
Thanks, but I
These patches apply to the end of the rt-balance-patches v6 announced here:
http://lkml.org/lkml/2007/11/20/613
Changes since v1:
*) Rebased from v4 to v6
*) original patch #1 was folded into v6
*) added support for caching online cpus within the root-domain
Comments welcome!
Regards,
-Greg
-
. This logic doesn't have any clients
yet but it will later in the series.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL PROTECTED]
CC: Paul Jackson [EMAIL PROTECTED]
CC: Simon Derr [EMAIL PROTECTED]
---
include/linux/sched.h |3 ++
kernel/sched.c| 89
We cache the subset of rd->span cpus that are online in rd->online to
reduce the need for runtime computation.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 22 ++
1 files changed, 22 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel
-cpus_allowed will effectively reduce our search
to within our domain. However, I believe there are cases where the
cpus_allowed mask may be all ones and therefore we err on the side of
caution. If it can be optimized later, so be it.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph
raised by Steve Rostedt et al.
Therefore, I include this patch in the hopes that it is useful to
someone, but with the understanding that it is not likely to be accepted
without further demonstration of its benefits.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Christoph Lameter [EMAIL
Hi Dmitry,
On Sun, Dec 9, 2007 at 12:16 PM, in message
[EMAIL PROTECTED], Dmitry
Adamushko [EMAIL PROTECTED] wrote:
[ cc'ed lkml ]
I guess, one possible load-balancing point is out of consideration --
sched_setscheduler()
(also rt_mutex_setprio()).
(1) NORMAL -> RT, when p->se.on_rq ==
This patch should button up those conditions.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
CC: Dmitry Adamushko [EMAIL PROTECTED]
---
kernel/sched.c|8
kernel/sched_rt.c | 46 +-
2 files changed, 53 insertions(+), 1 deletions
On Sun, Dec 9, 2007 at 9:53 PM, in message
[EMAIL PROTECTED], Gregory Haskins
[EMAIL PROTECTED] wrote:
+ * I have no doubt that this is the proper thing to do to make
+ * sure RT tasks are properly balanced. What I cannot wrap my
+ * head around
On Thu, Dec 13, 2007 at 7:06 PM, in message
[EMAIL PROTECTED], Steven Rostedt
[EMAIL PROTECTED] wrote:
This is from Gregory Haskins' patch. He forgot to compile check for
warnings on UP again ;-)
Doh!
Greg,
Can you merge the first part into your patch and resend it to me.
Sure
: Gregory Haskins [EMAIL PROTECTED]
Date: Thu Dec 13 21:35:26 2007 +0100
sched: update root-domain spans upon departure
We shouldn't leave cpus enabled in the spans if that RQ has left the
domain.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Ingo
This is a mini-release of my series, rebased on -rt2. I have more changes
downstream which are not quite ready for primetime, but I need to work on some
other unrelated issues right now and I wanted to get what works out there.
Changes since v5
*) Rebased to rt2 - Many of the functions of the
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 10 ++
1 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 55da7d0..b59dc20 100644
--- a/kernel/sched_rt.c
+++ b/kernel/sched_rt.c
@@ -292,7 +292,6 @@ static
is probably relatively expensive, so it is only
done when the cpus_allowed mask is updated (which should be relatively
infrequent, especially compared to scheduling frequency) and cached in
the task_struct.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |2 +
kernel
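The caching scheme above can be sketched as follows: recompute the mask weight only when the affinity mask changes (rare, the slow path) and read the cached value on every wakeup (the hot path). The struct is a simplified stand-in, not the kernel's task_struct, and the popcount builtin is a GCC/Clang stand-in for cpus_weight().

```c
#include <stdint.h>

struct toy_task {
    uint32_t cpus_allowed;    /* affinity mask modeled as a plain word */
    int      nr_cpus_allowed; /* cached popcount of the mask */
};

/* Slow path: runs only when affinity actually changes. */
void toy_set_cpus_allowed(struct toy_task *p, uint32_t new_mask)
{
    p->cpus_allowed = new_mask;
    p->nr_cpus_allowed = __builtin_popcount(new_mask);
}

/* Hot path: a task pinned to one CPU can never be migrated, so the
 * push/pull logic skips it with a single integer compare. */
int task_is_migratable(const struct toy_task *p)
{
    return p->nr_cpus_allowed > 1;
}
```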
with affinity restrictions, the algorithm has a
worst case complexity of O(min(102, NR_CPUS)), though the scenario that
yields the worst case search is fairly contrived.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/Makefile |2
kernel/sched.c|4 +
kernel
Oh crap. I just realized this is an older version of the patch... must've
forgotten to refresh... grr. I'll send out the refreshed one.
But anyway, I digress.
On Thu, 2007-10-25 at 11:27 -0400, Steven Rostedt wrote:
The cpu_priority and the cp->lock will be absolutely horrible for
cacheline
a suitable CPU with O(1) complexity (e.g. two bit
searches). For tasks with affinity restrictions, the algorithm has a
worst case complexity of O(min(102, NR_CPUS)), though the scenario that
yields the worst case search is fairly contrived.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED
On Thu, 2007-10-25 at 13:36 -0400, Steven Rostedt wrote:
Yep, -rt2 (and -rt3) are both horrible too. That's why I'm working on a
sched-domain version now to handle that.
Excellent. I'm not 100% sure I've got the mingo lingo ;) down enough
to know if sched_domains are the best fit, but I
On Thu, 2007-10-25 at 11:48 -0400, Steven Rostedt wrote:
--
On Thu, 25 Oct 2007, Gregory Haskins wrote:
Some RT tasks (particularly kthreads) are bound to one specific CPU.
It is fairly common for one or more bound tasks to get queued up at the
same time. Consider, for instance
On Thu, 2007-10-25 at 15:52 -0400, Steven Rostedt wrote:
+ p->sched_class->set_cpus_allowed(p, new_mask);
+ else {
+     p->cpus_allowed = new_mask;
+     p->nr_cpus_allowed = cpus_weight(new_mask);
+ }
+
/* Can
Steven Rostedt [EMAIL PROTECTED] 10/25/07 8:03 PM
Why do you think moving the logic to pick_next_highest is a better
design? To be honest, I haven't really studied your new logic in
push_rt_tasks to understand why you might feel this way. If you can
make the case that it is better in the
compared to scheduling frequency) and cached in
the task_struct.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |2 ++
kernel/fork.c |1 +
kernel/sched.c|9 +++-
kernel/sched_rt.c | 58
On Fri, 2007-10-26 at 10:47 -0400, Steven Rostedt wrote:
--
On Fri, 26 Oct 2007, Gregory Haskins wrote:
This version has the feedback from Steve's review incorporated
-
RT: Cache cpus_allowed weight for optimizing migration
Some RT tasks
to scheduling frequency
in the fast path.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
include/linux/sched.h |2 ++
kernel/fork.c |1 +
kernel/sched.c|9 +++-
kernel/sched_rt.c | 58 +
4 files changed, 64
Please fold into original -rt2 patches as appropriate
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched_rt.c | 10 ++
1 files changed, 2 insertions(+), 8 deletions(-)
diff --git a/kernel/sched_rt.c b/kernel/sched_rt.c
index 55da7d0..b59dc20 100644
--- a/kernel
Peter Zijlstra wrote:
But please, people who want this (I'm sure some of you are reading) do
speak up. I'm just the motivated corporate drone implementing the
feature :-)
FWIW, I could have used a swap to network technology X like system at
my last job. We were building a large networking
Greg KH [EMAIL PROTECTED] 10/31/07 10:37 AM
It does not apply to 2.6.22 at all, so unless someone sends us a
backported version, I'll not apply it there.
I'll take care of this for you, Greg.
-Greg
-
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message
. ]
[ mingo: this does not impact the correctness of validation, but may slow
down future operations significantly, if the chain gets very long. ]
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
Signed-off-by: Peter Zijlstra [EMAIL PROTECTED]
Signed-off-by: Ingo Molnar [EMAIL PROTECTED
Applies to 23-rt1 + Steve's latest push_rt patch
Changes since v3:
1) Rebased to Steve's latest
2) Added a highest_prio feature to eliminate a race w.r.t. activating a task
and the time it takes to actually reschedule the RQ.
3) Dropped the PI patch, because the highest_prio patch obsoletes
From: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched.c| 141 ++---
kernel/sched_rt.c | 44 +
2 files changed, 178 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c
The system currently evaluates all online CPUs whenever one or more enters
an rt_overload condition. This suffers from scalability limitations as
the # of online CPUs increases. So we introduce a cpumask to track
exactly which CPUs need RT balancing.
Signed-off-by: Gregory Haskins [EMAIL
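The dedicated mask described above can be sketched with a plain word standing in for the kernel's cpumask_t; the balancer then walks only the set bits instead of every online CPU. Function names are illustrative.

```c
#include <stdint.h>

void rt_set_overload(uint32_t *mask, int cpu)   { *mask |=  (1u << cpu); }
void rt_clear_overload(uint32_t *mask, int cpu) { *mask &= ~(1u << cpu); }

/* Balancing pass cost: proportional to overloaded CPUs, not NR_CPUS. */
int count_overloaded(uint32_t mask)
{
    int n = 0;

    while (mask) {
        mask &= mask - 1;  /* clear lowest set bit */
        n++;
    }
    return n;
}
```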
A little cleanup to avoid #ifdef proliferation later in the series
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 16 +---
1 files changed, 13 insertions(+), 3 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 0da8c30..131f618 100644
We should init the base value of the current RQ priority to IDLE
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 131f618..d68f600 100644
--- a/kernel/sched.c
+++ b
This is an implementation of Steve's idea where we should update the RQ
concept of priority to show the highest-task, even if that task is not (yet)
running. This prevents us from pushing multiple tasks to the RQ before it
gets a chance to reschedule.
Signed-off-by: Gregory Haskins [EMAIL
Get rid of the superfluous dst_cpu, and move the cpu_mask inside the search
function.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 18 +++---
1 files changed, 7 insertions(+), 11 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 67034aa
1) One or more CPUs are in overload, AND
2) We are about to switch to a task that lowers our priority.
(3) will be addressed in a later patch.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c | 88 ++--
1 files changed, 41
From: Steven Rostedt [EMAIL PROTECTED]
Steve found these errors in the original patch
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|2 +-
kernel/sched_rt.c |2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
We can avoid dirtying an rq-related cacheline with a simple check, so why not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
0 files changed, 0 insertions(+), 0 deletions(-)
-
Oops... forgot to refresh this patch before mailing it. Here's the actual
patch.
We can avoid dirtying an rq-related cacheline with a simple check, so why not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c |3 ++-
1 files changed, 2 insertions(+), 1 deletions
On Fri, 2007-10-19 at 14:42 -0400, Steven Rostedt wrote:
plain text document attachment (add-rq-highest-prio.patch)
This patch adds accounting to each runqueue to keep track of the
highest prio task queued on the run queue. We only care about
RT tasks, so if the run queue does not contain any
On Sat, 2007-10-20 at 04:48 +0200, Roel Kluin wrote:
Gregory Haskins wrote:
We can avoid dirtying a rq related cacheline with a simple check, so why
not.
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
0 files changed, 0 insertions(+), 0 deletions(-)
I think you wanted
This is version 5 of the patch series against 23-rt1.
There have been numerous fixes/tweaks since v4, though we still are based on
the global rto_cpumask logic instead of Steve/Ingo's cpuset logic. Otherwise,
it's in pretty good shape.
Without the series applied, the following test will fail:
From: Steven Rostedt [EMAIL PROTECTED]
Signed-off-by: Steven Rostedt [EMAIL PROTECTED]
---
kernel/sched.c| 141 ++---
kernel/sched_rt.c | 44 +
2 files changed, 178 insertions(+), 7 deletions(-)
diff --git a/kernel/sched.c
We inadvertently added a redundant function, so clean it up
Signed-off-by: Gregory Haskins [EMAIL PROTECTED]
---
kernel/sched.c|9 +
kernel/sched_rt.c | 44
2 files changed, 5 insertions(+), 48 deletions(-)
diff --git a/kernel