Ingo Molnar wrote:
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
The same user nicing two different multi-threaded processes would
expect a predictable CPU distribution too. [...]
i disagree that the user 'would expect' this. Some users might. Others
would say: 'my 10-thread rendering engine
Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
Why is X special? Because it does work on behalf of other processes?
Lots of things do this. Perhaps a scheduler should focus entirely on
the implicit and directed wakeup matrix and optimizing that
instead[1].
I 100% agree - the
Matt Mackall wrote:
On Wed, Apr 18, 2007 at 08:37:11AM +0200, Nick Piggin wrote:
[2] It's trivial to construct two or more perfectly reasonable and
desirable definitions of fairness that are mutually incompatible.
Probably not if you use common sense, and in the context of a replacement
for
Hi Björn,
On Sat, Apr 21, 2007 at 01:29:41PM +0200, Björn Steinbrink wrote:
> Hi,
>
> On 2007.04.21 13:07:48 +0200, Willy Tarreau wrote:
> > > another thing i noticed: when using a -y larger than 1, then the window
> > > title (at least on Metacity) overlaps and thus the ocbench tasks have
> >
Hi,
On 2007.04.21 13:07:48 +0200, Willy Tarreau wrote:
> > another thing i noticed: when using a -y larger than 1, then the window
> > title (at least on Metacity) overlaps and thus the ocbench tasks have
> > different X overhead and get scheduled a bit asymmetrically as well. Is
> > there any
Hi Ingo,
I'm replying to your 3 mails at once.
On Sat, Apr 21, 2007 at 12:45:22PM +0200, Ingo Molnar wrote:
>
> * Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> > > It could become a useful scheduler benchmark !
> >
> > i just tried ocbench-0.3, and it is indeed very nice!
So as you've noticed
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > It could become a useful scheduler benchmark !
>
> i just tried ocbench-0.3, and it is indeed very nice!
another thing i noticed: when using a -y larger than 1, then the window
title (at least on Metacity) overlaps and thus the ocbench tasks have
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > The modified code is here :
> >
> > http://linux.1wt.eu/sched/orbitclock-0.2bench.tgz
> >
> > What is interesting to note is that it's easy to make X work a lot
> > (99%) by using 0 as the sleeping time, and it's easy to make the
> > process
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> I hacked it a bit to make it accept two parameters :
> -R : time spent burning CPU cycles at each round
> -S : time spent getting a rest
>
> It now advances what it thinks is a second at each iteration, so that
> it makes it easy to compare
* Bill Davidsen <[EMAIL PROTECTED]> wrote:
> All of my testing has been on desktop machines, although in most cases
> they were really loaded desktops which had load avg 10..100 from time
> to time, and none were low memory machines. Up to CFS v3 I thought
> nicksched was my winner, now CFSv3
On Fri, Apr 20, 2007 at 04:47:27PM -0400, Bill Davidsen wrote:
> Ingo Molnar wrote:
>
> >( Let's be cautious though: the jury is still out whether people actually
> > like this more than the current approach. While CFS feedback looks
> > promising after a whopping 3 days of it being released [
Ingo Molnar wrote:
( Let's be cautious though: the jury is still out whether people actually
like this more than the current approach. While CFS feedback looks
promising after a whopping 3 days of it being released [ ;-) ], the
test coverage of all 'fairness centric' schedulers, even
Mike Galbraith wrote:
On Tue, 2007-04-17 at 05:40 +0200, Nick Piggin wrote:
On Tue, Apr 17, 2007 at 04:29:01AM +0200, Mike Galbraith wrote:
Yup, and progress _is_ happening now, quite rapidly.
Progress as in progress on Ingo's scheduler. I still don't know how we'd
decide when to replace
William Lee Irwin III wrote:
William Lee Irwin III wrote:
I'd further recommend making priority levels accessible to kernel threads
that are not otherwise accessible to processes, both above and below
user-available priority levels. Basically, if you can get SCHED_RR and
SCHED_FIFO to coexist
On Thu, 2007-04-19 at 09:55 -0700, Davide Libenzi wrote:
> On Thu, 19 Apr 2007, Mike Galbraith wrote:
>
> > On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> > > * Mike Galbraith <[EMAIL PROTECTED]> wrote:
> > >
> > > > With a heavily reniced X (perfectly fine), that should indeed solve my
On Fri, Apr 20, 2007 at 02:52:38AM +0300, Jan Knutar wrote:
> On Thursday 19 April 2007 18:18, Ingo Molnar wrote:
> > * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > > You can certainly script it with -geometry. But it is the wrong
> > > application for this matter, because you benchmark X more
On Thursday 19 April 2007 18:18, Ingo Molnar wrote:
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
> > You can certainly script it with -geometry. But it is the wrong
> > application for this matter, because you benchmark X more than
> > glxgears itself. What would be better is something like a line
On Thu, Apr 19, 2007 at 05:18:03PM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>
> > You can certainly script it with -geometry. But it is the wrong
> > application for this matter, because you benchmark X more than
> > glxgears itself. What would be better is
In article <[EMAIL PROTECTED]> you wrote:
> Top (VCPU maybe?)
>User
>Process
>Thread
The problem with that is, that not all Schedulers might work on the User
level. You can think of Batch/Job, Parent, Group, Session or namespace
level. That would be IMHO a generic Top,
On Thursday 19 April 2007, Ingo Molnar wrote:
>* Willy Tarreau <[EMAIL PROTECTED]> wrote:
>> You can certainly script it with -geometry. But it is the wrong
>> application for this matter, because you benchmark X more than
>> glxgears itself. What would be better is something like a line
>>
On Thursday 19 April 2007, Ingo Molnar wrote:
>* Willy Tarreau <[EMAIL PROTECTED]> wrote:
>> Good idea. The machine I'm typing from now has 1000 scheddos running
>> at +19, and 12 gears at nice 0. [...]
>>
>> From time to time, one of the 12 aligned gears will quickly perform a
>> full quarter of
On Thu, 19 Apr 2007, Mike Galbraith wrote:
> On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> > * Mike Galbraith <[EMAIL PROTECTED]> wrote:
> >
> > > With a heavily reniced X (perfectly fine), that should indeed solve my
> > > daily usage pattern nicely (always need godmode for shells,
On Thu, 19 Apr 2007, Ingo Molnar wrote:
> i disagree that the user 'would expect' this. Some users might. Others
> would say: 'my 10-thread rendering engine is more important than a
> 1-thread job because it's using 10 threads for a reason'. And the CFS
> feedback so far strengthens this
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> You can certainly script it with -geometry. But it is the wrong
> application for this matter, because you benchmark X more than
> glxgears itself. What would be better is something like a line
> rotating 360 degrees and doing some short stuff
Hi Ingo,
On Thu, Apr 19, 2007 at 11:01:44AM +0200, Ingo Molnar wrote:
>
> * Willy Tarreau <[EMAIL PROTECTED]> wrote:
>
> > Good idea. The machine I'm typing from now has 1000 scheddos running
> > at +19, and 12 gears at nice 0. [...]
>
> > From time to time, one of the 12 aligned gears will
William Lee Irwin III wrote:
* Andrew Morton <[EMAIL PROTECTED]> wrote:
Yes, there are potential compatibility problems. Example: a machine
with 100 busy httpd processes and suddenly a big gzip starts up from
console or cron.
[...]
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> > I think a better approach would be to keep track of the rightmost
> > entry, set the key to the rightmost's key +1 and then simply insert
> > it there.
>
> yeah. I had that implemented at a stage but was trying to be too
> clever for my own good
* Esben Nielsen <[EMAIL PROTECTED]> wrote:
> >+/*
> >+ * Temporarily insert at the last position of the tree:
> >+ */
> >+p->fair_key = LLONG_MAX;
> >+__enqueue_task_fair(rq, p);
> > p->on_rq = 1;
> >+
> >+/*
> >+ * Update the key to the real value, so that when
On Wed, 18 Apr 2007, Ingo Molnar wrote:
* Christian Hesse <[EMAIL PROTECTED]> wrote:
Hi Ingo and all,
On Friday 13 April 2007, Ingo Molnar wrote:
as usual, any sort of feedback, bugreports, fixes and suggestions are
more than welcome,
I just gave CFS a try on my system. From a user's
* Willy Tarreau <[EMAIL PROTECTED]> wrote:
> Good idea. The machine I'm typing from now has 1000 scheddos running
> at +19, and 12 gears at nice 0. [...]
> From time to time, one of the 12 aligned gears will quickly perform a
> full quarter of round while others slowly turn by a few degrees.
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > > And yes, by fairly, I mean fairly among all threads as a base
> > > resource class, because that's what Linux has always done
> >
> > Yes, there are potential compatibility
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
> > That's one reason why i dont think it's necessarily a good idea to
> > group-schedule threads, we dont really want to do a per thread group
> > percpu_alloc().
>
> I still do not have clear how much overhead this will bring into the
> table,
* Andrew Morton <[EMAIL PROTECTED]> wrote:
>> Yes, there are potential compatibility problems. Example: a machine
>> with 100 busy httpd processes and suddenly a big gzip starts up from
>> console or cron.
[...]
On Thu, Apr 19, 2007 at 08:38:10AM +0200, Ingo Molnar wrote:
> h. How about
On Thu, 2007-04-19 at 09:09 +0200, Ingo Molnar wrote:
> * Mike Galbraith <[EMAIL PROTECTED]> wrote:
>
> > With a heavily reniced X (perfectly fine), that should indeed solve my
> > daily usage pattern nicely (always need godmode for shells, but not
> > for mozilla and ilk. 50/50 split automatic
On Thu, 2007-04-19 at 08:52 +0200, Mike Galbraith wrote:
> On Wed, 2007-04-18 at 23:48 +0200, Ingo Molnar wrote:
>
> > so my current impression is that we want per UID accounting to solve the
> > X problem, the kernel threads problem and the many-users problem, but
> > i'd not want to do it for
* Mike Galbraith <[EMAIL PROTECTED]> wrote:
> With a heavily reniced X (perfectly fine), that should indeed solve my
> daily usage pattern nicely (always need godmode for shells, but not
> for mozilla and ilk. 50/50 split automatic without renice of entire
> gui)
how about the
On Wed, 2007-04-18 at 23:48 +0200, Ingo Molnar wrote:
> so my current impression is that we want per UID accounting to solve the
> X problem, the kernel threads problem and the many-users problem, but
> i'd not want to do it for threads just yet because for them there's not
> really any
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > And yes, by fairly, I mean fairly among all threads as a base
> > resource class, because that's what Linux has always done
>
> Yes, there are potential compatibility problems. Example: a machine
> with 100 busy httpd processes and suddenly a
On Thu, 19 Apr 2007 05:18:07 +0200 Nick Piggin <[EMAIL PROTECTED]> wrote:
> And yes, by fairly, I mean fairly among all threads as a base resource
> class, because that's what Linux has always done
Yes, there are potential compatibility problems. Example: a machine with
100 busy httpd processes
On Wed, Apr 18, 2007 at 10:49:45PM +1000, Con Kolivas wrote:
> On Wednesday 18 April 2007 22:13, Nick Piggin wrote:
> >
> > The kernel compile (make -j8 on 4 thread system) is doing 1800 total
> > context switches per second (450/s per runqueue) for cfs, and 670
> > for mainline. Going up to 20ms
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
>
>
> On Wed, 18 Apr 2007, Matt Mackall wrote:
> >
> > Why is X special? Because it does work on behalf of other processes?
> > Lots of things do this. Perhaps a scheduler should focus entirely on
> > the implicit and directed
Ingo Molnar wrote:
* Peter Williams <[EMAIL PROTECTED]> wrote:
And my scheduler for example cuts down the amount of policy code and
code size significantly.
Yours is one of the smaller patches mainly because you perpetuate (or
you did in the last one I looked at) the (horrible to my eyes)
Chris Friesen wrote:
Mark Glines wrote:
One minor question: is it even possible to be completely fair on SMP?
For instance, if you have a 2-way SMP box running 3 applications, one of
which has 2 threads, will the threaded app have an advantage here? (The
current system seems to try to keep
On Wed, 18 Apr 2007, Davide Libenzi wrote:
>
> I know, we agree there. But that did not fit my "Pirates of the Caribbean"
> quote :)
Ahh, I'm clearly not cultured enough, I didn't catch that reference.
Linus "yes, I've seen the movie, but it apparently left more of a
Linus Torvalds wrote:
On Wed, 18 Apr 2007, Matt Mackall wrote:
On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
And "fairness by euid" is probably a hell of a lot easier to do than
trying to figure out the wakeup matrix.
For the record, you actually don't need to track a whole
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> On Wed, 18 Apr 2007, Davide Libenzi wrote:
> >
> > "Perhaps on the rare occasion pursuing the right course demands an act of
> > unfairness, unfairness itself can be the right course?"
>
> I don't think that's the right issue.
>
> It's just that
On Wed, 18 Apr 2007, Ingo Molnar wrote:
> That's one reason why i dont think it's necessarily a good idea to
> group-schedule threads, we dont really want to do a per thread group
> percpu_alloc().
I still do not have clear how much overhead this will bring into the
table, but I think (like
On Wednesday 18 April 2007 22:33, Con Kolivas wrote:
> On Wednesday 18 April 2007 22:14, Nick Piggin wrote:
> > On Wed, Apr 18, 2007 at 07:33:56PM +1000, Con Kolivas wrote:
> > > On Wednesday 18 April 2007 18:55, Nick Piggin wrote:
> > > > Again, for comparison 2.6.21-rc7 mainline:
* Davide Libenzi <[EMAIL PROTECTED]> wrote:
> I think Ingo's idea of a new sched_group to contain the generic
> parameters needed for the "key" calculation, works better than adding
> more fields to existing structures (that would, of course, host
> pointers to it). Otherwise I can already see the
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> > perhaps a more fitting term would be 'precise group-scheduling'.
> > Within the lowest level task group entity (be that thread group or
> > uid group, etc.) 'precise scheduling' is equivalent to 'fairness'.
>
> Yes. Absolutely. Except I think
On Wed, 18 Apr 2007, Davide Libenzi wrote:
>
> "Perhaps on the rare occasion pursuing the right course demands an act of
> unfairness, unfairness itself can be the right course?"
I don't think that's the right issue.
It's just that "fairness" != "equal".
Do you think it "fair" to pay
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> For example, maybe we can approximate it by spreading out the statistics:
> right now you have things like
>
> - last_ran, wait_runtime, sum_wait_runtime..
>
> be per-thread things. Maybe some of those can be spread out, so that you
> put a part
On Wed, 18 Apr 2007, Linus Torvalds wrote:
> I'm not arguing against fairness. I'm arguing against YOUR notion of
> fairness, which is obviously bogus. It is *not* fair to try to give out
> CPU time evenly!
"Perhaps on the rare occasion pursuing the right course demands an act of
unfairness,
On Wed, 18 Apr 2007, William Lee Irwin III wrote:
> Thinking of the scheduler as a CPU bandwidth allocator, this means
> handing out shares of CPU bandwidth to all users on the system, which
> in turn hand out shares of bandwidth to all sessions, which in turn
> hand out shares of bandwidth to
* Linus Torvalds <[EMAIL PROTECTED]> wrote:
> For example, maybe we can approximate it by spreading out the
> statistics: right now you have things like
>
> - last_ran, wait_runtime, sum_wait_runtime..
>
> be per-thread things. [...]
yes, yes, yes! :) My thinking is "struct sched_group"
On Wed, 18 Apr 2007, Ingo Molnar wrote:
>
> perhaps a more fitting term would be 'precise group-scheduling'. Within
> the lowest level task group entity (be that thread group or uid group,
> etc.) 'precise scheduling' is equivalent to 'fairness'.
Yes. Absolutely. Except I think that at least
Mark Glines wrote:
One minor question: is it even possible to be completely fair on SMP?
For instance, if you have a 2-way SMP box running 3 applications, one of
which has 2 threads, will the threaded app have an advantage here? (The
current system seems to try to keep each thread on a
On Wed, 18 Apr 2007, Ingo Molnar wrote:
>
> But note that most of the reported CFS interactivity wins, as surprising
> as it might be, were due to fairness between _the same user's tasks_.
And *ALL* of the CFS interactivity *losses* and complaints have been
because it did the wrong thing
On 4/18/07, Matt Mackall <[EMAIL PROTECTED]> wrote:
For the record, you actually don't need to track a whole NxN matrix
(or do the implied O(n**3) matrix inversion!) to get to the same
result. You can converge on the same node weightings (ie dynamic
priorities) by applying a damped function at
On Wed, 18 Apr 2007, Matt Mackall wrote:
> On Wed, Apr 18, 2007 at 07:48:21AM -0700, Linus Torvalds wrote:
> > And "fairness by euid" is probably a hell of a lot easier to do than
> > trying to figure out the wakeup matrix.
>
> For the record, you actually don't need to track a whole NxN matrix