In article <[EMAIL PROTECTED]> you wrote:
> a) it may do so for a short and bound time, typically less than the
> maximum acceptable latency for other tasks
if you have n threads in runq and each of them can have md (d=max latency
deadline) overhead, you will have to account for d/n slices. This
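The latency arithmetic in this argument can be written out explicitly (a sketch of the standard round-robin bound; the timeslice symbol is my notation, not from the thread):

```latex
% A task that has just run waits behind the other n-1 runnable threads,
% so bounding its delay by the maximum acceptable latency d requires:
\[
  (n-1)\,\tau \;\le\; d
  \quad\Longrightarrow\quad
  \tau \;\le\; \frac{d}{n-1} \;\approx\; \frac{d}{n} \qquad (n \gg 1)
\]
% i.e. the per-task timeslice must shrink as the run queue grows.
```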
On Wed, Apr 25, 2007 at 04:58:40AM -0700, William Lee Irwin III wrote:
> Adjustments to the lag computation for arrivals and departures
> during execution are among the missing pieces. Some algorithmic devices
> are also needed to account for the varying growth rates of lags of tasks
> waiting to
On Tuesday 24 April 2007 16:36, Ingo Molnar wrote:
> So, my point is, the nice level of X for desktop users should not be set
> lower than a low limit suggested by that particular scheduler's author.
> That limit is scheduler-specific. Con i think recommends a nice level of
> -1 for X when using SD
* Li, Tong N <[EMAIL PROTECTED]> wrote:
>> [...] A corollary of this is that if both threads i and j are
>> continuously runnable with fixed weights in the time interval, then
>> the ratio of their CPU time should be equal to the ratio of their
>> weights. This definition is pretty restrictive
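Symbolically, the corollary quoted above says (notation mine, following the usual proportional-share formulation):

```latex
\[
  \frac{S_i(t_1,t_2)}{S_j(t_1,t_2)} \;=\; \frac{w_i}{w_j}
\]
% S_i(t1,t2): CPU time thread i receives in [t1,t2]; w_i: its weight.
% Holds whenever both threads are continuously runnable with fixed
% weights over the whole interval.
```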
> > it into some xorg.conf field. (It also makes sure that X isnt preempted
> > by other userspace stuff while it does timing-sensitive operations like
> > setting the video modes up or switching video modes, etc.)
>
> X is privileged. It can just cli around the critical section.
Not really.
* Ray Lee <[EMAIL PROTECTED]> wrote:
> It would seem like there should be a penalty associated with sending
> those points as well, so that two processes communicating quickly with
> each other won't get into a mutual love-fest that'll capture the
> scheduler's attention.
it's not really points,
* Rogan Dawes <[EMAIL PROTECTED]> wrote:
> My concern was that since Ingo said that this is a closed economy,
> with a fixed sum/total, if we lose a nanosecond here and there,
> eventually we'll lose them all.
it's not a closed economy - the CPU constantly produces a resource: "CPU
cycles to
On Tue, Apr 24, 2007 at 06:22:53PM -0700, Li, Tong N wrote:
> The goal of a proportional-share scheduling algorithm is to minimize the
> above metrics. If the lag function is bounded by a constant for any
> thread in any time interval, then the algorithm is considered to be
> fair. You may notice
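For readers following along, the lag metric referred to here is conventionally the gap between a thread's ideal weighted share and the service it actually received (notation mine):

```latex
\[
  \mathrm{lag}_i(t_1,t_2)
  \;=\; \frac{w_i}{\sum_j w_j}\,(t_2 - t_1) \;-\; S_i(t_1,t_2)
\]
% The scheduler is proportionally fair if |lag_i(t1,t2)| <= c for some
% constant c, for every thread i and every interval [t1,t2].
```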
> Could you explain for the audience the technical definition of fairness
> and what sorts of error metrics are commonly used? There seems to be
> some disagreement, and you're neutral enough of an observer that your
> statement would help.
The definition for proportional fairness assumes that each
On Tuesday 24 April 2007, Willy Tarreau wrote:
>On Tue, Apr 24, 2007 at 10:38:32AM -0400, Gene Heskett wrote:
>> On Tuesday 24 April 2007, Ingo Molnar wrote:
>> >* David Lang <[EMAIL PROTECTED]> wrote:
>> >> > (Btw., to protect against such mishaps in the future i have changed
>> >> > the SysRq-N
Rogan Dawes wrote:
> Chris Friesen wrote:
> > Rogan Dawes wrote:
> > > I guess my point was if we somehow get to an odd number of
> > > nanoseconds, we'd end up with rounding errors. I'm not sure if your
> > > algorithm will ever allow that.
> > And Ingo's point was that when it takes thousands of nanoseconds for a
In article <[EMAIL PROTECTED]> you wrote:
> Could you explain for the audience the technical definition of fairness
> and what sorts of error metrics are commonly used? There seems to be
> some disagreement, and you're neutral enough of an observer that your
> statement would help.
And while we are at
On Mon, Apr 23, 2007 at 05:59:06PM -0700, Li, Tong N wrote:
> I don't know if we've discussed this or not. Since both CFS and SD claim
> to be fair, I'd like to hear more opinions on the fairness aspect of
> these designs. In areas such as OS, networking, and real-time, fairness,
> and its more general form, proportional fairness, are well-defined
> terms. In fact,
On Tue, Apr 24, 2007 at 10:38:32AM -0400, Gene Heskett wrote:
> On Tuesday 24 April 2007, Ingo Molnar wrote:
> >* David Lang <[EMAIL PROTECTED]> wrote:
> >> > (Btw., to protect against such mishaps in the future i have changed
> >> > the SysRq-N [SysRq-Nice] implementation in my tree to not only
>
Rogan Dawes wrote:
> My concern was that since Ingo said that this is a closed economy, with
> a fixed sum/total, if we lose a nanosecond here and there, eventually
> we'll lose them all.
I assume Ingo has set it up so that the system doesn't "lose" partial
nanoseconds, but rather they'd just be
On 4/23/07, Linus Torvalds <[EMAIL PROTECTED]> wrote:
> On Mon, 23 Apr 2007, Ingo Molnar wrote:
> >
> > The "give scheduler money" transaction can be both an "implicit
> > transaction" (for example when writing to UNIX domain sockets or
> > blocking on a pipe, etc.), or it could be an "explicit
Chris Friesen wrote:
> Rogan Dawes wrote:
> > I guess my point was if we somehow get to an odd number of
> > nanoseconds, we'd end up with rounding errors. I'm not sure if your
> > algorithm will ever allow that.
> And Ingo's point was that when it takes thousands of nanoseconds for a
> single context
Rogan Dawes wrote:
> I guess my point was if we somehow get to an odd number of nanoseconds,
> we'd end up with rounding errors. I'm not sure if your algorithm will
> ever allow that.
And Ingo's point was that when it takes thousands of nanoseconds for a
single context switch, an error of half a
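To put numbers on Ingo's proportion argument: the half-unit rounding error is per context switch, and a switch itself costs thousands of nanoseconds, so the relative error stays tiny (the 2000 ns switch cost below is an assumed round number, not a figure from the thread):

```latex
\[
  \frac{0.5\ \text{ns (worst-case rounding)}}{2000\ \text{ns (one context switch)}}
  \;=\; 0.025\%
\]
```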
On Tuesday 24 April 2007, Ingo Molnar wrote:
>* Ingo Molnar <[EMAIL PROTECTED]> wrote:
>> yeah, i guess this has little to do with X. I think in your scenario
>> it might have been smarter to either stop, or to renice the workloads
>> that took away CPU power from others to _positive_ nice levels.
Ingo Molnar wrote:
> * Rogan Dawes <[EMAIL PROTECTED]> wrote:
>
>         if (p_to && p->wait_runtime > 0) {
>                 p->wait_runtime >>= 1;
>                 p_to->wait_runtime += p->wait_runtime;
>         }
>
> the above is the basic expression of: "charge a positive bank balance".
> [..]
> [note,
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...] That way you'd only have had to hit SysRq-N to get the system
> out of the wedge.)
small correction: Alt-SysRq-N.
Ingo
* Rogan Dawes <[EMAIL PROTECTED]> wrote:
> > if (p_to && p->wait_runtime > 0) {
> >         p->wait_runtime >>= 1;
> >         p_to->wait_runtime += p->wait_runtime;
> > }
> >
> > the above is the basic expression of: "charge a positive bank balance".
> >
> [..]
>
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> yeah, i guess this has little to do with X. I think in your scenario
> it might have been smarter to either stop, or to renice the workloads
> that took away CPU power from others to _positive_ nice levels.
> Negative nice levels can indeed be
On Tue, 24 Apr 2007, Ingo Molnar wrote:
> * Gene Heskett <[EMAIL PROTECTED]> wrote:
> > > Gene has done some testing under CFS with X reniced to +10 and the
> > > desktop still worked smoothly for him.
> > As a data point here, and probably nothing to do with X, but I did
> > manage to lock it up, solid, reset
* Gene Heskett <[EMAIL PROTECTED]> wrote:
> > Gene has done some testing under CFS with X reniced to +10 and the
> > desktop still worked smoothly for him.
>
> As a data point here, and probably nothing to do with X, but I did
> manage to lock it up, solid, reset button time tonight, by
Ingo Molnar wrote:
> static void
> yield_task_fair(struct rq *rq, struct task_struct *p, struct task_struct *p_to)
> {
>         struct rb_node *curr, *next, *first;
>         struct task_struct *p_next;
>
>         /*
>          * yield-to support: if we are on the same runqueue then
>          * give half of
* Peter Williams <[EMAIL PROTECTED]> wrote:
> > The cases are fundamentally different in behavior, because in the
> > first case, X hardly consumes the time it would get in any scheme,
> > while in the second case X really is CPU bound and will happily
> > consume any CPU time it can get.
>
Arjan van de Ven wrote:
> Within reason, it's not the number of clients that X has that causes its
> CPU bandwidth use to sky rocket and cause problems. It's more to do
> with what type of clients they are. Most GUIs (even ones that are
> constantly updating visual data (e.g. gkrellm -- I can open
Linus Torvalds wrote:
> The "perfect" situation would be that when somebody goes to sleep, any
> extra points it had could be given to whoever it woke up last. Note that
> for something like X, it means that the points are 100% ephemeral: it gets
> points when a client sends it a request, but
2007/4/23, Ingo Molnar <[EMAIL PROTECTED]>:
>         p->wait_runtime >>= 1;
>         p_to->wait_runtime += p->wait_runtime;
I have no problem with clients giving some credit to X,
I am more concerned with X giving half of its credit to
a single client, a quarter of its credit to
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> sorry, i was a bit imprecise here. There is a case where CFS can give
> out a 'loan' to tasks. The scheduler tick has a low resolution, so it
> is fundamentally inevitable [*] that tasks will run a bit more than
> they should, and at a heavy
* Ingo Molnar <[EMAIL PROTECTED]> wrote:
> (we obviously dont want to allow people to 'share' their loans with
> others ;), nor do we want to allow a net negative balance. CFS is
> really brutally cold-hearted, it has a strict 'no loans' policy - the
> easiest economic way to manage
Hi !
On Mon, Apr 23, 2007 at 09:11:43PM +0200, Ingo Molnar wrote:
>
> * Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
> > but the point I'm trying to make is that X shouldn't get more CPU-time
> > because it's "more important" (it's not: and as noted earlier,
> > thinking that it's more
On Mon, 23 Apr 2007, Nick Piggin wrote:
> > If you have a single client, the X server is *not* more important than the
> > client, and indeed, renicing the X server causes bad patterns: just
> > because the client sends a request does not mean that the X server should
> > immediately be given