On Tue, Apr 24, 2007 at 08:50:20AM -0700, Ray Lee wrote:
> > Firstly, lots of clients in your list are remote. X usually isn't.
>
> They really aren't, unless you happen to work somewhere that can afford
> to dedicate a box to a db, which suddenly makes the scheduler a dull
> topic.
>
> For
Nick Piggin wrote:
> On Thu, Apr 19, 2007 at 12:26:03PM -0700, Ray Lee wrote:
>> On 4/19/07, Con Kolivas <[EMAIL PROTECTED]> wrote:
>>> The one fly in the ointment for
>>> linux remains X. I am still, to this moment, completely and utterly stunned
>>> at why everyone is trying to find increasingly
On Sunday 22 April 2007 22:54, Mark Lord wrote:
> Just to throw another possibly-overlooked variable into the mess:
>
> My system here is using the on-demand cpufreq policy governor.
> I wonder how that interacts with the various schedulers here?
>
> I suppose for the "make" kernel case, after a
Just to throw another possibly-overlooked variable into the mess:
My system here is using the on-demand cpufreq policy governor.
I wonder how that interacts with the various schedulers here?
I suppose for the "make" kernel case, after a couple of seconds
the cpufreq would hit max and stay there
Nick Piggin wrote:
On Thu, Apr 19, 2007 at 09:17:25AM -0400, Mark Lord wrote:
Just plain "make" (no -j2 or -j) is enough to kill interactivity
on my 2GHz P-M single-core non-HT machine with SD.
Is this with or without X reniced?
That was with no manual jiggling, everything the same as
On Fri, Apr 20, 2007 at 12:12:29AM -0700, Michael K. Edwards wrote:
> Actual fractional CPU reservation is a bit different, and is probably
> best handled with "container"-type infrastructure (not quite
> virtualization, but not quite scheduling classes either). SGI
> pioneered this (in "open
On 4/19/07, hui Bill Huey <[EMAIL PROTECTED]> wrote:
DSP operations, particularly with digital synthesis, tend to max
the CPU doing vector operations on as many processors as it can get
a hold of. In a live performance critical application, it's important
to be able to deliver a protected
On Thu, Apr 19, 2007 at 05:20:53PM -0700, Michael K. Edwards wrote:
> Embedded systems are already in 2007, and the mainline Linux scheduler
> frankly sucks on them, because it thinks it's back in the 1960's with
> a fixed supply and captive demand, pissing away "CPU bandwidth" as
> waste heat.
On Thu, Apr 19, 2007 at 06:32:15PM -0700, Michael K. Edwards wrote:
> But I think SCHED_FIFO on a chain of tasks is fundamentally not the
> right way to handle low audio latency. The object with a low latency
> requirement isn't the task, it's the device. When it's starting to
> get urgent to
On Fri, 2007-04-20 at 08:47 +1000, Con Kolivas wrote:
> It's those who want X to have an unfair advantage that want it to do
> something "special".
I hope you're not lumping me in with "those". If X + client had been
able to get their fair share and do so in the low latency manner they
need, I
On Thu, Apr 19, 2007 at 12:26:03PM -0700, Ray Lee wrote:
> On 4/19/07, Con Kolivas <[EMAIL PROTECTED]> wrote:
> >The one fly in the ointment for
> >linux remains X. I am still, to this moment, completely and utterly stunned
> >at why everyone is trying to find increasingly complex unique ways to
On Thu, Apr 19, 2007 at 09:17:25AM -0400, Mark Lord wrote:
> Con Kolivas wrote:
> > So yes go ahead and think up great ideas for other ways of metering out cpu
> > bandwidth for different purposes, but for X, given the absurd simplicity
> > of renicing, why keep fighting it? Again I reiterate that most
On Thursday 19 April 2007, Con Kolivas wrote:
>On Friday 20 April 2007 04:16, Gene Heskett wrote:
>> On Thursday 19 April 2007, Con Kolivas wrote:
>>
>> [and I snipped a good overview]
>>
>> >So yes go ahead and think up great ideas for other ways of metering out
>> > cpu bandwidth for different
On 4/19/07, Lee Revell <[EMAIL PROTECTED]> wrote:
IMHO audio streamers should use SCHED_FIFO thread for time critical
work. I think it's insane to expect the scheduler to figure out that
these processes need low latency when they can just be explicit about
it. "Professional" audio software
On Thu, 19 Apr 2007, Ed Tomlinson wrote:
> >
> > SD just doesn't do nearly as good as the stock scheduler, or CFS, here.
> >
> > I'm quite likely one of the few single-CPU/non-HT testers of this stuff.
> > If it should ever get more widely used I think we'd hear a lot more
> > complaints.
>
On Thursday 19 April 2007 12:15, Mark Lord wrote:
> Con Kolivas wrote:
> > On Thursday 19 April 2007 23:17, Mark Lord wrote:
> >> Con Kolivas wrote:
> >>> So yes go ahead and think up great ideas for other ways of metering out cpu
> >>> bandwidth for different purposes, but for X, given the absurd
Con Kolivas wrote:
> You're welcome and thanks for taking the floor to speak. I would say you have
actually agreed with me though. X is not unique, it's just an obvious one, so
> let's not design the cpu scheduler around the problem with X. Same goes for
> every other application. Leaving the
On 4/19/07, Con Kolivas <[EMAIL PROTECTED]> wrote:
The cpu scheduler core is a cpu bandwidth and latency
proportionator and should be nothing more or less.
Not really. The CPU scheduler is (or ought to be) what electric
utilities call an economic dispatch mechanism -- a real-time
controller
On Friday 20 April 2007 02:15, Mark Lord wrote:
> Con Kolivas wrote:
> > On Thursday 19 April 2007 23:17, Mark Lord wrote:
> >> Con Kolivas wrote:
> >>> So yes go ahead and think up great ideas for other ways of metering out cpu
> >>> bandwidth for different purposes, but for X, given the absurd
>
On Friday 20 April 2007 05:26, Ray Lee wrote:
> On 4/19/07, Con Kolivas <[EMAIL PROTECTED]> wrote:
> > The one fly in the ointment for
> > linux remains X. I am still, to this moment, completely and utterly
> > stunned at why everyone is trying to find increasingly complex unique
> > ways to
On Friday 20 April 2007 04:16, Gene Heskett wrote:
> On Thursday 19 April 2007, Con Kolivas wrote:
>
> [and I snipped a good overview]
>
> >So yes go ahead and think up great ideas for other ways of metering out
> > cpu bandwidth for different purposes, but for X, given the absurd
> > simplicity
On 4/19/07, Gene Heskett <[EMAIL PROTECTED]> wrote:
Having tried re-nicing X a while back, and having the rest of the system
suffer in quite obvious ways for even 1 + or - from its default felt pretty
bad from this user's perspective.
It is my considered opinion (yeah I know, I'm just a leaf in
On 4/19/07, Con Kolivas <[EMAIL PROTECTED]> wrote:
The one fly in the ointment for
linux remains X. I am still, to this moment, completely and utterly stunned
at why everyone is trying to find increasingly complex unique ways to manage
X when all it needs is more cpu[1].
[...and hence should be
On Thursday 19 April 2007, Mark Lord wrote:
>Con Kolivas wrote:
>> On Thursday 19 April 2007 23:17, Mark Lord wrote:
>>> Con Kolivas wrote:
>>>> So yes go ahead and think up great ideas for other ways of metering out cpu
>>>> bandwidth for different purposes, but for X, given the absurd simplicity
On Thursday 19 April 2007, Con Kolivas wrote:
[and I snipped a good overview]
>So yes go ahead and think up great ideas for other ways of metering out cpu
>bandwidth for different purposes, but for X, given the absurd simplicity of
>renicing, why keep fighting it? Again I reiterate that most
Con Kolivas wrote:
On Thursday 19 April 2007 23:17, Mark Lord wrote:
Con Kolivas wrote:
So yes go ahead and think up great ideas for other ways of metering out cpu
bandwidth for different purposes, but for X, given the absurd simplicity
of renicing, why keep fighting it? Again I reiterate that
Con Kolivas wrote:
Ok, there are 3 known schedulers currently being "promoted" as solid
replacements for the mainline scheduler which address most of the issues with
mainline (and about 10 other ones not currently being promoted). The main way
they do this is through attempting to maintain
On Thursday 19 April 2007 23:17, Mark Lord wrote:
> Con Kolivas wrote:
> > So yes go ahead and think up great ideas for other ways of metering out cpu
> > bandwidth for different purposes, but for X, given the absurd simplicity
> > of renicing, why keep fighting it? Again I reiterate that most users of
On 4/19/07, Peter Williams <[EMAIL PROTECTED]> wrote:
PS I think that the tasks most likely to be adversely affected by X's
CPU storms (enough to annoy the user) are audio streamers so when you're
doing tests to determine the best nice value for X I suggest that would
be a good criterion. Video
Peter Williams wrote:
Con Kolivas wrote:
Ok, there are 3 known schedulers currently being "promoted" as solid
replacements for the mainline scheduler which address most of the
issues with mainline (and about 10 other ones not currently being
promoted). The main way they do this is through
Con Kolivas wrote:
So yes go ahead and think up great ideas for other ways of metering out cpu
bandwidth for different purposes, but for X, given the absurd simplicity of
renicing, why keep fighting it? Again I reiterate that most users of SD have
not found the need to renice X anyway except if