Jack O'Quin wrote:
Werner Almesberger <[EMAIL PROTECTED]> writes:
> [ Cc:s trimmed, added abiss-general ]
>
> Con Kolivas wrote:
>> Possibly reiserfs journal related. That has larger non-preemptible code
>> sections.
>
> If I understand your workload right, it should consist mainly of
> computation, networking (?), and disk reads.
> I don't know much about ReiserFS, but in s
Jack O'Quin wrote:
>
> If you grep your log file for 'client failure:', you'll probably find
> that JACK has reacted to the deteriorating situation by shutting down
some of its clients. The number of 'client failure:' messages is
*not* the number of clients shut down; there is some repetition
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> I was just pointing out that saying nice(-20) works as well as
> SCHED_ISO, though true, doesn't mean much since neither of them
> (currently) work well enough to be useful.
ok. While i still think nice--20 can be quite good for some purposes, it
will p
Con Kolivas wrote:
There were numerous bugs in the SCHED_ISO design prior to now, so it
really was not performing as expected. What is most interesting is that
the DSP load goes to much higher levels now if xruns are avoided and
stay at those high levels. If I push the cpu load too much so that they
get transient
Ingo Molnar <[EMAIL PROTECTED]> writes:
> * Jack O'Quin <[EMAIL PROTECTED]> wrote:
>
>> First, only SCHED_FIFO worked reliably in my tests. In Con's tests
>> even that did not work. My system is probably better tuned for low
>> latency than his. Until we can determine why there were so many
>> xruns, it is premature to declare victo
Con Kolivas wrote:
-cc list trimmed to those who have recently responded.
Here is a patch to go on top of 2.6.11-rc2-mm1 that fixes some bugs in
the general SCHED_ISO code, fixes the priority support between ISO
threads, and implements SCHED_ISO_RR and SCHED_ISO_FIFO as separate
policies. Note the bugfixes and cle
Jack O'Quin wrote:
I still wonder if some coding error might occasionally be letting a
lower priority process continue running after an interrupt when it
ought to be preempted.
Well not surprisingly I did find a bug in my patch which did not honour
priority support between ISO threads. So basicall
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> Has anyone done this kind of realtime testing on an SMP system? I'd
> love to know how they compare. Unfortunately, I don't have access to
> one at the moment. Are they generally better or worse for this kind
> of work? I'm not asking about partition
* Paolo Ciarrocchi <[EMAIL PROTECTED]> wrote:
> On Mon, 24 Jan 2005 09:59:02 +0100, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...]
> > - CKRM is another possibility, and has nonzero costs as well, but solves
> > a wider range of problems.
>
> BTW, do you know what's the status of CKRM ? If I'm
Paolo Ciarrocchi wrote:
On Mon, 24 Jan 2005 09:59:02 +0100, Ingo Molnar <[EMAIL PROTECTED]> wrote:
[...]
- CKRM is another possibility, and has nonzero costs as well, but solves
a wider range of problems.
BTW, do you know what's the status of CKRM ?
If I'm not wrong it is already widely used, is t
On Mon, 24 Jan 2005 09:59:02 +0100, Ingo Molnar <[EMAIL PROTECTED]> wrote:
[...]
> - CKRM is another possibility, and has nonzero costs as well, but solves
> a wider range of problems.
BTW, do you know what's the status of CKRM ?
If I'm not wrong it is already widely used, is there any plan to pu
Jack O'Quin wrote:
I'll try building a SCHED_RR version of JACK. I still don't think it
will make any difference. But my intuition isn't working very well
right now, so I need more data.
Could be that despite what it appears, FIFO behaviour may be desirable
to RR. Also the RR in SCHED_ISO is pre
Jack O'Quin <[EMAIL PROTECTED]> writes:
> Will post the correct numbers shortly. Sorry for the screw-up.
Here they are...
http://www.joq.us/jack/benchmarks/sched-isoprio
http://www.joq.us/jack/benchmarks/sched-isoprio+compile
I moved the previous runs to the sched-fifo* directories where t
Jack O'Quin <[EMAIL PROTECTED]> writes:
> These results are indistinguishable from SCHED_FIFO...
Disregard my previous message, it was an idiotic mistake. The results
were indistinguishable from SCHED_FIFO because they *were* SCHED_FIFO.
I'm running everything again, this time with the correct s
Con Kolivas <[EMAIL PROTECTED]> writes:
>>>Second the patch I sent you is fine for testing; I was hoping you
>>>would try it. What you can't do with it is spawn lots of userspace
>>>apps safely SCHED_ISO with it - it will crash, but it will not take down
>>>your hard disk. I've had significantly better
Con Kolivas <[EMAIL PROTECTED]> writes:
> There are two things that the SCHED_ISO you tried is not that
> SCHED_FIFO is - As you mentioned there is no priority support, and it
> is RR, not FIFO. I am not sure whether it is one and or the other
> responsible. Both can be added to SCHED_ISO. I haven
Jack O'Quin wrote:
Looked at this way, there really is no question. The new scheduler
prototypes are falling short significantly. Could this be due to
their lack of priority distinctions between realtime threads? Maybe.
I can't say for sure. I'll be interested to see what happens when Con
is re
Ingo Molnar <[EMAIL PROTECTED]> writes:
> thanks for the testing. The important result is that nice--20
> performance is roughly the same as SCHED_ISO. This somewhat
> reduces the urgency of the introduction of SCHED_ISO.
Doing more runs and a more thorough analysis has driven me to a
different c
At 03:50 PM 1/23/2005 +1100, Con Kolivas wrote:
> Looks like the number of steps to convert a modern "standard setup"
> desktop to a low latency one on linux aren't that big after all :)
Yup, modern must be the key. Even Ingo can't help my little ole PIII/500
with YMF-740C. Dang thing can't handle -p64 (alsa rejects that, causing
jackd to become terminally upset), and it can't even handle 4 clients at
SCHED_FIFO despite latest/greatest RT preempt kernel without xruns.
Jack O'Quin wrote:
I'm wondering now if the lack of priority support in the two
prototypes might explain the problems I'm seeing.
Distinctly possible since my results got better with priority support.
However I'm still bugfixing what I've got. Just as a data point here is
an incremental patch for
Jack O'Quin <[EMAIL PROTECTED]> writes:
>
> I ran three sets of tests with three or more 5 minute runs for each
> case. The results (log files and graphs) are in these directories...
>
> 1) sched-fifo -- as a baseline
> http://www.joq.us/jack/benchmarks/sched-fifo
>
> 2) sched-iso -- Con'
Con Kolivas <[EMAIL PROTECTED]> writes:
> Meanwhile, I have the priority support working (but not bug free), and
> the preliminary results suggest that the results are better. Do I
> recall someone mentioning jackd uses threads at different priority?
Yes, it does.
I'm not sure whether that mat
Con Kolivas <[EMAIL PROTECTED]> writes:
> Jack O'Quin wrote:
> [snip lots of valid points]
>> suggest some things to try. First, make sure the JACK tmp directory
>> is mounted on a tmpfs[1]. Then, try the test with ext2, instead of
>
> Looks like the tmpfs is probably the biggest problem. Here's SCHED_ISO
> with just the /tmp mounted on tmpfs change
Jack O'Quin wrote:
Chris Wright and Arjan van de Ven have outlined a proposal to address
the privilege issue using rlimits. This is still the only workable
alternative to the realtime LSM on the table. If the decision were up
to me, I would choose the simplicity and better security of the LSM.
Bu
Paul Davis wrote:
> The idea is to get equivalent performance to SCHED_FIFO. The results
> show that much, and it is 100 times better than unprivileged
> SCHED_NORMAL. The fact that this is an unoptimised normal desktop
> environment means that the conclusion we _can_ draw is that SCHED_ISO is
> as good as SCHED_FIFO
Jack O'Quin wrote:
Neither run exhibits reliable audio performance. There is some low
latency performance problem with your system. Maybe ReiserFS is
causing trouble even with logging turned off. Perhaps the problem is
somewhere else. Maybe some device is misbehaving.
Until you solve this probl
Ingo Molnar <[EMAIL PROTECTED]> writes:
> thanks for the testing. The important result is that nice--20
> performance is roughly the same as SCHED_ISO. This somewhat
> reduces the urgency of the introduction of SCHED_ISO.
I can see why you feel that way, but don't share your conclusion.
First,
Con Kolivas <[EMAIL PROTECTED]> writes:
> So let's try again, sorry about the noise:
>
> ==> jack_test4-2.6.11-rc1-mm2-fifo.log <==
> *
> XRUN Count . . . . . . . . . : 3
> Delay Maximum . . . . . . . . : 20161 usecs
>
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> I finally made new kernel builds for the latest patches from both Ingo
> and Con. I kept the two patch sets separate, as they modify some of
> the same files.
>
> I ran three sets of tests with three or more 5 minute runs for each
> case. The results
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
> Here's fresh results on more stressed hardware (on ext3) with
> 2.6.11-rc1-mm2 (which by the way has SCHED_ISO v2 included). The load
> hovering at 50% spikes at times close to 70 which tests the behaviour
> under iso throttling.
What version of JACK are
Con Kolivas <[EMAIL PROTECTED]> writes:
> As for priority support, I have been working on it. While the test
> cases I've been involved in show no need for it, I can understand why
> it would be desirable.
Yes. Rui's jack_test3.2 does not require multiple realtime
priorities, but I can point to
Ingo Molnar <[EMAIL PROTECTED]> writes:
> just finished a short testrun with nice--20 compared to SCHED_FIFO, on a
> relatively slow 466 MHz box:
> this shows the surprising result that putting all RT tasks on nice--20
> reduced context-switch rate by 20% and the Delay Maximum is lower as
> well.
Rui Nuno Capela wrote:
OK. Here goes my fresh and newly jack_test4.1 test suite. It might be
still rough, as usual ;)
Thanks
Here's fresh results on more stressed hardware (on ext3) with
2.6.11-rc1-mm2 (which by the way has SCHED_ISO v2 included). The load
hovering at 50% spikes at times close to
Hi
I dislike the behavior of the SCHED_ISO patch that iso tasks are
degraded to SCHED_NORMAL if they exceed the limit.
IMHO it's better to throttle them at the iso_cpu limit.
I have modified Con's iso2 patch to do this. If iso_cpu > 50 iso tasks
only get stalled for 1 tick (1ms on x86).
Fortunat
Con Kolivas <[EMAIL PROTECTED]> writes:
> Rui Nuno Capela wrote:
>> My eyes can't find anything related, but you know how intuitive these
>> things are ;)
>
> He means when using the SCHED_ISO patch. Then you'd have iso_cpu and
> iso_period, which you have neither of so you are not using SCHED_ISO
Jack O'Quin wrote:
>
> [...] Looking at the graph, it appears that your DSP load is hovering
> above 70% most of the time. This happens to be the default threshold
> for revoking realtime privileges. Perhaps that is the problem. Try
> running it with the threshold set to 90%. (I don't recall ex
* Con Kolivas <[EMAIL PROTECTED]> wrote:
> In terms of recommendation, the latency of non-preemptible codepaths
> will be fastest in ext3 in 2.6 due to the nature of it constantly
> being examined, addressed and updated. That does not mean it has the
> fastest performance by any stretch of the im
> "Jack" == Jack O'Quin <[EMAIL PROTECTED]> writes:
Jack> Looks like we need to do another study to determine which
Jack> filesystem works best for multi-track audio recording and
Jack> playback. XFS looks promising, but only if they get the latency
Jack> right. Any experience with that?
Alexander Nyberg wrote:
> My simple yield DoS doesn't work anymore, but I found another way.
> Running this as SCHED_ISO:
Yep, bad accounting in queue_iso() which relied on p->array == rq->active
This fixes it:
Index: vanilla/kernel/sched.c
===
--- vanilla.o
"Rui Nuno Capela" <[EMAIL PROTECTED]> writes:
> OK. Here goes my fresh and newly jack_test4.1 test suite. It might be
> still rough, as usual ;)
Thanks for all your work on this fine test suite.
> This phenomenon, so to speak, shows up as a sudden full increase of
> DSP/CPU load after a few minu
On Thu, 2005-01-20 at 12:49 -0500, [EMAIL PROTECTED] wrote:
> It's been a long while since I followed ReiserFS development closely,
> *however*, this issue used to be a common problem with ReiserFS - when
> free space starts to drop below 10%, performance takes a big hit. So
> performance improved when
On Thu, Jan 20, 2005 at 10:42:24AM -0500, Paul Davis wrote:
> over on #ardour last week, we saw appalling performance from
> reiserfs. a 120GB filesystem with 11GB of space failed to be able to
> deliver enough read/write speed to keep up with a 16 track
> session. When the filesystem was cleared t
just finished a short testrun with nice--20 compared to SCHED_FIFO, on a
relatively slow 466 MHz box:
SCHED_FIFO:
* SUMMARY RESULT
Total seconds ran . . . . . . : 120
Number of clients . . . . . . : 4
Ports per client . . . . . . : 4
*
OK. Here goes my fresh and newly jack_test4.1 test suite. It might be
still rough, as usual ;)
(Jack: this post is an new edited version of the same I sent you last
weekend; sorry for the noise:)
The main difference against jack_test3.2 goes into the specific test client
(jack_test4_client.c). T
>That's discouraging about reiserfs. Is it version 3 or 4? Earlier
>versions showed good realtime responsiveness for audio testers. It
>had a reputation for working much better at lower latency than ext3.
over on #ardour last week, we saw appalling performance from
reiserfs. a 120GB filesystem
Con Kolivas <[EMAIL PROTECTED]> writes:
>> Jack O'Quin wrote:
>>> You're really getting hammered with those periodic 6 msec delays,
>>> though. The basic audio cycle is only 1.45 msec.
> Con Kolivas wrote:
>> As you've already pointed out, though, they occur even with
>> SCHED_FIFO so I'm certai
Con Kolivas wrote:
Jack O'Quin wrote:
I was misreading the x-axis. They're actually every 20 sec. My
system isn't doing that.
Possibly reiserfs journal related. That has larger non-preemptible code
sections.
You're really getting hammered with those periodic 6 msec delays,
though. The basic a
Jack O'Quin wrote:
If I look at those png's locally (with gimp or gqview) they have a
dark grey checkerboard background. If I look at them on the web (with
galeon), the background is white. Go figure. Maybe the file has no
background? I dunno.
Yes there's no background so it depends on what you
utz lehmann wrote:
I had experimented with throttling runaway RT tasks. I use a similar
accounting. I saw a difference between counting with or without the
calling from fork. If I remember correctly the timeout expired too fast
if the non-RT load was "while /bin/true; do :; done".
With "while true;
Con Kolivas <[EMAIL PROTECTED]> writes:
> Jack O'Quin wrote:
>> Outstanding. How do you get rid of that checkerboard grey
>> background in the graphs?
>
>> Con Kolivas <[EMAIL PROTECTED]> writes:
> Funny; that's the script you sent me so... beats me?
It's just one of the many things I don't unde
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
Does it degrade significantly with a compile running in the background?
Check results below.
Full results and pretty pictures available here:
http://ck.kolivas.org/patches/SCHED_ISO/iso2-benchmarks/
More pretty pictures with compile load o
Con Kolivas <[EMAIL PROTECTED]> writes:
> Jack O'Quin wrote:
>> Excellent. Judging by the DSP Load, your machine seems to run almost
>> twice as fast as my 1.5GHz Athlon (surprising). You might want to try
>
> Not really surprising; the 2Mb cache makes this a damn fine cpu, if
> not necessarily
Con Kolivas <[EMAIL PROTECTED]> writes:
> Con Kolivas wrote:
>
> Here are my results with SCHED_ISO v2 on a pentium-M 1.7Ghz (all
> powersaving features off):
>
> Increasing iso_cpu did not change the results.
>
> At least in my testing on my hardware, v2 is working as advertised. I
> need results
Con Kolivas wrote:
This is version 2 of the SCHED_ISO patch with the yield bug fixed and
code cleanups.
...answering on this thread to consolidate the two branches of the email
thread.
Here are my results with SCHED_ISO v2 on a pentium-M 1.7Ghz (all
powersaving features off):
SCHED_NORMAL:
awk
utz lehmann wrote:
@@ -2406,6 +2489,10 @@ void scheduler_tick(void)
task_t *p = current;
rq->timestamp_last_tick = sched_clock();
+ if (iso_task(p) && !rq->iso_refractory)
+ inc_iso_ticks(rq, p);
+ else
+ dec_iso_ticks(rq, p);
scheduler_tick() is not only called by the timer interrupt but
Hi Con
On Thu, 2005-01-20 at 09:39 +1100, Con Kolivas wrote:
> This is version 2 of the SCHED_ISO patch with the yield bug fixed and
> code cleanups.
Thanks for the update.
@@ -2406,6 +2489,10 @@ void scheduler_tick(void)
task_t *p = current;
rq->timestamp_last_tick = sched_c
This is version 2 of the SCHED_ISO patch with the yield bug fixed and
code cleanups.
This patch for 2.6.11-rc1 provides a method of providing real time
scheduling to unprivileged users which increasingly is desired for
multimedia workloads.
It does this by adding a new scheduling class called SCH