Jack O'Quin wrote:
Werner Almesberger <[EMAIL PROTECTED]> writes:
[ Cc:s trimmed, added abiss-general ]
Con Kolivas wrote:
> Possibly reiserfs journal related. That has larger non-preemptible code
> sections.
If I understand your workload right, it should consist mainly of
computation, networking (?), and disk reads.
I don't know much about ReiserFS, but in
Jack O'Quin wrote:
>
> If you grep your log file for 'client failure:', you'll probably find
> that JACK has reacted to the deteriorating situation by shutting down
> some of its clients. The number of 'client failure:' messages is
> *not* the number of clients shut down, there is some repetition
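Jack's point that the message count overstates the client count can be checked mechanically. A minimal sketch, assuming a hypothetical log format in which each 'client failure:' line names the client right after the colon (real jackd logs may differ):

```python
import re

# Hypothetical jackd log excerpt -- the format is assumed for illustration.
sample_log = """\
client failure: synth1 timed out
client failure: synth1 timed out
client failure: sampler timed out
engine running normally
client failure: synth1 removed
"""

# Every 'client failure:' line, including repeats for the same client.
failures = [l for l in sample_log.splitlines() if "client failure:" in l]

# Distinct client names (assumed to be the first word after the colon).
clients = {re.match(r"client failure:\s*(\S+)", l).group(1) for l in failures}

print(len(failures), len(clients))
```

Here four messages name only two clients, which is exactly the repetition being described.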
Con Kolivas <[EMAIL PROTECTED]> writes:
> There were numerous bugs in the SCHED_ISO design prior to now, so it
> really was not performing as expected. What is most interesting is
> that the DSP load goes to much higher levels now if xruns are avoided
> and stay at those high levels. If I push
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> I was just pointing out that saying nice(-20) works as well as
> SCHED_ISO, though true, doesn't mean much since neither of them
> (currently) work well enough to be useful.
ok. While i still think nice--20 can be quite good for some purposes, it
will
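For context on the nice--20 side of the comparison: raising the nice value (lowering priority) is unprivileged, while reaching nice -20 requires root or CAP_SYS_NICE. A minimal sketch of the mechanics (Linux clamps nice at 19):

```python
import os

before = os.nice(0)   # an increment of 0 just reports the current nice value
after = os.nice(5)    # dropping our own priority needs no privilege

# os.nice(-25) here would raise PermissionError for an ordinary user; that
# privilege barrier is exactly why unprivileged realtime audio needs some
# mechanism like SCHED_ISO, rlimits, or the realtime LSM.
print("nice:", before, "->", after)
```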
Ingo Molnar <[EMAIL PROTECTED]> writes:
> * Jack O'Quin <[EMAIL PROTECTED]> wrote:
>
>> First, only SCHED_FIFO worked reliably in my tests. In Con's tests
>> even that did not work. My system is probably better tuned for low
>> latency than his. Until we can determine why there were so
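One concrete difference behind these results: SCHED_FIFO and SCHED_RR carry a static priority range (1..99 on Linux) that jackd uses to rank its threads, while the SCHED_ISO prototype under test had no priorities. A quick query of the ranges via the POSIX scheduling API:

```python
import os

# Query the static priority range for each policy; on Linux the realtime
# policies report 1..99 and the timesharing policy reports 0..0.
ranges = {}
for name in ("SCHED_FIFO", "SCHED_RR", "SCHED_OTHER"):
    policy = getattr(os, name)
    ranges[name] = (os.sched_get_priority_min(policy),
                    os.sched_get_priority_max(policy))
    print(name, ranges[name])
```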
Con Kolivas wrote:
-cc list trimmed to those who have recently responded.
Here is a patch to go on top of 2.6.11-rc2-mm1 that fixes some bugs in
the general SCHED_ISO code, fixes the priority support between ISO
threads, and implements SCHED_ISO_RR and SCHED_ISO_FIFO as separate
policies. Note
Jack O'Quin wrote:
I still wonder if some coding error might occasionally be letting a
lower priority process continue running after an interrupt when it
ought to be preempted.
Well not surprisingly I did find a bug in my patch which did not honour
priority support between ISO threads. So
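The bug Con describes -- ISO threads not honouring their relative priorities -- comes down to next-task selection. A toy model of the corrected behaviour (an illustration only, not the patch's code):

```python
def pick_next(runnable):
    """Strict-priority pick: the highest-priority runnable thread always runs.
    The bug being fixed amounted to ignoring prio among ISO threads."""
    return max(runnable, key=lambda t: t["prio"])

# Hypothetical thread names and priorities, for illustration.
threads = [
    {"name": "jackd-client", "prio": 70},
    {"name": "jackd-audio",  "prio": 80},
    {"name": "watchdog",     "prio": 90},
]
chosen = pick_next(threads)
print(chosen["name"])
```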
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> Has anyone done this kind of realtime testing on an SMP system? I'd
> love to know how they compare. Unfortunately, I don't have access to
> one at the moment. Are they generally better or worse for this kind
> of work? I'm not asking about
* Paolo Ciarrocchi <[EMAIL PROTECTED]> wrote:
> On Mon, 24 Jan 2005 09:59:02 +0100, Ingo Molnar <[EMAIL PROTECTED]> wrote:
> [...]
> > - CKRM is another possibility, and has nonzero costs as well, but solves
> > a wider range of problems.
>
> BTW, do you know what's the status of CKRM ? If I'm
Con Kolivas <[EMAIL PROTECTED]> writes:
> Jack O'Quin wrote:
>> I'll try building a SCHED_RR version of JACK. I still don't think it
>> will make any difference. But my intuition isn't working very well
>> right now, so I need more data.
>
> Could be that despite what it appears, FIFO behaviour
Ingo Molnar <[EMAIL PROTECTED]> writes:
> just finished a short testrun with nice--20 compared to SCHED_FIFO, on a
> relatively slow 466 MHz box:
Has anyone done this kind of realtime testing on an SMP system? I'd
love to know how they compare. Unfortunately, I don't have access to
one at the
Jack O'Quin <[EMAIL PROTECTED]> writes:
> Will post the correct numbers shortly. Sorry for the screw-up.
Here they are...
http://www.joq.us/jack/benchmarks/sched-isoprio
http://www.joq.us/jack/benchmarks/sched-isoprio+compile
I moved the previous runs to the sched-fifo* directories where
Jack O'Quin <[EMAIL PROTECTED]> writes:
> These results are indistinguishable from SCHED_FIFO...
Disregard my previous message, it was an idiotic mistake. The results
were indistinguishable from SCHED_FIFO because they *were* SCHED_FIFO.
I'm running everything again, this time with the correct
Con Kolivas <[EMAIL PROTECTED]> writes:
>>>Second the patch I sent you is fine for testing; I was hoping you
>>>would try it. What you can't do with it is spawn lots of userspace
>>>apps safely SCHED_ISO with it - it will crash, but it will not take down
>>>your hard disk. I've had significantly
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
There are two things that the SCHED_ISO you tried is not that
SCHED_FIFO is - As you mentioned there is no priority support, and it
is RR, not FIFO. I am not sure whether it is one and or the other
responsible. Both can be added to
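The RR-versus-FIFO distinction Con mentions only matters between equal-priority tasks: FIFO keeps running the current task until it blocks or yields, while RR timeslices among peers. A toy trace (not kernel code) makes the difference visible:

```python
from collections import deque

def schedule(policy, need, ticks):
    """Trace which task runs each tick; all tasks share one priority.
    need: list of (name, ticks_required); policy: 'FIFO' or 'RR'."""
    q = deque(need)
    trace = []
    for _ in range(ticks):
        if not q:
            break
        name, left = q.popleft()
        trace.append(name)
        if left - 1 > 0:
            if policy == "RR":
                q.append((name, left - 1))      # rotate to the back each tick
            else:
                q.appendleft((name, left - 1))  # FIFO: same task keeps the CPU
    return trace

tasks = [("A", 2), ("B", 2)]
print(schedule("FIFO", list(tasks), 4))  # -> ['A', 'A', 'B', 'B']
print(schedule("RR", list(tasks), 4))    # -> ['A', 'B', 'A', 'B']
```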
Jack O'Quin wrote:
Looked at this way, there really is no question. The new scheduler
prototypes are falling short significantly. Could this be due to
their lack of priority distinctions between realtime threads? Maybe.
I can't say for sure. I'll be interested to see what happens when Con
is
Ingo Molnar <[EMAIL PROTECTED]> writes:
> thanks for the testing. The important result is that nice--20
> performance is roughly the same as SCHED_ISO. This somewhat
> reduces the urgency of the introduction of SCHED_ISO.
Doing more runs and a more thorough analysis has driven me to a
different
Yup, modern must be the key. Even Ingo can't help my little ole PIII/500
with YMF-740C. Dang thing can't handle -p64 (alsa rejects that, causing
jackd to become terminally upset), and it can't even handle 4 clients at
SCHED_FIFO despite latest/greatest RT preempt kernel without xruns.
At 03:50 PM 1/23/2005 +1100, Con Kolivas wrote:
Looks like the number of steps to convert a modern "standard setup"
desktop to a low latency one on linux aren't that big after all :)
Yup, modern must be the key. Even Ingo can't help my little ole PIII/500
with YMF-740C. Dang thing can't handle
Jack O'Quin wrote:
I'm wondering now if the lack of priority support in the two
prototypes might explain the problems I'm seeing.
Distinctly possible since my results got better with priority support.
However I'm still bugfixing what I've got. Just as a data point here is
an incremental patch
Jack O'Quin <[EMAIL PROTECTED]> writes:
>
> I ran three sets of tests with three or more 5 minute runs for each
> case. The results (log files and graphs) are in these directories...
>
> 1) sched-fifo -- as a baseline
> http://www.joq.us/jack/benchmarks/sched-fifo
>
> 2) sched-iso --
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
Jack O'Quin wrote:
[snip lots of valid points]
suggest some things to try. First, make sure the JACK tmp directory
is mounted on a tmpfs[1]. Then, try the test with ext2, instead of
Looks like the tmpfs is probably the biggest problem. Here's SCHED_ISO
with just the /tmp mounted on tmpfs change
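The tmpfs check is easy to automate by parsing /proc/mounts. A sketch with the mounts table embedded as sample data so it is self-contained (a live check would read /proc/mounts itself):

```python
# Determine the filesystem type backing a mount point, /proc/mounts style.
# The mounts text below is embedded sample data for illustration.
sample_mounts = """\
/dev/hda2 / reiserfs rw 0 0
tmpfs /tmp tmpfs rw 0 0
/dev/hda3 /home ext3 rw 0 0
"""

def fs_type(mounts_text, mountpoint):
    """Return the fstype field for the given mount point, or None."""
    for line in mounts_text.splitlines():
        dev, mnt, fstype = line.split()[:3]
        if mnt == mountpoint:
            return fstype
    return None

print(fs_type(sample_mounts, "/tmp"))  # -> tmpfs
```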
Con Kolivas <[EMAIL PROTECTED]> writes:
> Meanwhile, I have the priority support working (but not bug free), and
> the preliminary results suggest that the results are better. Do I
> recall someone mentioning jackd uses threads at different priority?
Yes, it does.
I'm not sure whether that
Con Kolivas <[EMAIL PROTECTED]> writes:
> Jack O'Quin wrote:
>> Neither run exhibits reliable audio performance. There is some low
>> latency performance problem with your system. Maybe ReiserFS is
>> causing trouble even with logging turned off. Perhaps the problem is
>> somewhere else. Maybe some device is misbehaving. Until you solve this
* Nick Piggin ([EMAIL PROTECTED]) wrote:
> Jack O'Quin wrote:
>
> > Chris Wright and Arjan van de Ven have outlined a proposal to address
> > the privilege issue using rlimits. This is still the only workable
> > alternative to the realtime LSM on the table. If the decision were up
> > to me, I would choose the simplicity and better security of the LSM.
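The rlimit proposal discussed here is the approach that was eventually merged (as RLIMIT_RTPRIO in Linux 2.6.12). Querying it from userspace, with a guard since the constant is Linux-specific:

```python
import resource

# RLIMIT_RTPRIO caps the realtime priority an unprivileged task may request;
# it is Linux-specific, so degrade gracefully elsewhere.
limit = getattr(resource, "RLIMIT_RTPRIO", None)
if limit is not None:
    soft, hard = resource.getrlimit(limit)
    print(f"RLIMIT_RTPRIO soft={soft} hard={hard}")
else:
    print("RLIMIT_RTPRIO unavailable on this platform")
```

A nonzero soft limit is what would let jackd call sched_setscheduler(SCHED_FIFO) without root, which is the privilege problem the realtime LSM also addresses.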
Paul Davis wrote:
The idea is to get equivalent performance to SCHED_FIFO. The results
show that much, and it is 100 times better than unprivileged
SCHED_NORMAL. The fact that this is an unoptimised normal desktop
environment means that the conclusion we _can_ draw is that SCHED_ISO is
as good as SCHED_FIFO for
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
So let's try again, sorry about the noise:
==> jack_test4-2.6.11-rc1-mm2-fifo.log <==
*
XRUN Count . . . . . . . . . : 3
Delay Maximum . . . . . . . . : 20161 usecs
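Summary lines like the ones quoted can be parsed for comparison across runs. A sketch whose format is inferred from the quoted snippet (it may not match every version of the benchmark scripts):

```python
import re

# jack_test-style summary lines, taken from the quoted log excerpt.
summary = """\
XRUN Count . . . . . . . . . : 3
Delay Maximum . . . . . . . . : 20161 usecs
"""

stats = {}
for line in summary.splitlines():
    # Field name, a run of dot-leader padding, then an integer value.
    m = re.match(r"([A-Za-z ]+?)[ .]*: *(\d+)", line)
    if m:
        stats[m.group(1).strip()] = int(m.group(2))

print(stats)
```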
Ingo Molnar <[EMAIL PROTECTED]> writes:
> thanks for the testing. The important result is that nice--20
> performance is roughly the same as SCHED_ISO. This somewhat
> reduces the urgency of the introduction of SCHED_ISO.
I can see why you feel that way, but don't share your conclusion.
* Jack O'Quin <[EMAIL PROTECTED]> wrote:
> I finally made new kernel builds for the latest patches from both Ingo
> and Con. I kept the two patch sets separate, as they modify some of
> the same files.
>
> I ran three sets of tests with three or more 5 minute runs for each
> case. The results
>> "Jack" == Jack O'Quin <[EMAIL PROTECTED]> writes:
>
>
>Jack> Looks like we need to do another study to determine which
>Jack> filesystem works best for multi-track audio recording and
>Jack> playback. XFS looks promising, but only if they get the latency
>Jack> right. Any experience with
Con Kolivas wrote:
Con Kolivas wrote:
Jack O'Quin wrote:
Con Kolivas <[EMAIL PROTECTED]> writes:
Here's fresh results on more stressed hardware (on ext3) with
2.6.11-rc1-mm2 (which by the way has SCHED_ISO v2 included). The load
hovering at 50% spikes at times close to 70 which tests the
Con Kolivas <[EMAIL PROTECTED]> writes:
> Here's fresh results on more stressed hardware (on ext3) with
> 2.6.11-rc1-mm2 (which by the way has SCHED_ISO v2 included). The load
> hovering at 50% spikes at times close to 70 which tests the behaviour
> under iso throttling.
What version of JACK are
Con Kolivas <[EMAIL PROTECTED]> writes:
> As for priority support, I have been working on it. While the test
> cases I've been involved in show no need for it, I can understand why
> it would be desirable.
Yes. Rui's jack_test3.2 does not require multiple realtime
priorities, but I can point to
utz lehmann wrote:
On Sat, 2005-01-22 at 10:48 +1100, Con Kolivas wrote:
utz lehmann wrote:
Hi
I dislike the behavior of the SCHED_ISO patch that iso tasks are
degraded to SCHED_NORMAL if they exceed the limit.
IMHO it's better to throttle them at the iso_cpu limit.
I have modified Con's iso2
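The behavioural difference utz describes can be stated in a few lines: demotion drops an over-limit task out of the realtime class entirely, while throttling keeps it SCHED_ISO but clips its CPU share at the iso_cpu limit. A conceptual sketch, not the patch's logic:

```python
# Contrast of the two SCHED_ISO overrun behaviours under discussion.
# Pure illustration -- the limit value and interfaces are assumptions.

ISO_CPU_LIMIT = 0.70  # fraction of CPU that iso tasks may consume

def demote_policy(usage):
    """Original behaviour: over the limit, the task falls back to SCHED_NORMAL."""
    return "SCHED_NORMAL" if usage > ISO_CPU_LIMIT else "SCHED_ISO"

def throttle_policy(usage):
    """Proposed behaviour: the task stays SCHED_ISO, capped at the limit."""
    return ("SCHED_ISO", min(usage, ISO_CPU_LIMIT))

print(demote_policy(0.85))
print(throttle_policy(0.85))
```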
Ingo Molnar <[EMAIL PROTECTED]> writes:
> just finished a short testrun with nice--20 compared to SCHED_FIFO, on a
> relatively slow 466 MHz box:
> this shows the surprising result that putting all RT tasks on nice--20
> reduced context-switch rate by 20% and the Delay Maximum is lower as
>