Jack O'Quin <[EMAIL PROTECTED]> writes:

> *** Terminated Sat Jan 15 18:15:13 CST 2005 ***
> ************* SUMMARY RESULT ****************
> Total seconds ran . . . . . . :   300
> Number of clients . . . . . . :    20
> Ports per client  . . . . . . :     4
> Frames per buffer . . . . . . :    64
> *********************************************
> Timeout Count . . . . . . . . :(    1)
> XRUN Count  . . . . . . . . . :    47
> Delay Count (>spare time) . . :     0
> Delay Count (>1000 usecs) . . :     0
> Delay Maximum . . . . . . . . : 500544   usecs
> Cycle Maximum . . . . . . . . :  1086   usecs
> Average DSP Load. . . . . . . :    36.1 %
> Average CPU System Load . . . :     8.2 %
> Average CPU User Load . . . . :    26.3 %
> Average CPU Nice Load . . . . :     0.0 %
> Average CPU I/O Wait Load . . :     0.4 %
> Average CPU IRQ Load  . . . . :     0.7 %
> Average CPU Soft-IRQ Load . . :     0.0 %
> Average Interrupt Rate  . . . :  1703.3 /sec
> Average Context-Switch Rate . : 11600.6 /sec
> *********************************************
>
> I think this means the starvation test was not the problem.  So far,
> I've seen no proof that there is any problem with the 2.6.10
> scheduler, just some evidence that nice --20 does not work for
> multi-threaded realtime audio.
>
> If someone can suggest a way to run certain threads of a process with
> a different nice value than the others, I can probably hack that into
> JACK in some crude way.  That should tell us whether my intuition is
> right about the source of scheduling interference.  
>
> Otherwise, I'm out of ideas at the moment.  I don't think SCHED_RR
> will be any different from SCHED_FIFO in this test.  Even if it were,
> I'm not sure what that would prove.

Studying the test script, I discovered that it starts a separate
program running in the background.  So, I hacked the script to run
that program with nice -15, so it would not interfere with the
realtime threads.  The XRUN count didn't improve much, but the maximum
delay dropped dramatically, from roughly half a second to a much more
believable (though still too high) 32.5 msec.  I ran this with the
same patched scheduler.
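In case anyone wants to reproduce the hack without the script, it
boils down to launching the background program at a higher nice value
so it can't steal cycles from the audio threads.  A rough C
equivalent of the same idea (the program name below is just a
placeholder, not the tool the script actually runs):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* child: lower our own priority before exec'ing the helper;
		   roughly what "nice -15 background_prog" does in the script */
		if (setpriority(PRIO_PROCESS, 0, 15) == -1)
			perror("setpriority");
		execlp("background_prog", "background_prog", (char *) NULL);
		perror("execlp");	/* only reached if exec fails */
		_exit(1);
	} else if (pid < 0) {
		perror("fork");
		return 1;
	}
	/* parent: go on with the rest of the test */
	return 0;
}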

*** Terminated Sat Jan 15 21:22:00 CST 2005 ***
************* SUMMARY RESULT ****************
Total seconds ran . . . . . . :   300
Number of clients . . . . . . :    20
Ports per client  . . . . . . :     4
Frames per buffer . . . . . . :    64
*********************************************
Timeout Count . . . . . . . . :(    0)
XRUN Count  . . . . . . . . . :    43
Delay Count (>spare time) . . :     0
Delay Count (>1000 usecs) . . :     0
Delay Maximum . . . . . . . . : 32518   usecs
Cycle Maximum . . . . . . . . :   820   usecs
Average DSP Load. . . . . . . :    34.9 %
Average CPU System Load . . . :     8.5 %
Average CPU User Load . . . . :    23.8 %
Average CPU Nice Load . . . . :     0.0 %
Average CPU I/O Wait Load . . :     0.0 %
Average CPU IRQ Load  . . . . :     0.7 %
Average CPU Soft-IRQ Load . . :     0.0 %
Average Interrupt Rate  . . . :  1688.5 /sec
Average Context-Switch Rate . : 11704.9 /sec
*********************************************

This supports my intuition that the lack of per-thread nice
granularity is the main problem.  When I was able to isolate some
non-realtime code and run it at a lower priority, it helped quite a
bit.
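
If per-thread nice is what's needed, one possible (untested here) way
to get it is to hand setpriority() a kernel thread ID instead of a
process ID; the Linux kernel applies nice values per task, even
though POSIX describes PRIO_PROCESS as process-wide.  A rough sketch,
not actual JACK code:

#define _GNU_SOURCE
#include <stdio.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Renice only the calling thread.  POSIX defines PRIO_PROCESS as naming a
   whole process, but the Linux kernel treats the ID as a per-task (thread)
   ID, so passing our own TID should change just this thread's nice value. */
int renice_current_thread(int niceval)
{
	pid_t tid = (pid_t) syscall(SYS_gettid);

	if (setpriority(PRIO_PROCESS, tid, niceval) == -1) {
		perror("setpriority");
		return -1;
	}
	return 0;
}

The realtime threads could then keep nice --20 while the non-realtime
helpers stay at the default.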
-- 
  joq