On Thursday 26 July 2001 19:08, Dan Morrill wrote:
> On Wednesday 25 July 2001 11:30 am, David Olofson wrote:
[...]
> > Also make sure it's not possible to get trapped in a situation where
> > one late response delays all responses following it in the queue. If
> > the responses depend on various other threads and/or may take "long"
> > to generate, it might be necessary to run the response handling in
> > parallel
>
> Fortunately, our algorithms are fairly small. Generally they just
> involve performing some check or numeric computation and making an
> appropriate reaction. (Usually a response message back out over
> ethernet.) However, these algorithms may involve FP computations,
> which introduces some obvious problems.
The extra overhead would be the problem. If that turns out to be
significant, you could try gathering all FP functions in one thread, to
avoid saving and restoring the FPU context every time one of those
functions is executed.
> Our algorithms are small,
> but are they small enough? I'm thinking that's what's going to make or
> break us.
I think your problem is more related to context switching overhead,
interaction with the ethernet hardware and that kind of stuff than it
is to throughput. Some 60k "events" per second gives you loads of CPU
cycles to play with (*) - but context switching and even function call
overhead can quickly become a major performance killer in these extreme
situations.
(*) Compare that to an audio workstation, which has to run up to 96k
samples/s through a complex network of dynamically loaded plugins.
The samples are normally processed in blocks of at least 32 samples,
but function call and other plugin API overhead (which multiplies
with the number of plugins running) still burns a significant
amount of CPU power.
> At any rate, I didn't hear anything like "57,600 messages a second?!
> Sucks to be you." That's encouraging. :)
Well, I hope you have serious hardware! :-)
Poor NICs (lots of port accesses per buffer) will kill your CPU right
away, but considering the relatively relaxed response time requirements,
I think you should get away with any decent NIC. One IRQ per packet
would mean quite a bit of stress, but even that should be possible to
deal with on a hot PC. (I'd assume that any 100+ Mbit NIC has a sane
upper limit on the IRQ rate...)
If the NIC does turn out to generate too many IRQs (i.e. firing one off
for every packet), you could try disabling the IRQ and polling from an
RTL periodic thread instead. If the card actually works without RTL,
that shouldn't be a problem, as the data will just be queued up and
processed in a more "batch like" manner.
As for actual figures: I managed to scare an old dual P-II 233 enough
to run periodic RTL tasks at some 80 kHz with a few port accesses per
IRQ... Anything with a full-speed cache (Celeron or FCPGA P-III)
should do better.
//David Olofson --- Programmer, Reologica Instruments AB
.- M A I A -------------------------------------------------.
| Multimedia Application Integration Architecture |
| A Free/Open Source Plugin API for Professional Multimedia |
`----------------------> http://www.linuxaudiodev.com/maia -'
.- David Olofson -------------------------------------------.
| Audio Hacker - Open Source Advocate - Singer - Songwriter |
`--------------------------------------> [EMAIL PROTECTED] -'