I've been playing around with RTNet and have had some very interesting results.
Essentially, I created a setup using two PCs linked by a crossover cable on a
100 Mbit/s link. The behavior of the client/server example provided with RTNet
was monitored using the Linux Trace Toolkit. All of it was running on RTAI.

The following is a screen-shot of the echo client sending and receiving a
packet through the net to the echo server:

http://www.opersys.com/LTT/EventGraphRTNetInAction.jpg

You can see the system timer firing (IRQ 0) and waking up the client (the
client was set up during its initialization to run once every second). The
client sends something on the net, which generates an IRQ 9 (I think this
indicates that the packet was sent successfully). RTAI then decides to
schedule task 0, the Linux kernel, since the kernel timer function has to be
called. This results in "klogd" (PID 453) being rescheduled. klogd invokes
the "time" system call, but while the kernel is dealing with this call,
another IRQ 9 occurs. This one marks the reception of the echo from the
server. Immediately, RTAI schedules the reception function of the client,
which does some processing and yields CPU control. RTAI then hands the CPU
back to Linux, which can continue to process klogd's system call.

That said, apart from the graph being quite helpful in understanding what
is happening, I've also filtered out the occurrences of the netcard's IRQs:
RT-Global IRQ entry     968,993,482,135,788     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,482,135,853     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,483,135,578     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,483,135,643     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,484,135,364     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,484,135,429     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,485,136,148     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,485,136,212     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,486,135,939     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,486,136,004     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,487,135,731     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,487,135,796     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,488,135,522     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,488,135,584     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,489,135,304     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,489,135,368     453     7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,490,135,091     0       7       IRQ : 9, IN-KERNEL
RT-Global IRQ entry     968,993,490,135,153     453     7       IRQ : 9, IN-KERNEL

These should always be considered in pairs: the first IRQ marks the send and
the second marks the reception of the echo. Notice that the round-trip time
stays within roughly 62 to 65 microseconds across all of them. This is very good.
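If you want to check the pairing yourself, here is a quick sketch that redoes
the arithmetic on the timestamps above (I'm assuming the comma-grouped LTT
field splits into seconds and microseconds, which is how I read the trace):

```python
# Timestamps from the filtered LTT trace above, split as (seconds, microseconds).
entries = [
    (968993482, 135788), (968993482, 135853),
    (968993483, 135578), (968993483, 135643),
    (968993484, 135364), (968993484, 135429),
    (968993485, 136148), (968993485, 136212),
    (968993486, 135939), (968993486, 136004),
    (968993487, 135731), (968993487, 135796),
    (968993488, 135522), (968993488, 135584),
    (968993489, 135304), (968993489, 135368),
    (968993490, 135091), (968993490, 135153),
]

# Flatten each timestamp to absolute microseconds, then take the difference
# within each send/receive pair: that difference is one round trip.
stamps = [sec * 1_000_000 + usec for sec, usec in entries]
round_trips = [stamps[i + 1] - stamps[i] for i in range(0, len(stamps), 2)]
print(round_trips)  # → [65, 65, 65, 64, 65, 65, 62, 64, 62]
```

Note also that consecutive pairs are almost exactly one second apart, which
matches the client's once-per-second wakeup.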

For reference, my setup was:
Server: laptop (PII 300) with a 3Com 3c575 PCMCIA card
Client: PII 350 with a Linksys LNE100TX card (which actually uses the tulip driver)
Both machines were connected with a crossover cable on a 100 Mbit/s link.

I had to port the NIC drivers for both these cards to RTNet. The 3Com wasn't
in the RTNet source code, so there was no choice about that one. The Linksys
card, which is supposed to use the tulip driver, didn't work with the tulip_rt
in RTNet 0.9.0. Believe it or not, Linksys actually ships the latest tulip
drivers for Linux in source form on a disk that comes with the card, so I
converted that driver to RT. I've discussed this with David Schleef, RTNet's
author, and both drivers will be put into the standard RTNet distro.

The worst part in all this was getting a PCMCIA card to work with RTNet. I
couldn't use the pcnet stuff in 0.9.0 since it doesn't compile, so I hacked
my way around and finally got my PCMCIA NIC working. The procedure for firing
up RTNet on the laptop is somewhat arcane, though: kill cardmgr, load
cb_enabler, load the RTAI stuff, load rtnet, load the rogue RT net card
driver, restart cardmgr, rtifconfig it up, etc.

That said, I'm pretty satisfied with what it does and recommend it for
real-time networking. Don't expect to be doing any TCP with it, but UDP works
fine, and that's OK for most RT applications, which only need to send commands
or retrieve data over Ethernet. Given the deterministic behavior of the
software, the only remaining possible source of disruption is the physical
wire. Then again, using the right type of shielded wire should yield
satisfactory results in most RT-system environments. If you need to send
large amounts of data, I suggest you loop around sending UDP packets; since
you should be alone on the wire, you shouldn't need any retransmits.
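To illustrate the loop-and-send idea, here is a minimal sketch of chunking a
large buffer into UDP datagrams. This uses plain Python sockets over loopback
just to show the pattern, not the RTNet API; CHUNK and the function name are
my own, and the size should sit below your MTU to avoid IP fragmentation:

```python
import socket

CHUNK = 1024  # payload bytes per datagram; keep this under the link MTU

def send_in_chunks(sock, data, addr, chunk=CHUNK):
    """Send `data` as a sequence of UDP datagrams of at most `chunk` bytes.
    Returns the number of datagrams sent."""
    count = 0
    for off in range(0, len(data), chunk):
        sock.sendto(data[off:off + chunk], addr)
        count += 1
    return count

# Demo over loopback: a receiver socket, a sender, and a 4 KiB test buffer.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(('127.0.0.1', 0))  # let the OS pick a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

payload = bytes(range(256)) * 16  # 4096 bytes of test data
n = send_in_chunks(tx, payload, rx.getsockname())

# On loopback the datagrams arrive intact and in order, so reassembly is
# just concatenation; on a real (even dedicated) wire you may still want a
# sequence number in each packet to detect a rare drop.
received = b''.join(rx.recvfrom(CHUNK)[0] for _ in range(n))
```

On a dedicated crossover link the same loop applies, with the receiver's real
address in place of loopback.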

David: Thanks for RTNet and keep up the good work.

Cheers

Karim

===================================================
                 Karim Yaghmour
               [EMAIL PROTECTED]
          Operating System Consultant
 (Linux kernel, real-time and distributed systems)
===================================================
-- [rtl] ---
To unsubscribe:
echo "unsubscribe rtl" | mail [EMAIL PROTECTED] OR
echo "unsubscribe rtl <Your_email>" | mail [EMAIL PROTECTED]
---
For more information on Real-Time Linux see:
http://www.rtlinux.org/rtlinux/
