Well, this week began very well but is ending in a rather frustrating state.
I have recompiled the kernel with Xenomai rc6 and 2.6.23 and RTnet trunk. I still see the "Detected Tx Unit Hang" message in dmesg sometimes, but I was not able to reproduce it in a deterministic way. I get an incorrect routing table, even with the sleep 3 giving the driver 3 s to wake up. I cannot use Wireshark because it doesn't detect the rtethX device, although I think that I have followed the README.rtcap file. I don't know which stupid mistake I'm making. I cannot receive any response from my robot controller. Ping doesn't receive any response .... I don't have any flight ticket to Mallorca Island with any blond girl with blue eyes ... my wife will kill me ..... I hope next week will be better ....

Best regards,

Leo

*** RTnet 0.9.10 - built on Nov 15 2007 18:01:11 ***
RTnet: initialising real-time networking
Intel(R) PRO/1000 Network Driver - version 7.1.9
Copyright (c) 1999-2006 Intel Corporation.
PCI: Setting latency timer of device 0000:02:00.0 to 64
e1000: 0000:02:00.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1) 00:1b:21:05:0c:fc
RTnet: registered rteth0
e1000: rteth0: e1000_probe: Intel(R) PRO/1000 Network Connection
PCI: Setting latency timer of device 0000:03:00.0 to 64
e1000: 0000:03:00.0: e1000_probe: (PCI Express:2.5Gb/s:Width x1) 00:1b:21:05:0c:b6
RTnet: registered rteth1
e1000: rteth1: e1000_probe: Intel(R) PRO/1000 Network Connection
initializing loopback...
RTnet: registered rtlo
RTcap: real-time capturing interface
RTcap: rtlo busy, skipping device!
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <4>  next_to_use <4>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000b40d>  next_to_watch <0>  jiffies <10000b601>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <4>  next_to_use <4>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000ba0a>  next_to_watch <0>  jiffies <10000bbf1>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <1>  next_to_use <1>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000da58>  next_to_watch <0>  jiffies <10000db45>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000dc4b>  next_to_watch <0>  jiffies <10000dd4d>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000de3f>  next_to_watch <0>  jiffies <10000df55>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e033>  next_to_watch <0>  jiffies <10000e15d>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e227>  next_to_watch <0>  jiffies <10000e365>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e41b>  next_to_watch <0>  jiffies <10000e56d>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e60f>  next_to_watch <0>  jiffies <10000e775>  next_to_watch.status <0>
device eth1 entered promiscuous mode
audit(1195146172.089:2): dev=eth1 prom=256 old_prom=0 auid=4294967295
device lo entered promiscuous mode
audit(1195146172.105:3): dev=lo prom=256 old_prom=0 auid=4294967295
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e803>  next_to_watch <0>  jiffies <10000e97d>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000e9f7>  next_to_watch <0>  jiffies <10000eb85>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000ebeb>  next_to_watch <0>  jiffies <10000ed8d>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000eddf>  next_to_watch <0>  jiffies <10000ef95>  next_to_watch.status <0>
e1000: rteth0: e1000_clean_tx_irq: Detected Tx Unit Hang
  Tx Queue <0>  TDH <0>  TDT <2>  next_to_use <2>  next_to_clean <0>
  buffer_info[next_to_clean] time_stamp <10000efd3>  next_to_watch <0>  jiffies <10000f19d>  next_to_watch.status <0>
e1000: rteth0: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
e1000: rteth0: e1000_watchdog: NIC Link is Down
e1000: rteth0: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
device eth1 left promiscuous mode
audit(1195146282.098:4): dev=eth1 prom=0 old_prom=256 auid=4294967295
device lo left promiscuous mode
audit(1195146282.130:5): dev=lo prom=0 old_prom=256 auid=4294967295
device eth1 entered promiscuous mode
audit(1195146283.610:6): dev=eth1 prom=256 old_prom=0 auid=4294967295
device lo entered promiscuous mode
audit(1195146283.626:7): dev=lo prom=256 old_prom=0 auid=4294967295
device eth1 left promiscuous mode
audit(1195146285.470:8): dev=eth1 prom=0 old_prom=256 auid=4294967295
device lo left promiscuous mode
audit(1195146285.502:9): dev=lo prom=0 old_prom=256 auid=4294967295
e1000: rteth0: e1000_watchdog: NIC Link is Down
e1000: rteth0: e1000_watchdog: NIC Link is Up 100 Mbps Full Duplex
e1000: rteth0: e1000_watchdog: NIC Link is Down

On Thursday 15 November 2007, Karl Reichert wrote:
> Leopold Palomo-Avellaneda wrote:
> > > I guess he is talking about the rt_e1000 driver (based on the e1000
> > > driver).
> >
> > yes ...
> >
> > > I compared the e1000 driver in SVN trunk with the one in 0.9.9.
> >
> > As the syslog shows, I have always worked with 0.9.10 (trunk) :-)
> > well, that is what I have seen ...
> >
> > > Are you sure? Because the SVN version of RTnet has this rt_e1000 patch
> > > already applied and it seems like you don't, so maybe you are not
> > > running the rtnet SVN version. Better try a diff to be sure.
> >
> > Yes, I'm sure. I was working all day yesterday with rtnet 0.9.10
> > (trunk). I have looked at e1000/e1000_main.c to ensure that the patch
> > was applied.
> >
> > However, I wouldn't get lost in this now. Yesterday my box was doing
> > strange things. The routing table behaviour is very disconcerting.
> >
> > [....]
> >
> > > You don't have to understand RTcap. Just enable it at rtnet
> > > configuration (make menuconfig) and load its module, enable its
> > > interface and let wireshark/tcpdump capture the traffic. The only
> > > thing you have to do then is to check the frames (which are sent,
> > > are they corrupt ...)
> >
> > ok,
> >
> > rtcap compiled ...
> > ....
> > driver loaded;
> > module rtcap loaded
> > and rteth0 enabled:
> > sbin/rtifconfig rteth0 up 10.0.0.1 promisc netmask 255.255.255.0
> >
> > but wireshark doesn't see it.
> >
> > I'm using wireshark 0.99.4, from debian etch
>
> Does the interface occur in wireshark? You have to capture on interface
> "rteth0" or "any". Both should be visible within wireshark. Both should
> capture frames (at least synchronisation frames, if you loaded the tdma
> module and there is an active master).
>
> > ... and last but not least ... send me one of those tickets ;)
> >
> > are you a nice girl with blond hair, blue eyes and intelligent? ;-)
>
> No, but I could get some :D

_______________________________________________
RTnet-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/rtnet-users
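[Editor's note: the RTcap setup discussed in the thread can be sketched as below. This is a hedged sketch, not a verified recipe: the install prefix /usr/local/rtnet and the exact module file names are assumptions taken from a typical RTnet 0.9.x build; consult README.rtcap for the authoritative order.]

```shell
#!/bin/sh
# Sketch of an RTcap capture setup (assumed install prefix, assumed
# module names; adjust to your local RTnet build).
cd /usr/local/rtnet || exit 1

# Load the RTnet core, IPv4 support, and the real-time NIC driver.
insmod modules/rtnet.ko
insmod modules/rtipv4.ko
insmod modules/rt_e1000.ko

# Load RTcap BEFORE bringing any rt device up: devices that are
# already up are skipped, as the "RTcap: rtlo busy, skipping device!"
# line in the syslog above shows.
insmod modules/rtcap.ko

# Bring up the real-time interface (as in the thread).
sbin/rtifconfig rteth0 up 10.0.0.1 promisc netmask 255.255.255.0

# RTcap mirrors captured frames onto a regular Linux device of the
# same name; that shadow device must also be brought up so that
# wireshark/tcpdump can list and capture on "rteth0".
ifconfig rteth0 up
```

If wireshark still does not list rteth0, checking that the shadow device exists on the Linux side (ifconfig -a) would distinguish a missing RTcap hook from a wireshark-side problem.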

