On 29.03.2011 12:53, Vinzenz Bargsten wrote:
On 28.03.2011 13:17, Gilles Chanteperdrix wrote:
Vinzenz Bargsten wrote:
What do you recommend to resolve these issues?
- Do Xenomai's tests work flawlessly on your box? Check e.g. latency.
As far as I can interpret it, the latency test is successful.
I considered SMI problems as the cause, but I do not find any indications
(Xenomai's SMI detection is enabled and I do not see any messages).
If you have an SMI issue, you should see high latencies. If you do not
see any message, it means that your chipset is not among the ones which
the disabling code supports. See:
http://www.xenomai.org/index.php/Configuring_x86_kernels#In_case_of_high_latencies
Thanks for pointing that out.

Indeed, there is a LPC device, but I do not see high latencies:

-------------------------------------------------------------------------------------------------------------
00:1f.0 ISA bridge: Intel Corporation 5 Series Chipset LPC Interface Controller (rev 06)
    Subsystem: Dell Device 0427
    Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
    Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
    Latency: 0
    Capabilities: [e0] Vendor Specific Information <?>
    Kernel modules: iTCO_wdt
-------------------------------------------------------------------------------------------------------------

Here is some output of latency, which ran for about 1 h with dd running in the background:
-------------------------------------------------------------------------------------------------------------
RTD|     -2.024|     -1.950|     -0.098|       0|     0|     -2.422|      9.156
RTD|     -2.023|     -1.947|     -0.102|       0|     0|     -2.422|      9.156
RTD|     -2.027|     -1.951|     -0.120|       0|     0|     -2.422|      9.156
RTD|     -2.024|     -1.945|     -0.057|       0|     0|     -2.422|      9.156
RTT|  01:11:25  (periodic user-mode task, 100 us period, priority 99)
RTH|----lat min|----lat avg|----lat max|-overrun|---msw|---lat best|--lat worst
RTD|     -2.389|     -1.951|      0.024|       0|     0|     -2.422|      9.156
RTD|     -2.026|     -1.938|      0.001|       0|     0|     -2.422|      9.156
---------------------------------------------------------------------------------------------------------


I also attached the output of lspci, dmesg, and /proc/interrupts.

To prove that the problem is on the RTnet/PC side, I used a non-rt Linux remote machine. It executes a simple C program that sends reference data (XML) via UDP at a given interval, i.e. it simulates the desired communication with the previous Windows remote machine (much faster, if desired). The problem persists, i.e. no more data is received / the buffers are full. The receiving rt program needs less than about 700 µs to process the data and send an answer, then it receives and processes the next data. I can compile it using either the normal non-rt socket functions or the corresponding rt_.. functions.
If I flood it with data (sending interval < 1 ms, or even < 0.7 ms) using the rt_.. socket functions, curiously the sending of the answers fails with error code 105, i.e. ENOBUFS / no buffer space available (I think). The data rate is below about 500 kbyte/s on a gigabit link. I do not expect all data to be processed, but sending should keep working (it does with the non-rt socket functions). Maybe, as mentioned by Jan, the interface card's IRQs are still not served correctly, or something similar, such that its buffers fill up. Could a programming error concerning the sockets cause such problems?
I appreciate any further advice.


Kind Regards,
Vinzenz



_______________________________________________
RTnet-users mailing list
RTnet-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/rtnet-users
