[lwip-users] Netconn vs. raw API performance

2016-09-19 Thread TJO
Hi All, 

I'm currently testing our hardware with lwIP 1.4.1, and I see a big
performance difference between using the netconn API and the raw API.
As a baseline I have used the LPCOpen samples 'TCP echo Standalone' and
'TCP echo FreeRTOS'.

I have rewritten the demos to connect to a server application on the
laptop. The server application sends data with different packet sizes and
expects the same data returned. The roundtrip time is measured.
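For reference, the host-side roundtrip test can be sketched roughly like this (the host, port, and payload sizes here are placeholders, not the poster's actual setup):

```python
# Hypothetical host-side sketch: send payloads of varying sizes to the
# echo target and time each roundtrip. Host/port are placeholders.
import socket
import time

def measure_roundtrip(host, port, sizes=(32, 64, 128, 256, 512, 1024)):
    results = {}
    with socket.create_connection((host, port)) as s:
        # Disable Nagle on the PC side too, to match the target.
        s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        for size in sizes:
            payload = bytes(i % 256 for i in range(size))
            start = time.monotonic()
            s.sendall(payload)
            received = b""
            while len(received) < size:
                chunk = s.recv(size - len(received))
                if not chunk:
                    raise ConnectionError("peer closed early")
                received += chunk
            assert received == payload, "echoed data differs"
            results[size] = time.monotonic() - start
    return results
```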

The stack in the two samples is set up as similarly as possible in the
lwipopts.h file. The target hardware is the same, as are the network and
the test PC. Nagle is disabled.
The low-level EMAC driver for the MCU is also the same.
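For a fair comparison Nagle should also be off on the PC side; in Python that is the TCP_NODELAY socket option (a minimal sketch, not the poster's actual script):

```python
import socket

def make_no_nagle_socket():
    # Create a TCP socket with Nagle's algorithm disabled (TCP_NODELAY),
    # so small writes are sent immediately instead of being coalesced.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```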

With FreeRTOS and netconn, the throughput is poor with larger packets
(>128 bytes). Sometimes the roundtrip for a packet is more than 2 seconds
(it seems to be in netconn_write). Sometimes it stalls when the server
application sends packets too fast after each other.
With the raw API standalone demo there are no problems at all: good
throughput.
I then tried to run the raw TCP demo in the FreeRTOS environment. No
problems here.

Because the raw TCP demo works fine, I assume the low-level EMAC driver
for the LPC17xx is working OK.
Because the raw TCP sample also works fine with FreeRTOS, I assume that
combination is fine too.

Any other ideas why this happens? Or things I could try?

Somehow the combination of LPC17xx and netconn seems to be a bad choice!?
(Using Google I can see I'm not the only one struggling with lwIP and the
LPC17xx.)

I will try to upgrade my test sample to the latest lwIP 2.0 to see if
that somehow helps.

If not, I think I will revert to using the raw TCP API in my user
application and drop the netconn API.

Thomas

--
View this message in context: 
http://lwip.100.n7.nabble.com/Netconn-vs-raw-API-performance-tp27353.html
Sent from the lwip-users mailing list archive at Nabble.com.

___
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users


Re: [lwip-users] long time tcp receive lwip packets lost

2016-09-19 Thread Sergio R. Caprile

> I wrote the bridging part and, for testing purposes, I didn't implement
> buffering from TCP to serial. So while receiving data, some data piles
> up somewhere in the TCP thread (I think).


No, don't "think", debug, do know exactly where you have your problem, 
because it is not lwIP, it is you, and you are the only one who can 
solve it.


> I wrote a python script that sends 2096 characters every second (some
> lorem ipsum text). serial speed is 115200. So there is enough time to
> send data and free memory.

2096 bytes x 10 bits = 20960 bits; 20960 / 115200 bps ≈ 182 ms
Yes, you should have a pulsing output, assuming your TCP data arrives 
without problems and on time. Have you (for example) toggled a pin to 
know when your app runs and what it does?
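The arithmetic, assuming 10 bits on the wire per byte (start + 8 data + stop at 8N1), can be checked like this:

```python
def serial_tx_time_ms(nbytes, baud, bits_per_byte=10):
    # Time to clock nbytes out of a UART: each byte costs
    # bits_per_byte bit times (start + 8 data + stop at 8N1).
    return nbytes * bits_per_byte / baud * 1000.0

# 2096 bytes at 115200 bps -> roughly 182 ms per burst
```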


Either you are not freeing memory correctly, or you are not assigning 
priorities and run time correctly in your OS, or your port is broken or 
your driver is broken (leaky).


Do yourself some favors and
- Study X.10 (the guys at ITU-T solved the serial packetization issue 
for you some decades ago)
- Check your port and driver by running a known-to-work application: the 
examples.
- Post the section of your code that interfaces with lwIP. Not my case, 
but you sure will find someone versed in socket API if you label your 
message with the correct subject.
- Give details on your OS configuration for your particular app. Again, 
not my case, but there are some RTOS users lurking here, do use the 
"subject" field wisely.

