goldsi...@gmx.de wrote
> Which version of lwIP are you using? Do you know that we support TCP 
> window scaling by now (LWIP_WND_SCALE)?

Indeed, I forgot that one. It's the version provided by the STM32CubeMX
tool; a diff shows it is identical to lwIP 1.4.1. I didn't know that window
scaling is supported now. I guess you are referring to patch 7858? I will
apply it.
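
Just as a note to myself: once the window-scaling patch is applied, I expect
enabling it in lwipopts.h to look roughly like this (a sketch only, assuming
the patch uses the same option names as current lwIP; the values are purely
illustrative):

  /* lwipopts.h -- illustrative values, not tested against the patched tree */
  #define LWIP_WND_SCALE   1                /* enable TCP window scaling       */
  #define TCP_RCV_SCALE    2                /* advertise rcv window scaled 2^2 */
  #define TCP_WND          (64 * TCP_MSS)   /* may now exceed the 64 KB limit  */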


goldsi...@gmx.de wrote
>> - I decreased the TCP timer intervals from 250 ms to 10 ms. An even higher
>> rate tends to produce a lot of retransmissions.
> You should really not need to do this! I rather expect more problems 
> than anything being solved. Especially when your main issue is sending 
> data, not receiving.

I've tested again with 250 ms. The only difference in behavior seems to be a
much lower transmission rate: I achieve about 200 kB/s.
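
If I read tcp_impl.h correctly, the retransmissions at 10 ms are to be
expected, because the retransmission (RTO) and persist counters tick in
units of the coarse TCP timer, which is derived from the base interval
(excerpt-style sketch of the 1.4.1 defaults):

  #ifndef TCP_TMR_INTERVAL
  #define TCP_TMR_INTERVAL   250                    /* ms per tcp_tmr() call    */
  #endif
  #define TCP_FAST_INTERVAL  TCP_TMR_INTERVAL       /* delayed-ACK timer        */
  #define TCP_SLOW_INTERVAL  (2 * TCP_TMR_INTERVAL) /* unit of the RTO counters */

So shrinking the base tick from 250 ms to 10 ms shortens every retransmission
timeout by a factor of 25, which would explain the spurious retransmissions
I saw.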


Krzysztof Wesołowski wrote
> I am not sure why you decided to go in such extreme direction with your
> changes.
> 
> We are almost able to saturate 100MBit connection (>8 MB/s) and upload
> about 2MB/s from SD Card on STM32F407 with RMII attached PHY (Some Micrels
> KSZ...)
> 
> Are you using some WiFi in your setup? With Ethernet networks we only
> needed to tune memory in lwipopts, and there was no need to change types
> and/or polling interval.
> 
> Have you benchmarked if the need for optimization really is within LwIP
> related code?

About a month ago I started porting our STM32F1-based board to the new MCU.
The old design used a WizNet W5300 Ethernet IC, which implements the TCP/IP
stack in hardware, so I'm really not sure my changes are going in the right
direction. Initially, however, I struggled with very high round-trip times
and a low throughput of about 5 kBit/s; the PC seemed to resend
unacknowledged packets after about 200 ms. Also, I'm using the tcp_poll
callback to enqueue new data into the stack, in the context of the
tcpip_thread. For both reasons it seemed natural to reduce the timer
intervals. On the other hand, a bigger SND_WND means less memory management
outside of the stack, which seems to be quite efficient. Now I can achieve
2 MBit/s (until the error occurs), so yes, it does seem to be influenced by
the stack. (The jump from 5 kB to 70 kB was achieved thanks to the zero-copy
driver.)

On the other hand, I guess there are still several other areas for
improvement.
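
One of them is probably the buffer sizing Krzysztof mentions. The lwipopts.h
knobs I'm currently experimenting with are roughly the following (values are
illustrative only, to be tuned against the available RAM):

  #define MEM_SIZE           (16 * 1024)       /* heap for pbufs/segments          */
  #define PBUF_POOL_SIZE     16                /* pool pbufs for RX                */
  #define TCP_MSS            1460              /* Ethernet MTU minus IP/TCP header */
  #define TCP_SND_BUF        (8 * TCP_MSS)     /* per-connection send buffer       */
  #define TCP_SND_QUEUELEN   (4 * TCP_SND_BUF / TCP_MSS)   /* >= 2*SND_BUF/MSS     */
  #define TCP_WND            (8 * TCP_MSS)     /* receive window                   */
  #define MEMP_NUM_TCP_SEG   TCP_SND_QUEUELEN  /* enough segments for the queue    */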

Thanks!




