Hi all,

I am considering using lwIP in a real-time vision application running on a specially
designed SoC platform.
Given its specifications and requirements, this SoC is more of an "embedded
computer" than an "embedded controller" or a "device".
It has 256 MB of RAM, four MIPS CPUs, flash storage, a 1 Gbit Ethernet adapter, etc. On
the other hand, it is still an "embedded" system, with certain compromises, and
not as fast as modern computers.

The desired UDP send throughput is at least 80 MBytes/s, i.e. roughly 64% of the
theoretical 1 Gbit/s limit.
Since it is still an embedded system, and having read good reviews of lwIP,
I decided to benchmark the stack.

I started from the lwIP port for Windows (I could also have gone for Ubuntu, but
preferred Windows for a start), since:
a) I don't have the dedicated SoC in hand yet; it is still in the design and
manufacturing stage.
b) I wanted to start benchmarking within a "trusted" environment, where
achieving 80 MBytes/s over UDP should be possible.
Without having the SoC, I didn't want to spend time on target hardware and
software optimization; I just wanted to benchmark the logic of the stack itself
and see if it can come close to the desired throughput.
c) Our SoC is not "that" memory/CPU limited.

I was using the netconn API rather than the raw API or the sockets API. According
to the documentation it provides some convenience over the raw API without the
redundant copying of the sockets API. I might have used the raw API as well.
Now for my really basic test. I tried to send about 100 MBytes of data using the
lwIP netconn API on my PC: just a simple loop calling netconn_send. The payload
of each UDP packet was 1 KByte (i.e. no IP fragmentation). The time it took to
send 100 MBytes was... 25 seconds, i.e. about 4 MBytes per second. That is REALLY
slow...
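For reference, my send loop looks roughly like the sketch below, written against
the lwIP 1.4.0 netconn API. The destination address/port are placeholders, and
error handling is stripped; it needs a configured lwIP port underneath, so take
it as illustrative rather than self-contained.

```c
/* Sketch of the benchmark loop against the lwIP 1.4.0 netconn API.
 * Assumes lwIP is already initialized; dest_ip/dest_port are placeholders. */
#include "lwip/api.h"

#define PKT_SIZE   1024                 /* 1 KByte payload, no IP fragmentation */
#define TOTAL_SIZE (100 * 1024 * 1024)  /* ~100 MBytes in total */

static void udp_send_bench(ip_addr_t *dest_ip, u16_t dest_port)
{
    struct netconn *conn = netconn_new(NETCONN_UDP);
    u32_t sent = 0;

    netconn_connect(conn, dest_ip, dest_port);

    while (sent < TOTAL_SIZE) {
        struct netbuf *buf = netbuf_new();
        /* netbuf_alloc() reserves PKT_SIZE bytes of payload inside the netbuf */
        netbuf_alloc(buf, PKT_SIZE);
        /* payload content is irrelevant here -- we only measure throughput */
        netconn_send(conn, buf);
        netbuf_delete(buf);
        sent += PKT_SIZE;
    }
    netconn_delete(conn);
}
```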
I blamed my Windows machine (not the fastest, other processes running, probably
even antivirus or firewall). Anyway, I compiled and ran a similar application
using WinSock. The time it took to send 100 MBytes in 1 KByte packets was...
2.5 seconds, i.e. about 40 MBytes per second. Still not the best, but much more
understandable and acceptable as a first result.
I repeated the tests several times. While the exact results varied, the order of
magnitude remained the same.
Obviously, this raises some questions:
1. Has anyone tried to work with lwIP on Linux/Win32, and what throughput were
you able to achieve?
2. What are the options for optimizing the stack in terms of lwIP options
(lwipopts.h/compiler flags) and further optimizations inside the code? I am
connecting point-to-point.
3. It seems like lwIP is mostly concerned with memory consumption rather than
with achieving maximal throughput (probably aiming at pure embedded systems
that are short of memory and have weak CPUs, and are not going to communicate
at speeds around 0.8-1 Gbit anyway). What is the maximal bandwidth you have
achieved, and on what hardware?
4. What other open-source, hardware-independent stacks would you suggest, if
not lwIP?
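Regarding question 2, these are the lwipopts.h overrides I would guess matter
most for raw UDP throughput. The option names are from the 1.4.0 opt.h; the
values are only illustrative starting points, not tested recommendations.

```c
/* Candidate lwipopts.h overrides for UDP send throughput (lwIP 1.4.0).
 * Values are illustrative guesses, not measured recommendations. */
#define MEM_SIZE            (1024 * 1024)  /* heap for PBUF_RAM pbufs */
#define MEMP_NUM_PBUF       64             /* more pbuf structures in flight */
#define MEMP_NUM_NETBUF     32             /* netbufs used by the netconn API */
#define PBUF_POOL_SIZE      64
#define PBUF_POOL_BUFSIZE   1536           /* one full Ethernet frame per pbuf */
#define CHECKSUM_GEN_IP     0              /* only if the NIC offloads */
#define CHECKSUM_GEN_UDP    0              /* checksum generation */
#define LWIP_STATS          0              /* drop statistics bookkeeping */
```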

Some technical information. I've been connecting point-to-point between two
1 Gbit interfaces.
The tested (sending) machine was running Win7/32. It has an Intel Core2 Duo CPU
with 2 GB of RAM and a 200 GB HD.
The receiving machine was running Linux (Ubuntu). It has an Intel Core2 Duo CPU
with 4 GB of RAM and a 250 GB HD. It was running Ethereal.
The code was compiled using Visual Studio 2008, in release mode. The lwipopts.h
was unmodified by me. lwIP itself and its win32 port (contrib) are both
version 1.4.0. The win32 port uses the WinPcap library.


Thank you,
Stas



_______________________________________________
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users
