Simon,

On Nov 08, 2016, at 02:54 AM, Simon Goldschmidt <goldsi...@gmx.de> wrote:


Hi Oleg,
 
first of all, I think you are "misusing" TCP as a queue here (and at least our 
implementation is not really meant for this). It might work (or not, as you see), but in 
my opinion, the various timeouts implemented by various stacks impose the risk that your 
setup won't work if you change the client's system (e.g. update Windows).
 
If I read correctly, you have reported 2 possible issues so far:
1) segment is accepted but old window size is sent
 
I'm not sure what's best here. Of course, we must prevent silly window updates 
during normal operation. In your case, it would probably have been OK to send 
the actual/real window size, but we would have to find a way to decide when it's 
OK and when not...


As we saw in https://savannah.nongnu.org/bugs/?49128 (and as I've also seen with 
Windows 7 as the receiver), a stack can ACK zero window probes (increasing the ACK 
number by 1) that carry 1 byte from the next unsent segment after the window has 
closed. In the case I've seen with Windows 7, the window size reported in that ACK 
is still 0.


We could separate the silly window avoidance from the update threshold setting, 
because it should be safe to report the window once 1 MSS is available regardless 
of whether the update threshold of 1/4 of the window has been reached.  This issue 
would still exist when the application has read < 1 MSS of data, though.  My 
understanding of the update threshold is that it reduces the number of pure window 
updates by giving them a chance to be combined with an outgoing data segment or a 
delayed ACK.



From TCP/IP Illustrated, Volume 1, Section 22.3:



The receiver must not advertise small windows. The normal algorithm is for the 
receiver not to advertise a larger window than it is currently advertising (which 
can be 0) until the window can be increased by either one full-sized segment 
(i.e., the MSS being received) or by one-half the receiver's buffer space, 
whichever is smaller.
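

To make the decoupling idea concrete, here is a rough sketch (not lwIP's actual 
code, all names are placeholders) of a receiver-side check that applies the 
min(MSS, buffer/2) rule from the quote on its own, independently of the 1/4-window 
update threshold:

#include <stdint.h>

/* Rough sketch, not lwIP code; all names are placeholders.
 * free_wnd: receive space currently free
 * ann_wnd : window last announced to the peer
 * mss     : our advertised MSS
 * wnd_max : configured receive window (buffer size)
 */
static int window_update_allowed(uint32_t free_wnd, uint32_t ann_wnd,
                                 uint32_t mss, uint32_t wnd_max)
{
  uint32_t half_buf  = wnd_max / 2;
  uint32_t sws_limit = (mss < half_buf) ? mss : half_buf; /* Stevens' rule */

  if (free_wnd <= ann_wnd) {
    return 0; /* never shrink or merely repeat the announced window */
  }
  /* Silly window avoidance is satisfied once the window can grow by
   * min(MSS, half the buffer), regardless of the 1/4-window threshold. */
  return ((free_wnd - ann_wnd) >= sws_limit);
}

The 1/4-window threshold could then stay as a purely opportunistic delay (waiting 
for outgoing data or the delayed-ACK timer) rather than being the condition that 
makes the update legal in the first place.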


 
2) after storing a segment in "refused_data", no more ACKs are sent
 
The whole purpose of "refused_data" was to let the stack behave as if a buffer 
overflowed: if your device cannot handle incoming data at the speed it is sent by the 
remote host, the remote host should throttle its sender. This is achieved by not 
handling/not answering a packet at all, just as if it had been dropped due to congestion. 
This should cause the remote host's TCP to send less. ACKing an old seqno instead might 
work for you, but I don't know what the result would be for all remote stacks, so I'm 
very reluctant to change this...
 
As you can see from this, TCP is meant to achieve the highest possible 
throughput for the combination of remote host, network and local host. 
What you want instead is to make it a queue that keeps up a connection as long as 
possible without data being exchanged. I'm not fully convinced one can coexist 
with the other, but please come up with suggestions of how to fix this ;-)
 
 


Is the intent that an application would use the refused_data feature as part of 
its normal workflow?  Or is it expected that once this condition happens, the 
developer becomes aware of it and either increases resources in the mbox 
receive buffer implementation (to match the configured window size) or reduces 
the configured window size, since the system can't handle the data segment 
pattern?
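
For what it's worth, the pattern I had in mind is the raw-API one as I understand 
it: returning an error from the recv callback without freeing the pbuf, so the 
stack keeps it as refused_data and offers it again later. A rough sketch follows; 
app_try_enqueue() is a hypothetical application helper, not part of lwIP:

#include "lwip/tcp.h"
#include "lwip/pbuf.h"
#include "lwip/err.h"

/* Hypothetical application helper: copies the payload into an application
 * queue and returns 0 when that queue is currently full. */
static int app_try_enqueue(const struct pbuf *p);

static err_t my_recv(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
  (void)arg;
  if (p == NULL) {
    /* Remote side closed the connection. */
    return tcp_close(tpcb);
  }
  if (err != ERR_OK) {
    /* An error was signalled: just drop the data. */
    pbuf_free(p);
    return err;
  }
  if (!app_try_enqueue(p)) {
    /* Application buffer full right now: do NOT free p and return an error,
     * so the stack keeps the pbuf as refused_data and retries later. */
    return ERR_MEM;
  }
  /* Data consumed: re-open the receive window and free the pbuf. */
  tcp_recved(tpcb, p->tot_len);
  pbuf_free(p);
  return ERR_OK;
}

The callback would be registered with tcp_recv(pcb, my_recv); the question above 
is whether relying on that ERR_MEM path routinely is intended usage or just an 
overload escape hatch.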

 
Joel
_______________________________________________
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users
