Following up on my response; see below.

On Nov 08, 2016, at 05:58 PM, Joel Cunningham <joel.cunning...@me.com> wrote:


Simon,


On Nov 08, 2016, at 02:54 AM, Simon Goldschmidt <goldsi...@gmx.de> wrote:


Hi Oleg,
 
first of all, I think you are "misusing" TCP as a queue here (and at least our implementation is not really meant for this). It might work (or not, as you see), but in my opinion, the various timeouts implemented by various stacks impose the risk that your setup won't work if you change the client's system (e.g. by updating Windows).
 
If I read correctly, you have so far reported 2 possible issues:
1) segment is accepted but old window size is sent
 
I'm not sure what's best here. Of course, we must prevent silly window updates during normal operation. In your case, it would probably have been OK to send the actual/real window size, but we would have to find a way to decide when it's OK and when not...


As we saw in https://savannah.nongnu.org/bugs/?49128 (and as I've also seen with Windows 7 as a receiver), a stack can ACK (increasing the ACK number by 1) zero window probes that contain 1 byte of the next unsent segment after the window has closed. In the case I've seen with Windows 7, the reported window size in the ACK is still 0.


We could separate the silly window avoidance from the update threshold setting, because it should be safe to report the window once 1 MSS is available regardless of whether the update threshold is 1/4 of the window. This issue would still exist when the application has read < 1 MSS of data, though. My understanding of the update threshold is that it reduces the number of window updates by giving a chance to combine the window update with an outgoing data segment/delayed ACK.
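
Roughly what I have in mind is below (just a sketch, not a patch: the helper name is made up, and the fields follow my reading of struct tcp_pcb):

#include "lwip/tcp.h"

/* Sketch only: decide what window to advertise, decoupling silly
 * window avoidance from the 1/4-window update threshold. The
 * helper name is hypothetical; rcv_wnd/rcv_ann_wnd/mss follow my
 * reading of struct tcp_pcb. */
static u32_t
pick_announced_wnd(const struct tcp_pcb *pcb)
{
  if (pcb->rcv_wnd >= pcb->mss) {
    /* At least one full segment fits: safe to advertise the real
       window even if we are below the update threshold. */
    return pcb->rcv_wnd;
  }
  /* Less than 1 MSS free: keep the previously announced (possibly
     zero) window to avoid a silly window update. */
  return pcb->rcv_ann_wnd;
}

The update threshold would then only gate when a pure window update is sent, not what window value goes into segments we send anyway.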



From TCP/IP Illustrated, Volume 1, section 22.3:



The receiver must not advertise small windows. The normal algorithm is for the receiver not to advertise a larger window than it is currently advertising (which can be 0) until the window can be increased by either one full-sized segment (i.e., the MSS being received) or by one-half the receiver's buffer space, whichever is smaller.


Just below this section, TCP/IP Illustrated gives an excellent example that walks through the exact issue we are discussing:


When the persist timer expires, 1 byte of data is sent (segment 6). The receiving application has read 256 bytes from the receive buffer (at time 3.99), so the byte is accepted and acknowledged (segment 7). But the advertised window is still 0, since the receiver does not have room for either one full-sized segment or one-half of its buffer. This is silly window avoidance by the receiver.



The sender's persist timer is reset and goes off again 5 seconds later (at time 10.151). One byte is again sent and acknowledged (segments 8 and 9). Again the amount of room in the receiver's buffer (1022 bytes) forces it to advertise a window of 0.



When the sender's persist timer expires next, at time 15.151, another byte is sent and acknowledged (segments 10 and 11). This time the receiver has 1533 bytes available in its buffer, so a nonzero window is advertised. The sender immediately takes advantage of the window and sends 1024 bytes (segment 12). The acknowledgment of these 1024 bytes (segment 13) advertises a window of 509 bytes. This appears to contradict what we've seen earlier with small window advertisements.


So lwIP is behaving correctly when the window is < 1 MSS. For wnd >= 1 MSS (regardless of the update threshold), we should be using the current window value in the ACK.
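
Putting the book's rule next to that conclusion in code (again only a sketch: LWIP_MIN and TCP_WND are the real lwIP macros, but the helper is hypothetical and uses TCP_WND as a stand-in for the receive buffer size):

#include "lwip/opt.h"
#include "lwip/def.h"
#include "lwip/tcp.h"

/* Sketch of the receiver-side rule from TCP/IP Illustrated: only
 * raise the advertised window once it can grow by at least
 * min(1 MSS, half the receive buffer); otherwise repeat the old
 * (possibly zero) announcement, as in the probe example above. */
static u32_t
wnd_for_ack(const struct tcp_pcb *pcb)
{
  u32_t swa_min = LWIP_MIN((u32_t)pcb->mss, TCP_WND / 2);

  if (pcb->rcv_wnd >= (u32_t)pcb->rcv_ann_wnd + swa_min) {
    return pcb->rcv_wnd;   /* grew enough: use the current window */
  }
  return pcb->rcv_ann_wnd; /* e.g. still 0 while ACKing a probe byte */
}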

2) after storing a segment in "refused_data", no more ACKs are sent
 
The whole purpose of "refused_data" was to let the stack behave as if a buffer had overflowed: if your device cannot handle incoming data at the speed it is sent by the remote host, the remote host should throttle its sending. This is achieved by not handling/not answering a packet at all, just as if it had been dropped due to congestion. This should cause the remote host's TCP to send less. ACKing an old seqno instead might work for you, but I don't know what the result would be for all remote stacks, so I'm very reluctant to change this...
 
As you can see from this, TCP is meant to achieve the highest possible throughput for the combination of remote host, network and local host. What you want instead is to make it a queue that keeps up a connection as long as possible without data being exchanged. I'm not fully convinced one can coexist with the other, but please come up with suggestions for how to fix this ;-)
 
 


Is the intent that an application would use the refused_data feature as part of its normal workflow? Or is it expected that once this condition happens, the developer becomes aware of it and either increases resources in the mbox receive buffer implementation (to match the configured window size) or reduces the configured window size, since the system can't handle the data segment pattern?
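
For context, the kind of "normal workflow" I could imagine is the raw-API pattern below (a sketch; app_queue_full()/app_enqueue() are hypothetical application functions, but returning != ERR_OK from the recv callback is what makes the stack keep the pbuf as refused_data and re-deliver it later):

#include "lwip/tcp.h"

/* Hypothetical application hooks. */
static int  app_queue_full(void *arg);
static void app_enqueue(void *arg, struct pbuf *p);

static err_t
app_recv(void *arg, struct tcp_pcb *tpcb, struct pbuf *p, err_t err)
{
  if (p == NULL) {
    tcp_close(tpcb);            /* remote side closed the connection */
    return ERR_OK;
  }
  if (err != ERR_OK) {
    pbuf_free(p);
    return err;
  }
  if (app_queue_full(arg)) {
    /* Do not free p and return != ERR_OK: lwIP parks the pbuf as
       refused_data and calls us again later, so no ACK advances
       past this point and the sender eventually backs off. */
    return ERR_MEM;
  }
  app_enqueue(arg, p);            /* takes ownership of p */
  tcp_recved(tpcb, p->tot_len);   /* re-open the receive window */
  return ERR_OK;
}

The callback would be registered with tcp_recv(pcb, app_recv) as usual.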

 
Joel

_______________________________________________
lwip-users mailing list
lwip-users@nongnu.org
https://lists.nongnu.org/mailman/listinfo/lwip-users
