On Jun 23, 2014, at 6:32 AM, Scheffenegger, Richard <r...@netapp.com> wrote:

> <as individual>
> 
> Hi Fred,
> 
> thank you for writing this down; one aspect that gets referred to, but not 
> made completely explicit in sections 3.2 and 3.3 is the interaction of the 
> AQM / Queue signals with the transport control loop.
> 
> IMHO, it should be made very clear, when the AQM action is done before the 
> queueing, that the AQM signal is delayed for the outer control loop; 
> obviously in a defensive loss situation, this will always be the case. In 
> comparison, when the Queue prepends the AQM action, the AQM signal is delayed 
> less to the outer control loop.
> 
> Depending on the depth of the queue / departure rate, that timing difference 
> can be significant...
> 
> I don't know how to put that into better words that would fit into your draft 
> though.
> 
> Best regards,
> 
> Richard Scheffenegger

I can go into that if you want.

My logic here goes something like this.

You can think of a TCP session as a control loop stuck into the middle of a 
larger stream. Imagine I’m moving a gigabyte file from here to there, the MSS 
is 1440 bytes (an IPv6 packet containing it is 1500 bytes), the bottleneck link 
between “here” and “there” is some specific rate, and the propagation delay 
between “here” and “there” is some non-trivial value. The smallest window that 
maximizes throughput is the number of segments needed to fully use the 
bottleneck capacity; any segment beyond that sits in a queue and increases the 
RTT. So at any given point in time, we can think of the transfer as having 
several components (a rough numeric sketch follows the list):

     K segments that are actually “in flight” somewhere
     An additional K segments that have been received and whose acks are in 
flight.
     cwnd-2*K segments sitting in a queue, probably at the bottleneck.
     Some number of bytes that haven’t been transmitted yet
          of those, the next cwnd segments will be transmitted as acks arrive.
     Some number of bytes that have already been received at the far end but 
not delivered yet to the application
     Zero or more segments that have been received out of order and are being 
held pending retransmission
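
To put rough numbers on that, here is a back-of-the-envelope sketch in Python; 
the 100 Mbit/s bottleneck, 25 ms one-way delay, and 50-segment overshoot are 
assumptions of mine for illustration, not anything from the draft:

# Back-of-the-envelope accounting for the components listed above.
# Link rate, delay, and overshoot are illustrative assumptions.
MSS = 1440                    # TCP payload bytes (1500-byte IPv6 packet)
link_rate = 100e6 / 8         # bottleneck: 100 Mbit/s, in bytes per second
owd = 0.025                   # one-way propagation delay: 25 ms

bdp_bytes = link_rate * owd   # bytes needed to fill the pipe in one direction
K = int(bdp_bytes // MSS)     # segments actually "in flight" one way
cwnd = 2 * K + 50             # assume the sender has overshot by 50 segments

print("K segments in flight       :", K)
print("K acks in flight           :", K)
print("cwnd - 2*K segments queued :", cwnd - 2 * K)
print("standing queue delay (ms)  :",
      round((cwnd - 2 * K) * MSS / link_rate * 1000, 1))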

Now, let’s mark a specific segment; I’ll call it segment N. I don’t really care 
what the value of N is. But it is the segment that the AQM algorithm will 
select and drop, by whatever rule it applies. In a tail-drop case, for the sake 
of argument, we can assert that all cwnd segments are between segment N and the 
receiver, or are represented in acknowledgments on their way back. In a 
head-drop case, there are cwnd-2*K segments sitting in the queue after segment 
N, K segments between it and the receiver, and K acks in flight.

  +------+      +--------+                      +--------+
  |      |      | Queue  |Data    K segments    |        |
  |      +----->+        +----> - - - - ------->+        |
  |Sender|      |Router 1|                      |Receiver|
  |      +<-----+        +<---- - - - - <-------+        |
  |      |      |        |Acks    K acks        |        |
  +------+      +--------+                      +--------+

So now, the whole thing rotates clockwise; a rough timing sketch follows step 5.
1) K segments already in flight arrive and are acknowledged while K acks arrive 
at the sender and trigger new transmissions. The queue still has cwnd-2*K 
segments in it.
2) depending on whether it was head drop or tail drop, somewhere between zero 
and cwnd-2*K segments arrive and are acknowledged while as many acks are 
received at the sender and trigger new transmissions. The queue still has 
cwnd-2*K segments in it.
3) the missed packet is detected by the receiver, who starts responding with 
duplicate acks. However, there are still K acks in flight, so the sender is 
going to send another K new packets before he even sees the first dupack. The 
queue still has cwnd-2*K segments in it.
4) we now get a long stream of dupacks, and the sender presumably retransmits 
the dropped packet. AT THIS POINT, SENDER REDUCES CWND. If more than one packet 
got dropped, let’s hope the SACK logic retransmits those as well.
5) At long last, the retransmission arrives at the receiver, who sends a giant 
ack and starts sending a stream of acks, triggering new transmissions.
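
Here is the rough timing sketch referred to above. It only estimates when the 
sender sees the first duplicate ack after the drop, ignores delayed acks and 
the three-dupack threshold, and reuses the same made-up link numbers; treat it 
as an illustration of the steps, not a model of any particular stack:

# When does the first dupack reach the sender after the drop?
# Simplified: fixed-size segments, a single bottleneck, no delayed acks.
MSS = 1440
link_rate = 100e6 / 8           # 100 Mbit/s bottleneck, in bytes per second
owd = 0.025                     # one-way propagation delay, seconds
serialize = MSS / link_rate     # time to transmit one segment at the bottleneck

def first_dupack_delay(segments_ahead_of_gap):
    # Segments that must drain through the bottleneck before the receiver
    # sees a segment beyond the missing one and emits a duplicate ack,
    # plus one trip to the receiver and one trip back for the ack.
    return segments_ahead_of_gap * serialize + 2 * owd

standing_queue = 50             # cwnd - 2*K segments sitting in the queue
print("tail drop: %.1f ms" % (first_dupack_delay(standing_queue) * 1000))
print("head drop: %.1f ms" % (first_dupack_delay(1) * 1000))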

To determine the difference between head-drop and tail-drop, we have to ask 
ourselves how big cwnd-2*K is. If we are using traditional 
too-full-drop-something without AQM, it might be a largish number (it could be 
the size of the memory allocated to the queue if there is no competing 
traffic), and it is probably fair to say that head-drop would get the event 
back to the sender more rapidly than tail-drop.

If we are using any AQM technology - RED, WRED, ARED, PIE, CoDel, Blue, AVQ, or 
whatever else, the fundamental purpose of the logic is to keep the queue 
relatively shallow, even if it has a large memory system behind it. In RED 
terms, we would estimate that the mean queue depth will approximate 
min-threshold or less; each of the other technologies has its counterpart to 
that and will similarly keep the latency and/or queue depth down.

Hence, cwnd-2*K is a function of the mean queue depth at the bottleneck, and is 
a relatively small number. So if we’re using an AQM algorithm - any AQM 
algorithm, as long as it works - the real difference between head-drop and 
tail-drop is relatively small.
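
Put differently, the head-drop versus tail-drop gap is roughly the drain time 
of the standing queue, and AQM is what keeps that drain time small. A quick 
comparison with the same assumed 100 Mbit/s bottleneck and two assumed queue 
depths:

# Feedback-delay difference between head drop and tail drop is roughly the
# drain time of the standing queue.  Both queue depths are assumptions.
MSS = 1440
link_rate = 100e6 / 8              # 100 Mbit/s bottleneck, bytes per second
serialize = MSS / link_rate        # seconds per segment

for label, depth in (("tail-drop buffer, no AQM", 1000),
                     ("AQM holding the queue shallow", 20)):
    print("%-30s ~ %6.1f ms" % (label, depth * serialize * 1000))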

Which makes me ask - what are we really talking about? Does it actually matter?
