Gerrit,

Gerrit Renker wrote:
| I don't think the RFCs are nearly as ambiguous as you claim. RFC 4342
| explicitly allows interpretations (2) 's' is an average and (3) 's = MSS'.
| (1) is not mentioned.
Section 5.3: "CCID 3 is intended for applications that use a fixed packet size,
              and that vary their sending rate in packets per second in response
              to congestion."

Yes I know. But the end of that same paragraph says "However, some attention might be required for applications using CCID 3 that vary their packet size not in response to congestion, but in response to other application-level requirements." And that should clearly take precedence.

A clarification to the first sentence might result in a paragraph like the following.

   CCID 3 is intended for applications that vary their sending rate in
   packets per second in response to congestion, rather than varying their
   packet size.  CCID 3 is not appropriate for applications that require a
   fixed interval of time between packets, and that vary their packet size
   instead of their packet rate in response to congestion.  However, some
   attention might be required for applications using CCID 3 that vary
   their packet size not in response to congestion, but in response to
   other application-level requirements.

I'll save this clarification for the future. Do people think this rises to the level of an erratum? I'm skeptical.


| You clearly believe that (3) 's = MSS' is a bad idea. However, the documents
| allow it.
I was not stating a personal `belief', these are results published in the
literature. That the documents allow it is of no help: as explained, they
allow `s' to be interpreted in three entirely different ways. This variability
is clearly not implementable.

(1) 's' may be treated in two ways, not three.

(2) The variability is clearly implementable, just as a TCP stack can optionally support SACK.

(3) The document offers the implementer a choice. This is entirely typical of specifications.

(4) The results published in the literature are evocative, and I agree with you that packet size is problematic -- whether moving average or MSS. However, those results do not convince me that CCID 3 with 's=MSS' will be wildly unfair in practice, relative to (for example) TCP itself with small packets. For that reason I see no need to deprecate 's=MSS'. Let us gain implementation experience.

Working group feedback on this issue might be helpful.


| I believe you are asking for (3) 's = MSS' to be officially deprecated. I see
| no need for that. In general 's = MSS' is conservative (yes?) so there is no
| need to deprecate it.
No I am asking for clearer specification of this issue so that implementers (not
trained researchers) can unambiguously derive an implementation from it.

?? But I keep giving you a clear specification and you disagree with it. The document's own specification still seems relatively clear. Let me try again.

(1) The implementer may treat 's' as a constant value, namely MSS. I.e., for every occurrence of 's', the implementer may substitute the MSS. A side effect of this choice is that the implementer may calculate loss event rate in packets/sec and ignore 's' entirely.

(2) The implementer may measure 's' using a moving average of packet sizes over the last 4 loss intervals. If you would like particular EWMA constants, I will provide them. As an implementer, I would base the moving average on the same weights used to calculate the average loss interval (see RFC 3448 section 8); in this case, this means 's' equals the average of four values, which are the average packet sizes over the last four loss intervals.

In addition I feel that, because this area is problematic, other implementation choices may be justified and useful. Such as packet size socket options, or virtual packets.


| I would want a future revision of RFC 4342 to consider something like the
| "virtual packets" idea in [WBLB04]. I agree with their conclusion that
| estimating packet sizes with estimators is danger fraught, and that one should
| instead modify the way losses are defined. "Virtual packets" also feel like
| TCP-ABC.
To me LIP Scaling seemed to be the most promising proposal,

The authors would disagree with you. "Adjusting LIP scaling to byte mode results is an even less favorable mechanism." ... "Two of the proposed mechanisms, random sampling and virtual packets, perform well over a wide range of different network conditions if the network drops packets independent of their size." ...

but we are entering a domain here which was not my intention - we need a
consistent, working DCCP implementation based on existing standards-track
specifications.

Then set 's=MSS'! As has been mentioned many times before. That is easiest. And it is clearly based on existing standards-track specifications.

Eddie
