Hi Gorry, all, 

We have posted a -03 version that incorporates Gorry’s comments and his work 
on making this document consistent with the recommendations document. 

The major modifications in the -03 version are:
- one new multi-AQM scenario;
- a summary table in the conclusion that clearly states the requirements for 
each scenario.

We are currently working on Jim’s review and will get back to you as soon as 
possible.

Please see some comments on Gorry’s review inline. 


> On 01 May 2015, at 09:12, go...@erg.abdn.ac.uk wrote:
> 
> This is a review of AQM Evaluation Guidelines 
> (draft-kuhn-aqm-eval-guidelines-02)
> 

The name above is not quite right; the reviewed document is: AQM Characterization 
Guidelines (draft-ietf-aqm-eval-guidelines-02).

> I think this document has a useful set of tests.
> 
> I think the document has value and an updated version of this document
> should be published. There are likely to be more variants of AQM schemes
> that emerge and this provides a valuable framework to help compare them
> and to provide better insight into their properties.
> 
> I sent a set of minor corrections and formatting issues to the document
> editors as a separate email - I’ll not list them here, since these are now
> planned for the next revision. I’ll focus on three topics:
> 
> (1) The document is a standards-track specification of tests. While the
> tests seemed generally good, I found the original recommendations
> inconsistent.
> 

We acknowledge that there may have been some inconsistency and ambiguity in the 
use of MUSTs, SHOULDs, etc.;
but thanks to Gorry’s help, we have made sure that the usage of these keywords 
is now more accurate.
If some inconsistency remains, we believe the summary table will help make the 
document more consistent.
We also encourage people on the mailing list to point out any inconsistency that 
may still exist. Our aim is to 
have zero inconsistency and complete clarity in the document’s text. 

> Section 1: I think this section is informative. I propose moving the one
> new requirement (recommending documenting a drop-tail comparison) to the
> relevant section.

This section is indeed informative. We propose to move the following text to 
the Methodology section:
“
   One key objective behind formulating the guidelines is to help
   ascertain whether a specific AQM is not only better than drop-tail
   but also safe to deploy.  Testers therefore need to provide a
   reference document for their proposal discussing performance and
   deployment compared to those of drop-tail.
”

> The language needs to be adjusted to avoid RFC
> 2119 language until this is defined.
> 

The paragraphs of the first section have been reordered to avoid this 
situation.

> Section 2: There is a RFC2119 keyword at the start of this section. I
> think it may be good to consider how this is expressed: “These metrics are
> RECOMMENDED to assess the performance of an AQM  scheme.” This could be
> better perhaps to say: “This section provides normative requirements for
> metrics that can be used to assess the performance of an AQM scheme.”
> 

Thanks.

> - I suggest rephrasing one sentence to:“ It is therefore not necessary to
> measure all of the following metrics, and guidance is provided for each
> metric.” This is to avoid the possibility that the reader thinks all
> recommendations are simply optional!
> 

Thanks.

> - Section 2.3: Packet loss synchronization - Does not make a
> recommendation, is this SHOULD, MUST or MAY - saying it is important is
> not sufficient. I suspect it is a “MUST”.
> 

We would say it is a “SHOULD”, as this section only provides guidance on each 
metric, and it is not necessary to measure all of them. 
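
As a purely illustrative aside (hypothetical function and variable names, 
assuming per-flow loss-event timestamps are available from the testbed traces), 
one simple way such a loss synchronization ratio could be computed is sketched 
below; the draft does not mandate this particular formula:

    from collections import defaultdict

    def loss_synchronization_ratio(loss_events, n_flows, window=0.1):
        """Illustrative only: fraction of flows that lose packets in the
        same time window, averaged over windows containing any loss.
        loss_events is a list of (timestamp_in_seconds, flow_id) tuples."""
        if not loss_events or n_flows == 0:
            return 0.0
        buckets = defaultdict(set)
        for ts, flow in loss_events:
            buckets[int(ts / window)].add(flow)
        ratios = [len(flows_hit) / n_flows for flows_hit in buckets.values()]
        return sum(ratios) / len(ratios)

    # Example: 3 flows; two of them lose packets in the same 100 ms window.
    events = [(0.01, "f1"), (0.02, "f2"), (0.50, "f1")]
    print(loss_synchronization_ratio(events, n_flows=3))   # 0.5

A value close to 1.0 would indicate that losses hit most flows at the same 
time (global synchronization), whereas values near 1/n_flows would indicate 
desynchronized losses.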

> - I think sections 3.1 and 3.2 are informative. Section 4.4 - likewise. In
> general, I think the RFC2119 usage needs to be tightened to clearly
> conclude what is to be tested in each case, whether the test is optional,
> etc.

We hope that the summary table will help to clearly stipulate the test 
scenarios using RFC 2119 terminology.

Even if a test is only optional, the description of the test may include “MUST 
do this / that”, to clearly say what must be done
if the test is selected. We hope that we have made that clearer in the -03 
version. 

> Specifically, I think the language for the requirement should be to
> “evaluate” each of the test cases, not just to implement a scenario.
> 

Agree.

> - Section 12: This combines “SHOULD” with “sufficiently detailed” - is it
> ok to say SHOULD provide a detailed description, 
> or something that leaves the 
> judgment of the “sufficient” part to the evaluator, ensuring some detail is
> provided.
> 

Agree - the text now says: 

“   A description of each test setup SHOULD be detailed to allow this”

> - It may be useful to add a summary table. I do like the idea of a summary
> table at the end, but to be clear if this is done, please use the EXACT
> RFC2119 keywords from the sections, to prevent people detecting
> inconsistencies here.
> 

The conclusion now features a summary table that sums up the requirements for 
each test. 

> (2) Scenarios and test cases - I think we need to be clear (above) what is
> required.
> 
> I also suggest introducing a multi-AQM scenario where we identify that
> there is a need to consider the implications of more than one congested
> bottleneck along a path that has an active AQM.  This case came out of
> Gen-ART review for the AQM Recommendations, and I think we should identify
> this as a test case: e.g. “Transports operating under the control of AQM
> experience the effect of multiple control loops that react over different
> timescales. It is therefore important that proposed AQM schemes are seen
> to be stable when they are deployed at multiple points of potential
> congestion along an Internet path. The pattern of congestion signals (loss
> or ECN-marking) arising from AQM methods also needs to not adversely
> interact with the dynamics of the transport protocols that they control.”
> 

We have added a multi-AQM scenario with a specific parking-lot topology. 
For the sake of simplicity, the AQM scheme should be the same on each router, 
but alternative configurations could be considered. 
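
For readers unfamiliar with the term, here is a purely illustrative sketch 
(hypothetical node, link and flow names; not the notation or tooling used in 
the draft) of a parking-lot topology with the same AQM at every bottleneck, 
where one end-to-end flow crosses all bottlenecks and cross-traffic joins at 
each hop:

    # Illustrative parking-lot topology: every bottleneck link runs the same
    # AQM scheme under test, and each hop carries a different mix of flows.
    from dataclasses import dataclass

    @dataclass
    class Link:
        src: str
        dst: str
        rate_mbps: float
        aqm: str

    bottlenecks = [
        Link("R1", "R2", 10.0, "aqm_under_test"),
        Link("R2", "R3", 10.0, "aqm_under_test"),
        Link("R3", "R4", 10.0, "aqm_under_test"),
    ]

    # One flow over the whole path, plus one cross-traffic flow per bottleneck.
    flows = [("S0", "D0", ["R1", "R2", "R3", "R4"])]
    flows += [(f"S{i}", f"D{i}", [l.src, l.dst])
              for i, l in enumerate(bottlenecks, start=1)]

    for src, dst, path in flows:
        print(f"{src} -> {dst} via {' -> '.join(path)}")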

> (3) I have sent text (offlist) that attempts to correct some deviations in
> language usage from the AQM recommendations ID, and suggested to cite this
> where possible, rather than saying similar things. These details proved to be
> minor; I suggested some additional points and rewording for others.
> 

Thanks.

> I think though a small amount of additional work is still needed to agree
> which test evaluations and discussions are a MUST and which are a MAY and
> there seem to be some contradictions, which will need to be checked in the
> next revision.

Kind regards, 

The authors. 

> Best wishes,
> 
> Gorry