Re: [Gen-art] Gen-ART review of draft-ietf-aqm-eval-guidelines-11

2016-05-21 Thread Ralph Droms (rdroms)

> On May 18, 2016, at 9:29 PM, Jari Arkko wrote:
> 
> Thanks for your review, Ralph!

You're welcome.  I'm glad to hear you found the review valuable.

Responses in line...

> 
> I do think some of the points you raised need to be addressed. Inline:
> 
>> 
>> #
>> 
>> I often react to the use of RFC 2119 language in an Informational document 
>> by asking is that language really necessary?  I'll ask the question here: in 
>> the context of this Informational document, which appears to be entirely 
>> advisory in providing guidelines, what does the use of RFC 2119 
>> "requirements language" add to the meaning of the document.
>> 
>>>> Authors: Indeed, the use of RFC 2119 language is not mandatory for such 
>>>> an Informational document. However, using it enables us to give weight 
>>>> to the different parameterizations of the tests. Even though it is not 
>>>> mandatory, we believe that it eases the reading of the document for 
>>>> someone familiar with IETF wording.
> 
> I think that’s right.

OK.

>> #
>> 
>> Figure 1 is not clear to me.  Where are the physical links and interfaces?  
>> Are there multiple physical senders and receivers or are "senders A" 
>> instantiated on a single host (does it make a difference)?  Are there 
>> static-sized buffers for each interface or do all the buffers share one 
>> memory space?
>> 
>>>> Authors: We acknowledge that Figure 1 is not very clear. We have 
>>>> voluntarily omitted details on the number of senders, receivers and 
>>>> traffic classes, since instantiation on a specific testbed would remove 
>>>> the generality of the figure and the described architecture. We believe 
>>>> that the text helps in reading the figure. Also, the rationale of the 
>>>> figure is to explain the notation rather than to go deeper into the 
>>>> topology, which is in any case very generic.
> 
> My opinion is that Figure 1 was very hard to read, even with reading the 
> text. I’d like to see some improvement in either the text or the figure.

I can understand that the figure might be an illustrative abstraction, but I 
still think it would be helpful to have more detail in the text and some 
restructuring of the figure.

>> #
>> 
>> In section 3.1, is there a need to say something about the relative 
>> capacities of the various links and the rates at which the various flows 
>> generate traffic?
>> 
>>>> Authors: These capacities are described in a later section when needed; 
>>>> to remain high level and not focus on any applicability context (wi-fi, 
>>>> rural satellite access, fiber access, etc.), they are not specified for 
>>>> the whole document. The rates at which the flows generate traffic are 
>>>> specified for each scenario described later.
> 
> OK

OK

>> #
>> 
>> I would have trouble following the guidelines set out in section 4.3.1.  I 
>> can understand the need for consideration of the tunable control parameters 
>> when comparing different AQM schemes.  However, I don't know what 
>> "comparable" means for control parameters that are likely quite different 
>> between AQM schemes.  I also think one would want to compare optimal control 
>> settings for the different schemes, to compare best-case performance.  Or, 
>> for AQM schemes whose performance is highly dependent on operational 
>> conditions, one might want to compare settings that are sub-optimal for any 
>> particular test condition but that give better performance over a wide range 
>> of conditions.
>> 
>>>> Authors: The intent of the first recommendation is to make testers 
>>>> aware of which control parameters govern which behavior, so that they 
>>>> make apples-to-apples comparisons.
>> To be more precise, we could change the text in section 4.3.1 as 
>> follows:
>> "1. Similar control parameters and implications: Testers should be aware of 
>> the control parameters of the different schemes that control similar 
>> behavior. Testers should also be aware of the input value ranges and 
>> corresponding implications. For example, consider two different schemes - 
>> (A) queue-length based AQM scheme, and (B) queueing-delay based scheme. A 
>> and B are likely to have different kinds of control inputs to control the 
>> target delay - target queue length in A vs. target queuing delay in B, for 
>> example. Setting parameter values such as 100MB for A vs. 10ms for B will 
>> have different implications depending on evaluation context.  Such 
>> context-dependent implications must be considered before drawing conclusions 
>> on performance comparisons. Also, it would be preferable if an AQM proposal 
>> listed such parameters and discussed how each relates to network 
>> characteristics such as capacity, average RTT etc.”
> 
> OK for me

I think the suggested text is OK, too.
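As an aside, one way to make the context-dependence concrete: a byte-based queue target only translates into a queuing delay once the drain rate of the link is fixed. A rough sketch (my own illustration, with made-up numbers, not text from the draft):

```python
# Illustration only: converting a queue-length target (scheme A) into the
# queuing delay it implies, so it can be compared against a delay target
# (scheme B).  Assumes a single FIFO drained at a fixed link rate.

def implied_delay_s(target_queue_bytes: float, link_rate_bps: float) -> float:
    """Queuing delay (seconds) implied by a byte-based queue target on a
    link drained at link_rate_bps (bits per second)."""
    return (target_queue_bytes * 8) / link_rate_bps

# A 100 kB queue target means 80 ms of queuing delay at 10 Mbit/s, but
# only 0.8 ms at 1 Gbit/s -- the "same" parameter value has very
# different implications depending on the evaluation context.
print(implied_delay_s(100_000, 10e6))  # 0.08 s
print(implied_delay_s(100_000, 1e9))   # 0.0008 s
```

This covers only the trivial fixed-rate case, of course; with variable-rate links the comparison is harder, which is rather the point of the proposed text.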

>> #
>> 
>> Section 4.4 seems to give advice to the AQM designer rather than describe 
>> guidelines for characterization.  Section 4.4 should either be rewritten to 
>> give guidelines for structuring measurements to account for varying packet 
>> sizes or the section should be elided.

Re: [Gen-art] Gen-ART review of draft-ietf-aqm-eval-guidelines-11

2016-05-18 Thread Jari Arkko
Thanks for your review, Ralph!

I do think some of the points you raised need to be addressed. Inline:

> 
> #
> 
> I often react to the use of RFC 2119 language in an Informational document by 
> asking is that language really necessary?  I'll ask the question here: in the 
> context of this Informational document, which appears to be entirely advisory 
> in providing guidelines, what does the use of RFC 2119 "requirements 
> language" add to the meaning of the document.
> 
>>> Authors: Indeed, the use of RFC 2119 language is not mandatory for such 
>>> an Informational document. However, using it enables us to give weight 
>>> to the different parameterizations of the tests. Even though it is not 
>>> mandatory, we believe that it eases the reading of the document for 
>>> someone familiar with IETF wording.

I think that’s right.

> 
> #
> 
> Figure 1 is not clear to me.  Where are the physical links and interfaces?  
> Are there multiple physical senders and receivers or are "senders A" 
> instantiated on a single host (does it make a difference)?  Are there 
> static-sized buffers for each interface or do all the buffers share one 
> memory space?
> 
>>> Authors: We acknowledge that Figure 1 is not very clear. We have 
>>> voluntarily omitted details on the number of senders, receivers and 
>>> traffic classes, since instantiation on a specific testbed would remove 
>>> the generality of the figure and the described architecture. We believe 
>>> that the text helps in reading the figure. Also, the rationale of the 
>>> figure is to explain the notation rather than to go deeper into the 
>>> topology, which is in any case very generic.

My opinion is that Figure 1 was very hard to read, even with reading the text. 
I’d like to see some improvement in either the text or the figure.

> #
> 
> In section 3.1, is there a need to say something about the relative 
> capacities of the various links and the rates at which the various flows 
> generate traffic?
> 
>>> Authors: These capacities are described in a later section when needed; 
>>> to remain high level and not focus on any applicability context (wi-fi, 
>>> rural satellite access, fiber access, etc.), they are not specified for 
>>> the whole document. The rates at which the flows generate traffic are 
>>> specified for each scenario described later.

OK

> #
> 
> I would have trouble following the guidelines set out in section 4.3.1.  I 
> can understand the need for consideration of the tunable control parameters 
> when comparing different AQM schemes.  However, I don't know what 
> "comparable" means for control parameters that are likely quite different 
> between AQM schemes.  I also think one would want to compare optimal control 
> settings for the different schemes, to compare best-case performance.  Or, 
> for AQM schemes whose performance is highly dependent on operational 
> conditions, one might want to compare settings that are sub-optimal for any 
> particular test condition but that give better performance over a wide range 
> of conditions.
> 
>>> Authors: The intent of the first recommendation is to make testers 
>>> aware of which control parameters govern which behavior, so that they 
>>> make apples-to-apples comparisons.
> To be more precise, we could change the text in section 4.3.1 as follows:
> "1. Similar control parameters and implications: Testers should be aware of 
> the control parameters of the different schemes that control similar 
> behavior. Testers should also be aware of the input value ranges and 
> corresponding implications. For example, consider two different schemes - (A) 
> queue-length based AQM scheme, and (B) queueing-delay based scheme. A and B 
> are likely to have different kinds of control inputs to control the target 
> delay - target queue length in A vs. target queuing delay in B, for example. 
> Setting parameter values such as 100MB for A vs. 10ms for B will have 
> different implications depending on evaluation context.  Such 
> context-dependent implications must be considered before drawing conclusions 
> on performance comparisons. Also, it would be preferable if an AQM proposal 
> listed such parameters and discussed how each relates to network 
> characteristics such as capacity, average RTT etc.”

OK for me

> #
> 
> Section 4.4 seems to give advice to the AQM designer rather than describe 
> guidelines for characterization.  Section 4.4 should either be rewritten to 
> give guidelines for structuring measurements to account for varying packet 
> sizes or the section should be elided.
> 
>>> Authors: We could modify the text of the 2nd paragraph of 4.4, if you 
>>> think that this clarifies the issue.
> " An AQM scheme SHOULD adhere to the recommendations outlined in [RFC7141], 
> and SHOULD NOT provide undue advantage to flows with smaller packets 
> [RFC7567]. In order to evaluate if an AQM scheme is biased towards flows with 
> smaller size packets

Re: [Gen-art] Gen-ART review of draft-ietf-aqm-eval-guidelines-11

2016-05-17 Thread Kuhn Nicolas
Hi again, 

FYI, attached to this email is what could be a new ID for the document. 

Kind regards,

Nicolas



-----Original Message-----
From: Kuhn Nicolas 
Sent: Tuesday, May 17, 2016 10:31
To: Ralph Droms (rdroms); gen-art@ietf.org
Cc: draft-ietf-aqm-eval-guidelines@tools.ietf.org; Mirja Kuehlewind (IETF)
Subject: RE: Gen-ART review of draft-ietf-aqm-eval-guidelines-11

Thanks a lot for this review. We have proposed answers to the different 
points and hope that they are clearer. Where we believed it was needed, we 
have suggested text changes. If you think that this should result in the 
submission of a new ID, please let us know. 

Kind regards, 

The authors

-----Original Message-----
From: Ralph Droms (rdroms) [mailto:rdr...@cisco.com]
Sent: Thursday, April 28, 2016 21:23
To: gen-art@ietf.org
Cc: draft-ietf-aqm-eval-guidelines@tools.ietf.org
Subject: Gen-ART review of draft-ietf-aqm-eval-guidelines-11

I am the assigned Gen-ART reviewer for this draft. The General Area Review Team 
(Gen-ART) reviews all IETF documents being processed by the IESG for the IETF 
Chair.  Please treat these comments just like any other last call comments.

For more information, please see the FAQ at

.

Document: draft-ietf-aqm-eval-guidelines-11
Reviewer: Ralph Droms
Review Date: 2016-04-28
IETF LC End Date: 2016-05-04
IESG Telechat date: 2016-05-19

Summary: This draft is on the right track but has open issues, described in the 
review.

In general, I think the document could be read, implemented and used to 
generate useful characterizations of AQM schemes.  However, the motivations for 
some of the measurements and scenarios seem weak to me, which might compromise 
the weight given to the conclusions drawn from the guidelines.

Major issues:

None.  However, the list of minor issues and nits, taken together, could be 
considered a major issue to be resolved before publication.

Minor issues:

#

I often react to the use of RFC 2119 language in an Informational document by 
asking is that language really necessary?  I'll ask the question here: in the 
context of this Informational document, which appears to be entirely advisory 
in providing guidelines, what does the use of RFC 2119 "requirements language" 
add to the meaning of the document.

>> Authors: Indeed, the use of RFC 2119 language is not mandatory for such an 
>> Informational document. However, using it enables us to give weight to the 
>> different parameterizations of the tests. Even though it is not mandatory, 
>> we believe that it eases the reading of the document for someone familiar 
>> with IETF wording.

#

Figure 1 is not clear to me.  Where are the physical links and interfaces?  Are 
there multiple physical senders and receivers or are "senders A" instantiated 
on a single host (does it make a difference)?  Are there static-sized buffers 
for each interface or do all the buffers share one memory space?

>> Authors: We acknowledge that Figure 1 is not very clear. We have 
>> voluntarily omitted details on the number of senders, receivers and 
>> traffic classes, since instantiation on a specific testbed would remove 
>> the generality of the figure and the described architecture. We believe 
>> that the text helps in reading the figure. Also, the rationale of the 
>> figure is to explain the notation rather than to go deeper into the 
>> topology, which is in any case very generic.

#

In section 3.1, is there a need to say something about the relative capacities 
of the various links and the rates at which the various flows generate traffic?

>> Authors: These capacities are described in a later section when needed; 
>> to remain high level and not focus on any applicability context (wi-fi, 
>> rural satellite access, fiber access, etc.), they are not specified for 
>> the whole document. The rates at which the flows generate traffic are 
>> specified for each scenario described later.

#

I would have trouble following the guidelines set out in section 4.3.1.  I can 
understand the need for consideration of the tunable control parameters when 
comparing different AQM schemes.  However, I don't know what "comparable" means 
for control parameters that are likely quite different between AQM schemes.  I 
also think one would want to compare optimal control settings for the different 
schemes, to compare best-case performance.  Or, for AQM schemes whose 
performance is highly dependent on operational conditions, one might want to 
compare settings that are sub-optimal for any particular test condition but 
that give better performance over a wide range of conditions.

>> Authors: The intent of the first recommendation is to make testers aware 
>> of which control parameters govern which behavior, so that they make 
>> apples-to-apples comparisons. 
To be more precise, we could change the text in section 4.3.1 as follows:
"1. Similar control

Re: [Gen-art] Gen-ART review of draft-ietf-aqm-eval-guidelines-11

2016-05-17 Thread Kuhn Nicolas
Thanks a lot for this review. We have proposed answers to the different 
points and hope that they are clearer. Where we believed it was needed, we 
have suggested text changes. If you think that this should result in the 
submission of a new ID, please let us know. 

Kind regards, 

The authors

-----Original Message-----
From: Ralph Droms (rdroms) [mailto:rdr...@cisco.com] 
Sent: Thursday, April 28, 2016 21:23
To: gen-art@ietf.org
Cc: draft-ietf-aqm-eval-guidelines@tools.ietf.org
Subject: Gen-ART review of draft-ietf-aqm-eval-guidelines-11

I am the assigned Gen-ART reviewer for this draft. The General Area Review Team 
(Gen-ART) reviews all IETF documents being processed by the IESG for the IETF 
Chair.  Please treat these comments just like any other last call comments.

For more information, please see the FAQ at

.

Document: draft-ietf-aqm-eval-guidelines-11
Reviewer: Ralph Droms
Review Date: 2016-04-28
IETF LC End Date: 2016-05-04
IESG Telechat date: 2016-05-19

Summary: This draft is on the right track but has open issues, described in the 
review.

In general, I think the document could be read, implemented and used to 
generate useful characterizations of AQM schemes.  However, the motivations for 
some of the measurements and scenarios seem weak to me, which might compromise 
the weight given to the conclusions drawn from the guidelines.

Major issues:

None.  However, the list of minor issues and nits, taken together, could be 
considered a major issue to be resolved before publication.

Minor issues:

#

I often react to the use of RFC 2119 language in an Informational document by 
asking is that language really necessary?  I'll ask the question here: in the 
context of this Informational document, which appears to be entirely advisory 
in providing guidelines, what does the use of RFC 2119 "requirements language" 
add to the meaning of the document.

>> Authors: Indeed, the use of RFC 2119 language is not mandatory for such an 
>> Informational document. However, using it enables us to give weight to the 
>> different parameterizations of the tests. Even though it is not mandatory, 
>> we believe that it eases the reading of the document for someone familiar 
>> with IETF wording.

#

Figure 1 is not clear to me.  Where are the physical links and interfaces?  Are 
there multiple physical senders and receivers or are "senders A" instantiated 
on a single host (does it make a difference)?  Are there static-sized buffers 
for each interface or do all the buffers share one memory space?

>> Authors: We acknowledge that Figure 1 is not very clear. We have 
>> voluntarily omitted details on the number of senders, receivers and 
>> traffic classes, since instantiation on a specific testbed would remove 
>> the generality of the figure and the described architecture. We believe 
>> that the text helps in reading the figure. Also, the rationale of the 
>> figure is to explain the notation rather than to go deeper into the 
>> topology, which is in any case very generic.

#

In section 3.1, is there a need to say something about the relative capacities 
of the various links and the rates at which the various flows generate traffic?

>> Authors: These capacities are described in a later section when needed; 
>> to remain high level and not focus on any applicability context (wi-fi, 
>> rural satellite access, fiber access, etc.), they are not specified for 
>> the whole document. The rates at which the flows generate traffic are 
>> specified for each scenario described later.

#

I would have trouble following the guidelines set out in section 4.3.1.  I can 
understand the need for consideration of the tunable control parameters when 
comparing different AQM schemes.  However, I don't know what "comparable" means 
for control parameters that are likely quite different between AQM schemes.  I 
also think one would want to compare optimal control settings for the different 
schemes, to compare best-case performance.  Or, for AQM schemes whose 
performance is highly dependent on operational conditions, one might want to 
compare settings that are sub-optimal for any particular test condition but 
that give better performance over a wide range of conditions.

>> Authors: The intent of the first recommendation is to make testers aware 
>> of which control parameters govern which behavior, so that they make 
>> apples-to-apples comparisons. 
To be more precise, we could change the text in section 4.3.1 as follows:
"1. Similar control parameters and implications: Testers should be aware of the 
control parameters of the different schemes that control similar behavior. 
Testers should also be aware of the input value ranges and corresponding 
implications. For example, consider two different schemes - (A) queue-length 
based AQM scheme, and (B) queueing-delay based scheme. A and B are likely to 
have differen

[Gen-art] Gen-ART review of draft-ietf-aqm-eval-guidelines-11

2016-04-28 Thread Ralph Droms (rdroms)
I am the assigned Gen-ART reviewer for this draft. The General Area
Review Team (Gen-ART) reviews all IETF documents being processed
by the IESG for the IETF Chair.  Please treat these comments just
like any other last call comments.

For more information, please see the FAQ at

.

Document: draft-ietf-aqm-eval-guidelines-11
Reviewer: Ralph Droms
Review Date: 2016-04-28
IETF LC End Date: 2016-05-04
IESG Telechat date: 2016-05-19

Summary: This draft is on the right track but has open issues, described in the 
review.

In general, I think the document could be read, implemented and used to 
generate useful characterizations of AQM schemes.  However, the motivations for 
some of the measurements and scenarios seem weak to me, which might compromise 
the weight given to the conclusions drawn from the guidelines.

Major issues:

None.  However, the list of minor issues and nits, taken together, could be 
considered a major issue to be resolved before publication.

Minor issues:

I often react to the use of RFC 2119 language in an Informational document by 
asking is that language really necessary?  I'll ask the question here: in the 
context of this Informational document, which appears to be entirely advisory 
in providing guidelines, what does the use of RFC 2119 "requirements language" 
add to the meaning of the document.

Figure 1 is not clear to me.  Where are the physical links and interfaces?  Are 
there multiple physical senders and receivers or are "senders A" instantiated 
on a single host (does it make a difference)?  Are there static-sized buffers 
for each interface or do all the buffers share one memory space?

In section 3.1, is there a need to say something about the relative capacities 
of the various links and the rates at which the various flows generate traffic?

I would have trouble following the guidelines set out in section 4.3.1.  I can 
understand the need for consideration of the tunable control parameters when 
comparing different AQM schemes.  However, I don't know what "comparable" means 
for control parameters that are likely quite different between AQM schemes.  I 
also think one would want to compare optimal control settings for the different 
schemes, to compare best-case performance.  Or, for AQM schemes whose 
performance is highly dependent on operational conditions, one might want to 
compare settings that are sub-optimal for any particular test condition but 
that give better performance over a wide range of conditions.

Section 4.4 seems to give advice to the AQM designer rather than describe 
guidelines for characterization.  Section 4.4 should either be rewritten to 
give guidelines for structuring measurements to account for varying packet 
sizes or the section should be elided.

In section 4.5, what is the motivation for giving the advice about ECN to AQM 
designers?  I can understand that ECN will affect the impact of AQM, but for 
this document I think the section should focus on measurement guidelines that 
account for that impact.

The specific topology in section 10 does not seem well-motivated to me.  Why is 
router R with no AQM included in the topology?  The choice of measurements is 
similarly not well-motivated.  Why would it not be of interest to run all the 
tests described earlier in the document?

Nits/editorial comments:

There are several instances of the word "advice" which should be replaced with 
"advise"; e.g., in section 2.3.

Last sentence of the abstract: I don't get the meaning of "precautionary 
characterizations of AQM schemes".  I recommend that the phrase be reworded.

Section 1, first paragraph: The last sentence doesn't follow the rest of the 
paragraph and I recommend that it be elided.

Section 1, third paragraph: This text is redundant with the text in the 
Glossary section:

   When speaking of a specific queue in this
   document, "buffer occupancy" refers to the amount of data (measured
   in bytes or packets) that are in the queue, and the "maximum buffer
   size" refers to the maximum buffer occupancy.

Section 1, third paragraph:

OLD:

   In real
   implementations of switches, a global memory is often shared between
   the available devices, and thus, the maximum buffer size may vary
   over the time.

NEW:

   In switches and routers, a global memory space is often shared
   between the available interfaces, and thus, the maximum buffer size
   for any given interface may vary over time.
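To make the shared-memory point concrete, a toy model of my own (not text from, or proposed for, the draft):

```python
# Toy model: N interfaces share one memory pool, so the effective maximum
# buffer size of any one interface at a given instant is the pool size
# minus whatever the other interfaces currently occupy.

def max_buffer_size(pool_bytes: int, other_occupancy_bytes: list[int]) -> int:
    """Bytes available to one interface right now, given a shared pool and
    the current occupancy of all the other interfaces."""
    return pool_bytes - sum(other_occupancy_bytes)

# 1 MB pool; two other interfaces hold 300 kB and 150 kB, leaving 550 kB.
print(max_buffer_size(1_000_000, [300_000, 150_000]))  # 550000
```

The value changes as the other interfaces' occupancy changes, which is exactly why "maximum buffer size" for an interface is not a constant here.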

Section 1, fifth paragraph, last sentence: Is this document just concerned with 
"deployability" or more generally with "applicability, performance and 
deployability"?

Section 1.1, first paragraph: Would it be helpful to qualify "goodput" as 
"goodput in individual flows", to contrast with "goodput at a router"?  If 
"goodput" is well-known in this community to be "flow goodput", no change is 
needed.

Section 1.1, second paragraph: What is "BDP", as in "BDP-sized buffer"?

Se