> On May 18, 2016, at 9:29 PM, Jari Arkko <jari.ar...@piuha.net> wrote:
>
> Thanks for your review, Ralph!
You're welcome. I'm glad to hear you found the review valuable. Responses inline...

> I do think some of the points you raised need to be addressed. Inline:
>
>> #####
>>
>> I often react to the use of RFC 2119 language in an Informational document
>> by asking: is that language really necessary? I'll ask the question here: in
>> the context of this Informational document, which appears to be entirely
>> advisory in providing guidelines, what does the use of RFC 2119
>> "requirements language" add to the meaning of the document?
>>
>>>> Authors: Indeed, the use of RFC 2119 language is not mandatory for such an
>>>> Informational document. However, using it enables us to introduce weight in
>>>> the different parameterizations of the tests. Even though it is not
>>>> mandatory, we believe that it eases the reading of the document for
>>>> someone familiar with the IETF wording.
>
> I think that's right.

OK.

>> #####
>>
>> Figure 1 is not clear to me. Where are the physical links and interfaces?
>> Are there multiple physical senders and receivers, or are "senders A"
>> instantiated on a single host (does it make a difference)? Are there
>> static-sized buffers for each interface, or do all the buffers share one
>> memory space?
>>
>>>> Authors: We acknowledge that Figure 1 is not very clear. We have
>>>> voluntarily omitted details on the number of senders, receivers, and
>>>> traffic classes, since instantiation on a specific testbed would remove
>>>> the generality of the figure and the described architecture. We believe
>>>> that the text helps in reading the figure. Also, the rationale of this
>>>> figure is to explain the notation rather than to go deeper into the
>>>> topology, which is in any case very generic.
>
> My opinion is that Figure 1 was very hard to read, even with reading the
> text. I'd like to see some improvement in either the text or the figure.
I can understand that the figure might be an illustrative abstraction, but I
still think it would be helpful to have more detail in the text and some
restructuring of the figure.

>> #####
>>
>> In section 3.1, is there a need to say something about the relative
>> capacities of the various links and the rates at which the various flows
>> generate traffic?
>>
>>>> Authors: These capacities are described in a later section when needed,
>>>> and to remain high level and not focus on any applicability context
>>>> (Wi-Fi, rural satellite access, fiber access, etc.), they are not
>>>> specified for the whole document. The rates at which the flows generate
>>>> traffic are specified for each further described scenario.
>
> OK

OK.

>> #####
>>
>> I would have trouble following the guidelines set out in section 4.3.1. I
>> can understand the need for consideration of the tunable control parameters
>> when comparing different AQM schemes. However, I don't know what
>> "comparable" means for control parameters that are likely quite different
>> between AQM schemes. I also think one would want to compare optimal control
>> settings for the different schemes, to compare best-case performance. Or,
>> for AQM schemes whose performance is highly dependent on operational
>> conditions, one might want to compare settings that are sub-optimal for any
>> particular test condition but that give better performance over a wide range
>> of conditions.
>>
>>>> Authors: The intent of the first recommendation is to make testers
>>>> aware of which control points control which behavior, and to make them
>>>> conscious of making apples-to-apples comparisons.
>> To make this more precise, we could change the text in section 4.3.1 as
>> follows:
>> "1. Similar control parameters and implications: Testers should be aware of
>> the control parameters of the different schemes that control similar
>> behavior. Testers should also be aware of the input value ranges and
>> corresponding implications.
>> For example, consider two different schemes:
>> (A) a queue-length based AQM scheme, and (B) a queueing-delay based scheme.
>> A and B are likely to have different kinds of control inputs to control the
>> target delay - target queue length in A vs. target queueing delay in B, for
>> example. Setting parameter values such as 100 MB for A vs. 10 ms for B will
>> have different implications depending on the evaluation context. Such
>> context-dependent implications must be considered before drawing conclusions
>> on performance comparisons. Also, it would be preferable if an AQM proposal
>> listed such parameters and discussed how each relates to network
>> characteristics such as capacity, average RTT, etc."
>
> OK for me

I think the suggested text is OK, too.

>> #####
>>
>> Section 4.4 seems to give advice to the AQM designer rather than describe
>> guidelines for characterization. Section 4.4 should either be rewritten to
>> give guidelines for structuring measurements to account for varying packet
>> sizes, or the section should be elided.
>>
>>>> Authors: We could modify the text of the 2nd paragraph of 4.4, if you
>>>> think that this clarifies the issue:
>> "An AQM scheme SHOULD adhere to the recommendations outlined in [RFC7141],
>> and SHOULD NOT provide undue advantage to flows with smaller packets
>> [RFC7567]. In order to evaluate whether an AQM scheme is biased towards
>> flows with smaller packets, sender A in Figure 1 can be instantiated with
>> two long-standing TCP flows with different packet sizes - 500 bytes vs.
>> 1500 bytes, respectively - and metrics such as goodput and loss rate can
>> be compared for these two flows."
>
> OK

OK with me, too.

>> #####
>>
>> In section 4.5, what is the motivation for giving the advice about ECN to
>> AQM designers? I can understand that ECN will affect the impact of AQM,
>> but for this document I think the section should focus on measurement
>> guidelines that account for that impact.
>> >>>> Authors: The scope of introducing this part is mostly related to remain >>>> the tester that if their AQM supports ECN, this must be presented, since >>>> it could be seen as an important move for the deployment of ECN > > I tend to agree with Ralph on this. > > I’m not sure why we need this for assessment or characterization. The purpose of the section isn't at all clear to me. >> ##### >> >> The specific topology in section 10 does not seem well-motivated to me. Why >> is router R with no AQM included in the topology? The choice of >> measurements is similarly not well-motivated. Why would it not be of >> interest to run all the tests described earlier in the document? >> >>>> Authors: It is worth pointing out that this specific topology is just a >>>> suggestion. The router without AQM has been included to remind the tester >>>> that the positioning of multiple AQMs have to be looked at with routers >>>> that do not have AQMs in mind. Since the placement of AQMs is mostly >>>> expected to be on "the last mile", we believe that evaluating all the >>>> tests described earlier in the document may not be needed. However, in >>>> case of multiple AQMs, their interactions should be considered. > > The text wasn’t clear in my mind that this is just a suggestion and/or an > example. Some clarification here would be helpful. - Ralph > > Jari >
_______________________________________________
Gen-art mailing list
Gen-art@ietf.org
https://www.ietf.org/mailman/listinfo/gen-art