Re: [aqm] TCP ACK Suppression
Is this specialized upstream TCP ACK handling, particularly the prioritization, a general recommendation for all access technologies? Perhaps it should be, since otherwise upstream and downstream TCP flows interfere in a crazy queue oscillation that is typically misinterpreted by AQMs. Is this topic addressed in some RFC already?

Wolfram

> -----Original Message-----
> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Greg White
> Sent: Tuesday, 6 October 2015 18:35
> To: Mikael Abrahamsson; aqm@ietf.org
> Subject: Re: [aqm] TCP ACK Suppression
>
> Mikael,
>
> Specialized upstream TCP ACK handling (which can include both
> prioritization and suppression) is a recommended feature in the DOCSIS
> specification. The details of the implementation are left to the
> manufacturer, but I don't expect that it is actually done at dequeue
> (packet processing at dequeue is expensive in cable modems). Rather, I
> expect that devices identify ACKs at enqueue, and retain (separate from
> the main service-flow queue) a single ACK for each TCP session. Then,
> upon receiving a grant, the ACK queue is flushed first, followed by
> packets from the main queue.
>
> The CM is not permitted to issue bandwidth requests for more data than
> it has available to send, so bandwidth requests would need to already
> have ACK suppression taken into account. For this reason (and the
> above), I doubt that the CM would include suppressed ACKs in its queue
> depth and queuing latency estimation.
>
> AQM in DOCSIS also happens at enqueue. The spec is silent on whether the
> upstream TCP ACKs are subject to AQM packet drop, but it would be
> compliant for them (i.e. the one ACK per session) to be protected.
>
> -Greg
>
> On 10/6/15, 1:20 AM, "aqm on behalf of Mikael Abrahamsson" wrote:
>
> > Hi,
> >
> > After noticing that some TCP ACKs on my home DOCSIS connection were
> > not making it to their destination, and after some interaction with
> > cable Internet people, I found this:
> >
> > http://www.cedmagazine.com/article/2006/12/docsis-sub-throughput-optimization
> >
> > "TCP ACK Suppression (TAS)"
> >
> > "TCP ACK Suppression overcomes the TRGC limitation without actually
> > affecting the DOCSIS specification or involving the CMTS. It improves
> > downstream TCP transmissions by taking advantage of TRGC and only
> > sending the last ACK it receives when its data grant becomes active.
> > Thus, the number of TCP ACKs is fewer, but the number of bytes
> > acknowledged by each TCP ACK is increased."
> >
> > So the DOCSIS modem basically looks at all the ACKs in the queue at
> > the time of transmission (DOCSIS uses a "grant" system to tell a modem
> > when it's allowed to transmit on the shared medium), and then
> > basically deletes all the redundant ACKs (the ones that are just
> > increasing linearly without indicating packet drop) and keeps the
> > highest ACK only.
> >
> > Now, this kind of mechanism, how should it be treated when it comes to
> > AQM? This mechanism is basically done at de-queue, when a number of
> > packets are emptied from the queue at one time, which is then allowed
> > to fill up again until the next transmit opportunity arises.
> >
> > Or is this a non-problem because it's likely that any AQM employed
> > here would use the buffer fill right after a transmit opportunity has
> > finished (for those that consider buffer fill as a variable), which
> > would mean that most likely the TCP ACK purging had already occurred,
> > so this mechanism doesn't influence the AQM in any significant manner
> > anyway?
> >
> > Just as a data point from my home connection, I have 250/50 (down/up),
> > and when downloading at 250 megabit/s the upstream traffic is reduced
> > by approximately 20x, so instead of sending 10 megabit/s (or so) of
> > ACKs, I see approximately 500 kilobit/s of ACKs.
> >
> > --
> > Mikael Abrahamsson    email: swm...@swm.pp.se

___
aqm mailing list
aqm@ietf.org
https://www.ietf.org/mailman/listinfo/aqm
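The enqueue-time handling Greg describes — retaining a single ACK per TCP session, separate from the main service-flow queue, then flushing the ACK queue first upon a grant — can be sketched as follows. This is one plausible implementation, not the DOCSIS-mandated algorithm; the packet representation and field names are invented for the example.

```python
# Sketch of enqueue-time TCP ACK suppression as described above: pure
# ACKs go into a per-session slot that keeps only the newest ACK; on a
# transmit grant, the ACK slots are served before the main queue.
from collections import deque

class AckSuppressingQueue:
    def __init__(self):
        self.main = deque()   # main service-flow queue
        self.acks = {}        # TCP session id -> latest pure ACK for it

    def enqueue(self, pkt):
        if pkt.get("pure_ack"):
            # Overwrite any pending ACK for the same session: the older
            # ACK is thereby suppressed, and its bytes are acknowledged
            # by the newer (higher) one anyway.
            self.acks[pkt["session"]] = pkt
        else:
            self.main.append(pkt)

    def on_grant(self, n_packets):
        """Serve up to n_packets: ACK queue first, then the main queue."""
        out = []
        for session in list(self.acks):
            if len(out) >= n_packets:
                break
            out.append(self.acks.pop(session))
        while len(out) < n_packets and self.main:
            out.append(self.main.popleft())
        return out
```

Note how the suppressed ACKs never sit in the main queue, which is consistent with Greg's expectation that they would not be counted in the queue depth or queuing latency estimation.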
Re: [aqm] PIE vs. RED
Delay-based RED would still associate latency with drop probability: the drop probability only goes up when queueing latency goes up, so a higher drop probability can only be achieved via higher queueing latency. Following Bob's statements at the last IETF, this could even be a desirable feature: let the queue grow a little bit further to avoid drop probabilities above 20%, or to prevent the AQM's attempts to lower CWND below 2 at small RTTs. It is just that AQM is an extremely multidimensional problem space...

Wolfram
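Wolfram's point — that with delay-based RED a higher drop probability is only reachable through higher queueing latency — can be made concrete with a small sketch. This is a hypothetical delay-driven RED variant; the thresholds, EWMA weight, and maximum probability are illustrative assumptions, not values from any draft.

```python
# Hypothetical delay-based RED: classic RED's linear drop-probability
# ramp, driven by (smoothed) queueing latency instead of queue length.
import random

MIN_DELAY = 0.005   # 5 ms: below this, never drop
MAX_DELAY = 0.050   # 50 ms: at or above this, always drop
MAX_P     = 0.10    # drop probability at the top of the ramp
W         = 0.002   # EWMA weight, as in classic RED

avg_delay = 0.0

def on_enqueue(sojourn_estimate: float) -> bool:
    """Return True if the arriving packet should be dropped/marked."""
    global avg_delay
    avg_delay += W * (sojourn_estimate - avg_delay)  # smoothed latency
    if avg_delay < MIN_DELAY:
        return False
    if avg_delay >= MAX_DELAY:
        return True
    # The coupling Wolfram describes: p rises linearly with latency, so
    # reaching a larger p requires letting the queueing delay grow.
    p = MAX_P * (avg_delay - MIN_DELAY) / (MAX_DELAY - MIN_DELAY)
    return random.random() < p
```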
Re: [aqm] WGLC on draft-ietf-aqm-eval-guidelines
Hi all,

I read the latest version of the draft, and I found it useful. The draft addresses a comprehensive range of topics for AQM characterization. What I am not so happy with is the description of the corresponding experiments. Some critical points of my first review (https://mailarchive.ietf.org/arch/msg/aqm/OwPTGmXLpEmCChpgE7ZFqFnnT64) still persist. I would like to regard these experiments as initial proposals (which are good to have) that might undergo substantial revision in practice later on. In general I have the feeling that the combinatorial number of mandatory experiments is close to infinity. Not only do I doubt this will ever be done; but who is subsequently going to judge the huge amount of results?

Here are some minor comments:

Section 2.7 defines goodput/delay scatter plots in two different ways: one with reference to [HAYE2013], the other with reference to [WINS2014]. I would prefer to have only one definition, namely [WINS2014].
- [HAYE2013] depends on a parameter variation across a certain range (e.g. traffic load, or buffer size) that is not defined in most of our experiments.
- [WINS2014] depends only on randomized replication of otherwise identical experiments. This should be applicable to any of the evaluation experiments. (In fact, it is unavoidable anyway.)

Section 4.3: The term "long-lived non-application-limited UDP" sounds somewhat like "infinite bandwidth". What the authors probably mean is a "long-lived UDP flow from an unresponsive application", to make it clear that no application-layer congestion control is present, as there is in e.g. NFS.

Section 2.1: Formula on flow completion time: mismatch of dimensions (Byte vs. Mbps).

Wolfram Lautenschlaeger

> -----Original Message-----
> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Wesley Eddy
> Sent: Monday, 10 August 2015 15:44
> To: aqm@ietf.org
> Subject: [aqm] WGLC on draft-ietf-aqm-eval-guidelines
>
> As chairs, Richard and I would like to start a 2-week working group last
> call on the AQM characterization guidelines:
>
> https://datatracker.ietf.org/doc/draft-ietf-aqm-eval-guidelines/
>
> Please make a review of this, and send comments to the list or chairs.
> Any comments that you might have will be useful to us, even if it's just
> to say that you've read it and have no other comments.
>
> Thanks!
>
> --
> Wes Eddy
> MTI Systems
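Wolfram's preferred [WINS2014]-style definition — one goodput/delay point per randomized replication of an otherwise identical experiment — could be computed along these lines. This is a sketch; the choice of the mean as the per-run summary statistic, the function names, and all numbers are my assumptions.

```python
# One (goodput, delay) point per randomized replication: the per-run
# samples are collapsed to their means and the points plotted as a
# 2-D scatter, so run-to-run spread is visible directly.
from statistics import mean

def scatter_points(replications):
    """replications: list of (goodput_samples, delay_samples) per run.
    Returns one (mean_goodput, mean_delay) point per replication."""
    return [(mean(g), mean(d)) for g, d in replications]

# Example: three replications of the same experiment (made-up numbers,
# goodput in Mbps, delay in seconds).
runs = [
    ([94.1, 95.0, 93.8], [0.012, 0.011, 0.013]),
    ([92.7, 94.2, 93.5], [0.015, 0.014, 0.016]),
    ([95.3, 94.8, 95.1], [0.010, 0.011, 0.009]),
]
points = scatter_points(runs)   # plot these as the scatter
```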
Re: [aqm] AQM hurts utilization with a single TCP stream?
In steady state and single-flow operation, Reno requires a full BDP of queue for 100% link utilization; CUBIC requires 40% of the BDP. This is independent of whether an AQM is present or not. A tail-drop queue of the same size as CoDel's drop level performs exactly the same way in these circumstances. (I did the experiments.)

The benefit of AQM comes into play with many TCP flows in parallel. Then the queue size requirement goes down to something like BDP/sqrt(N) or even BDP/N (with N the number of flows). AQMs can earn this gain; tail drop cannot, it stays at BDP. For CoDel + CUBIC @ 100 ms RTT that means its default 5 ms target does not hurt if more than 5 to 10 flows are present. (Taking into account that it drops slightly above the target.)

Wolfram

> For throughput to be unaffected with a single TCP Reno stream, the
> buffer must grow to contain half the BDP, since the window will be
> halved on loss. With TCP CUBIC the buffer must contain 20% of the BDP.
> CoDel will make its first drop when the sojourn time remains above 5 ms,
> so with TCP Reno, if the RTT is 10 ms, then utilization will be hurt.
> With TCP CUBIC, if the RTT is 25 ms, then utilization will be hurt. Is
> this correct?
>
> Simon
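The figures Wolfram quotes follow from the multiplicative-decrease factor beta of the congestion control (0.5 for Reno, about 0.7 for CUBIC): for the link to stay busy after a loss, the buffer must hold (1 - beta)/beta times the BDP. A sketch of that arithmetic (the 1/sqrt(N) rule for aggregates is taken from the text above):

```python
import math

def required_buffer_bdp(beta: float) -> float:
    """Buffer (as a fraction of the BDP) needed for 100% utilization by
    a single long-lived flow: at a loss, cwnd shrinks from BDP + B to
    beta * (BDP + B), which must still cover the BDP, so
    B >= (1 - beta) / beta * BDP."""
    return (1.0 - beta) / beta

def aggregate_buffer_bdp(beta: float, n_flows: int) -> float:
    """With N desynchronized flows the individual rate variations
    partially cancel, so the requirement falls roughly like 1/sqrt(N)."""
    return required_buffer_bdp(beta) / math.sqrt(n_flows)

reno  = required_buffer_bdp(0.5)   # 1.0: the full BDP Wolfram mentions
cubic = required_buffer_bdp(0.7)   # ~0.43: roughly the "40% BDP" figure
```

Simon's "half the BDP" and "20%" figures come from applying the decrease factor to the BDP alone rather than to BDP + buffer, which is why Wolfram's numbers are larger.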
Re: [aqm] Comments on draft-ietf-aqm-eval-guidelines-01?
Hi,

Bi-directional traffic is mentioned in Section 3.1 (Topology) and Section 4.5 (Traffic Mix), but not further detailed. I suggest adding at least one scenario in Section 4.5 where both directions are congested at the same time, e.g. two or more counter-propagating bulk TCP transfers. Only in this scenario do the returning ACK packets undergo appreciable queuing delay (and jitter) from the opposite-direction queue. If I'm right, Dave Taht has repeatedly mentioned that scenario as critical. And he is right; I tried it out.

Wolfram

> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Naeem Khademi
> Sent: Friday, 6 March 2015 16:18
> To: aqm@ietf.org
> Subject: [aqm] Comments on draft-ietf-aqm-eval-guidelines-01?
>
> Hi all,
>
> Any comments on the newly submitted update,
> draft-ietf-aqm-eval-guidelines-01, are welcome. In the new version, we
> have tried to address the issues brought up on the ML as well as the
> feedback we received at IETF-91, and have tried to incorporate them all.
> We have also clarified several issues in the text, making it more
> straightforward and less ambiguous with regards to the guidelines and
> scenarios. We would like to have this document discussed on the ML,
> preferably before the 9th March cut-off date as well as during the
> period prior to IETF-92.
>
> Cheers,
> Authors
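The effect Wolfram describes — returning ACKs picking up the opposite-direction queue's delay — shows up directly in the effective RTT that the forward transfer sees. A back-of-the-envelope sketch with illustrative (not measured) numbers:

```python
# With both directions congested, the forward transfer's effective RTT
# includes BOTH queues, since its ACKs wait in the reverse queue too.
def effective_rtt(base_rtt, fwd_queue_delay, rev_queue_delay):
    return base_rtt + fwd_queue_delay + rev_queue_delay

one_way = effective_rtt(0.020, 0.015, 0.000)  # only forward congested
two_way = effective_rtt(0.020, 0.015, 0.015)  # counter-propagating bulk TCP

# TCP throughput scales roughly like 1/RTT, so in this example the
# reverse-direction queue alone costs about 30% of the throughput:
slowdown = one_way / two_way
```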
Re: [aqm] review draft-kuhn-aqm-eval-guidelines-02
Dear Nicolas,

No, sorry, but I cannot agree with the proposed modifications. My central point is the frequent repetition of "Graphs described in Section 2.6 MUST be generated." For such a central role, Section 2.6 should be carefully evaluated itself, but I'm not convinced this was the case. E.g. measuring the drop probability every 1 second (as you propose) looks somewhat strange to me when I think of my own experiments, where I encounter drop bursts every 10 seconds and silence in between (which is a quite common scenario). Could you please point us to some publication where your recent modification of Section 2.6 has been practiced and useful conclusions could be drawn?

Kind regards,
Wolfram

> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Nicolas KUHN
> Sent: Thursday, 28 August 2014 17:40
> To: LAUTENSCHLAEGER, Wolfram (Wolfram)
> Cc: aqm@ietf.org
> Subject: Re: [aqm] review draft-kuhn-aqm-eval-guidelines-02
>
> Dear Wolfram,
>
> Thanks a lot for your useful feedback. Please have a look at my answers
> inline.
>
> Kind regards,
> Nicolas
>
> On Aug 12, 2014, at 10:04 AM, LAUTENSCHLAEGER, Wolfram (Wolfram)
> wolfram.lautenschlae...@alcatel-lucent.com wrote:
>
> > I agree with most of the suggested features that must be tested for
> > AQM evaluation. But I have some doubts whether the proposed
> > experiments/metrics are really applicable and able to reveal the
> > required features. My comments in detail:
> >
> > Section 2.1: Flow completion time: Applicable only for equally sized
> > finite flows. Not really meaningful for variable-sized flows, e.g. the
> > Tmix trace. Not applicable to infinite flows.
>
> This comment points out that the metrics may not suit every type of
> traffic detailed in the rest of the draft. As, in this Section 2, we
> just list the metrics that can be measured, I propose to add the
> following paragraph at the beginning of Section 2:
>
> "The metrics listed in this section may not suit every type of traffic
> detailed in the rest of this document: not all of the following metrics
> need to be measured. For each scenario, the metrics should be selected
> from the following list, depending on the traffic considered."
>
> > Section 2.2: Packet loss:
> > - Long-term loss probability is meaningful only in a steady-state
> > scenario. And it characterizes the TCP flavor, not the AQM. (Loss
> > probability remains the same, whatever you do with AQM, as long as you
> > reach roughly the same throughput.)
>
> The long-term loss probability might not be of interest for all the
> scenarios. In order to make the presentation of this metric more
> general, we could evaluate the packet drop probability over the time of
> the experiment. I propose to replace the "long term packet loss
> probability" by:
>
> - the packet loss probability: this metric should be measured
> frequently during the experiment, as the long-term loss probability is
> of interest for steady-state scenarios only;
>
> > - Interval between consecutive losses: If the losses are well spaced,
> > it somehow resembles the loss probability. If not well spaced
> > (bursty), what to record?
>
> To illustrate the interval between losses, we could propose to measure
> the time between two losses and provide the minimum, 5%, median, 95%,
> and maximum values of the set of interval times. I propose to replace
> the "interval between consecutive losses" by:
>
> - the interval between consecutive losses: the time between two losses
> should be measured. From the set of interval times, the tester should
> present the median value, the minimum and maximum values, and the 10th
> and 90th percentiles.
>
> > - Packet loss patterns: The metric is undefined, except for the
> > special case of packet loss synchronization (next section, 2.3). It
> > is, indeed, highly interesting qualitatively in non-stationary cases,
> > e.g. an abrupt capacity drop. But how to quantify it?
>
> I agree that it will be hard to quantify this metric. The idea behind
> the packet loss pattern is to have a description of the loss pattern,
> which might be obtained by considering the loss probability and the
> interval between consecutive losses. As a result, we will remove the
> packet loss patterns in the next version of the document.
>
> > Section 2.4: Goodput:
> > - Meaningful only with steady-state occupancy by a number of more or
> > less greedy TCP flows. Here it shows to which extent the AQM is able
> > to keep a link close to 100% utilization.
> > - With a trace of variable-sized flows (Tmix) the goodput resembles
> > the traffic offered (if in total below the link capacity; not
> > overloaded).
> > - The overload scenario does not reach steady state. Goodput in
> > overload cases is highly dependent on things other than AQM, e.g.
> > test duration or shuffling of the trace.
>
> Indeed, some things need to be clarified. I propose to add the
> following paragraph:
>
> "The measurement of the goodput lets the tester evaluate to which
> extent the AQM is able to keep a high link utilization. This metric
> should be obtained frequently during the experiment: the long term
> goodput makes
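The percentile summary proposed above for the interval-between-losses metric could be computed as follows. This is a sketch; the function and field names are my own.

```python
# Summary statistics for the interval-between-losses metric: median,
# min, max, and the 10th/90th percentiles of the times between
# consecutive packet losses. Pure-stdlib (Python 3.8+ for quantiles).
from statistics import median, quantiles

def loss_interval_summary(loss_times):
    """loss_times: sorted timestamps (seconds) at which losses occurred.
    Returns None if fewer than two intervals (three losses) exist."""
    intervals = [b - a for a, b in zip(loss_times, loss_times[1:])]
    if len(intervals) < 2:
        return None
    deciles = quantiles(intervals, n=10, method="inclusive")
    return {
        "min": min(intervals),
        "p10": deciles[0],       # 10th percentile
        "median": median(intervals),
        "p90": deciles[8],       # 90th percentile
        "max": max(intervals),
    }
```

For bursty loss patterns (the case Wolfram raises: drop bursts every 10 seconds with silence in between), this summary is strongly bimodal — many tiny intra-burst intervals and a few long inter-burst gaps — which is exactly why the percentiles alone may not characterize the pattern well.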
[aqm] WG: New Version Notification for draft-lauten-aqm-gsp-01.txt
Dear all,

I posted an updated draft on Global Synchronization Protection (GSP). Initially focused on global synchronization alone, it turns out that GSP essentially performs at the same level as the other AQMs. Major changes are with respect to:
- a more detailed description of the parameter adaptation algorithm
- an additional section on delay-based operation

Comments are highly welcome.

Thanks,
Wolfram Lautenschlaeger

> -----Original Message-----
> From: internet-dra...@ietf.org [mailto:internet-dra...@ietf.org]
> Sent: Friday, 4 July 2014 14:12
> To: LAUTENSCHLAEGER, Wolfram (Wolfram)
> Subject: New Version Notification for draft-lauten-aqm-gsp-01.txt
>
> A new version of I-D, draft-lauten-aqm-gsp-01.txt, has been successfully
> submitted by Wolfram Lautenschlaeger and posted to the IETF repository.
>
> Name: draft-lauten-aqm-gsp
> Revision: 01
> Title: Global Synchronization Protection for Packet Queues
> Document date: 2014-07-04
> Group: Individual Submission
> Pages: 10
> URL: http://www.ietf.org/internet-drafts/draft-lauten-aqm-gsp-01.txt
> Status: https://datatracker.ietf.org/doc/draft-lauten-aqm-gsp/
> Htmlized: http://tools.ietf.org/html/draft-lauten-aqm-gsp-01
> Diff: http://www.ietf.org/rfcdiff?url2=draft-lauten-aqm-gsp-01
>
> Abstract:
> The congestion avoidance processes of several TCP flows sharing
> transmission capacity tend to be synchronized with each other, so that
> the rate variations of the individual flows do not compensate. On the
> contrary, they accumulate into large variations of the whole aggregate.
> The effect is known as global synchronization. Large queuing buffer
> demand and large latency and jitter are the consequences. Global
> Synchronization Protection (GSP) is an extension of regular tail-drop
> packet queuing schemes that prevents global synchronization. For large
> traffic aggregates the de-correlation between the individual flow
> variations reduces buffer demand and packet sojourn time by an order of
> magnitude and more. Even though quite simple, the solution has a
> theoretical background and is not heuristic. It has been tested with a
> Linux kernel implementation and shows performance equivalent to other
> relevant AQM schemes.
>
> Please note that it may take a couple of minutes from the time of
> submission until the htmlized version and diff are available at
> tools.ietf.org.
>
> The IETF Secretariat
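Based on the abstract — GSP as an extension of regular tail drop — the core mechanism might look roughly like the sketch below. This is a hedged reading of the draft, not its normative algorithm: the threshold/hold-off structure is assumed, and the draft's parameter adaptation and delay-based operation are omitted.

```python
# Hedged sketch of a GSP-style queue: ordinary tail drop, extended with
# an early-drop threshold plus a hold-off timeout so that at most one
# early drop occurs per timeout interval. A single, well-spaced loss
# signal is enough to back off one flow without synchronizing them all.
import time
from collections import deque

class GspQueue:
    def __init__(self, limit=1000, threshold=100, timeout=0.1):
        self.q = deque()
        self.limit = limit          # hard buffer limit (plain tail drop)
        self.threshold = threshold  # early-drop queue threshold (packets)
        self.timeout = timeout      # minimum spacing of early drops (s)
        self.last_drop = float("-inf")

    def enqueue(self, pkt, now=None):
        now = time.monotonic() if now is None else now
        if len(self.q) >= self.limit:
            return False            # buffer full: ordinary tail drop
        if len(self.q) >= self.threshold and now - self.last_drop > self.timeout:
            self.last_drop = now    # one early drop, then hold off
            return False
        self.q.append(pkt)
        return True
```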
Re: [aqm] references on global sync lock-out
I published a paper that reveals the root cause of global synchronization: "A Deterministic TCP Bandwidth Sharing Model", http://arxiv.org/pdf/1404.4173v1

The findings in the paper are the theoretical basis for the GSP AQM proposal in http://tools.ietf.org/html/draft-lauten-aqm-gsp-00

--
Wolfram Lautenschlaeger
Bell Labs

> -----Original Message-----
> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Wesley Eddy
> Sent: Tuesday, 24 June 2014 20:28
> To: aqm@ietf.org
> Subject: [aqm] references on global sync lock-out
>
> Per the discussion in the telecon today about references for global
> synchronization and lock-out, I think some excellent classic ones are:
>
> Global synchronization:
>
> Zhang & Clark, "Oscillating Behavior of Network Traffic: A Case Study
> Simulation", 1990
> http://groups.csail.mit.edu/ana/Publications/Zhang-DDC-Oscillating-Behavior-of-Network-Traffic-1990.pdf
>
> Floyd & Jacobson, "The Synchronization of Periodic Routing Messages",
> 1994
> http://ee.lbl.gov/papers/sync_94.pdf
>
> Lock-out:
>
> Floyd & Jacobson, "On Traffic Phase Effects in Packet-Switched
> Gateways", 1992
> http://www.icir.org/floyd/papers/phase.pdf
>
> --
> Wes Eddy
> MTI Systems
Re: [aqm] last call results on draft-ietf-aqm-recommendation
I strongly agree with Bob's concerns:
- Congestion collapse cannot be prevented by AQM.
- Flow fairness is not a topic for AQM.
- AQM should be possible without any knowledge of particular flows. As such it is clearly an L2 mechanism (which does not mean that it cannot be applied in L3 boxes).

Of course, in technical implementations an AQM could be combined with fairness mechanisms, as the fq_X proposals do. (I assume that in such a combination, AQM and scheduling essentially operate on the same queue or group of queues.) But for a clear understanding of what a particular AQM does, I would prefer to see the AQM algorithm in isolation.

AQM should be applied to every buffer that can be overloaded and where TCP is involved, i.e. where the ingress rate can be higher than the egress rate and no backpressure is in place. In middleboxes with ingress == egress rates this is not the case, unless the processing capacity is insufficient.

In my opinion AQM applies to multiplexed links, where several flows share the same transmission capacity. AQM cannot do a lot for a single flow on one link. In the capacity-sharing case, the effects of synchronization and burstiness of drops are strongly tied together. Removing the one also removes the other, and this is the most that AQM can deliver.

Wolfram

> -----Original Message-----
> From: aqm [mailto:aqm-boun...@ietf.org] On Behalf Of Bob Briscoe
> Sent: Thursday, 15 May 2014 16:32
> To: Wesley Eddy; Fred Baker; Gorry Fairhurst
> Cc: aqm@ietf.org
> Subject: Re: [aqm] last call results on draft-ietf-aqm-recommendation
>
> Wes,
>
> Thx. In case I don't get time to read, then type, I'll shoot my mouth
> off anyway... Sorry this is a bit rushed and dismissive. That's not my
> intention - I'm very supportive of the recommendations that have now
> been carefully and nicely worded. I will give more detailed comments,
> but these are the MSBs.
>
> 1) My main concern: The two halves of the document seemed nearly
> unrelated (at least in draft-03, and it looks like draft-04 hasn't
> changed this). The first half (Sections 1, 2, 3) framed the problem as
> primarily about preventing congestion collapse and preventing
> flow-unfairness, while the recommendations (Section 4) were about AQM.
> The irony of this sentence is deliberate. I had few concerns about the
> recommendations text (Section 4), which we've all been focusing on,
> including me. But I hadn't realised the introductory text was so out of
> kilter with the recommendations.
>
> Sections 1, 2 and 3 seemed to focus on problems that I wouldn't even
> address with AQM (from a quick scan it looks like these sections haven't
> changed in this respect for draft-04):
>
> a) Congestion collapse: An AQM cannot prevent congestion collapse -
> that is the job of congestion control and, failing that, of policing.
> Even isolation (e.g. flow separation) doesn't prevent congestion
> collapse, because collapse is caused by the load from new flow arrivals
> exceeding the ability of the system to serve and clear load from
> existing flows, most likely because many existing flows are not
> sufficiently responsive to congestion, so retransmissions dominate over
> goodput (even if each unresponsive flow is in an isolated silo). Flow
> separation doesn't help when the problem is too many flows.
>
> b) Flow fairness (or user-fairness etc.): this is a policy issue that
> needs to be built in a modular way, for optional addition to AQM.
> Therefore an AQM must also work well without fairness mechanisms. This
> conclusion was actually reached in the early sections, but it's not
> carried forward into the recommendations in Section 4.
>
> If the conclusion is that AQM isn't intended to solve these two
> problems, we need to clearly say so. Most people who need to read this
> will be confused, so we shouldn't confuse them further!
>
> 2) There's no statement of scope. Can we really make all these
> recommendations irrespective of whether we're talking about high
> stat-mux core links, low stat-mux access links, low stat-mux data
> centre links, or host buffers? Are there different recommendations for
> edge links (on trust boundaries) vs. interior links? Does AQM apply at
> L2 as well as L3 (of course it does)? Which recommendations are
> different for each layer? Does AQM apply for middleboxes (firewalls,
> NATs etc.) as much as for switches and routers? If not, why not (only
> need AQM if there can be queuing - perhaps due to processor overload)?
> To illustrate the problem, our goal should be AQM in every buffer. But
> we really don't need, and shouldn't have, policing or isolation in
> every buffer.
>
> 4) Because Sections 1, 2, 3 focused heavily on the above two problems
> (collapse and fairness) that can't really be addressed by AQM, these
> sections also gave insufficient attention to problems that AQM does
> address (and should address), e.g.:
> * synchronisation and lock-out were both described as vaguely the same
> problem,
> * synchronisation wasn't explained,
> * lock-out wasn't explained but
Re: [aqm] [AQM Evaluation Guidelines]
See comments inline.

> > Shouldn't we concentrate more on what we expect from a good AQM?
>
> I'd love to...
>
> > The only thing we might expect from an AQM is to prevent greedy TCP
> > sources from drawing buffers permanently towards full state,
>
> I'm not the least bit sure that's possible. :^( However, it might be
> possible to separate the queue(s) of greedy TCP flows from other
> queues. There's a lot of room for discussion about how to do that...

You are talking about scheduling, not AQM. I agree with you that perfect scheduling everywhere could obsolete any AQM.

> > ... instead of the much better almost-empty state,
>
> Consider the class of real-time flows, which hardly ever want to stand
> in line for more than a millisecond or two. (They can generate
> redundancy to compensate for packet loss, or ideally have ECN-marked
> packets get through quickly.)
>
> > while maintaining full link utilization.
>
> That's a concern of _some_ traffic, but not real-time flows.

That's a concern for _all_ traffic if _some_ greedy traffic is involved, no matter if real-time flows are there, too.

> > In all other situations, I'm convinced, AQM cannot improve a lot.
>
> "All other situations" covers a lot of ground... :^( And we may not all
> mean the same thing by "greedy TCP". Standard TCPs probe for additional
> capacity if they have anything to send. I consider that enough to call
> them greedy; but I'm sure others don't agree.

I mean by "greedy TCP" a flow that is probing for additional capacity _at_this_ queue. Many TCP flows are bandwidth-restricted elsewhere (up- or downstream), and as such not greedy to my understanding.

> > But at least, in these cases, it should not make things worse.
>
> "Make things worse" is perhaps insufficiently defined. Any delay at all
> makes things worse for real-time traffic. Failure to discover unused
> capacity makes things worse for TCP.

That's what I meant.

> > Of course the greedy TCP case can be overlaid by others (unresponsive
> > flows, application-paced flows, a renewal process of short-lived
> > flows, synchronized start of flows, etc. etc.),
>
> These cover quite different situations... :^( Unresponsive flows are a
> problem iff we buffer too much/many of them.

Unresponsive flows are a problem if they absorb the AQM's drop/mark signals, so that the responsive flows miss these and continue to grow their sending rates.

> Application-paced flows are a problem only if they're probing for
> increased capacity. Renewal flows are a problem if they guess badly at
> remaining capacity.

You missed my point: A renewal process of TCP flows is not a problem for AQM. But it is a problem for AQM _evaluation_. The 100% capacity utilization cannot be reached, no matter how good or bad the AQM is. The capacity utilization is solely defined by the traffic definition and not by the AQM. Nevertheless we have such traffic in reality. If we want to evaluate this, we first should be clear what we expect.

> > and we should take this into account. But we should not expect a lot
> > if those others dominate the scenario.
>
> I speculate that none of these would be a problem if we limited the
> extent to which we buffer them.
>
> --
> John Leslie j...@jlc.net

Wolfram
[aqm] new I-D on Global Synchronization Protection for Packet Queues
Dear all,

I submitted an I-D with a proposal for a particularly lightweight AQM scheme that is solely focused on the suppression of global synchronization. Despite its simplicity, it is able to reduce the queue size, delay, and jitter by an order of magnitude and more if many flows are competing for a bottleneck. If possible, I would like to present the theoretical rationale behind the proposal, and experimental results with a Linux qdisc implementation, at the AQM WG meeting at IETF 89 in London.

URL: https://datatracker.ietf.org/doc/draft-lauten-aqm-gsp/

Wolfram Lautenschlaeger