> On 28. jul. 2015, at 18.22, Sebastian Moeller <moell...@gmx.de> wrote:
>
> Hi Simon,
>
> On Jul 28, 2015, at 16:31 , Simon Barber <si...@superduper.net> wrote:
>
>> The issue is that CoDel tries to keep the delay low, and will start dropping
>> when sojourn time stays above 5 ms for 100 ms. For longer-RTT links, more
>> delay is necessary to avoid underutilizing the link. This is due to the
>> multiplicative decrease - it's worst with Reno, where the halving of cwnd
>> means that you need a full BDP of data in the buffer to avoid the link going
>> idle when cwnd is halved. With longer RTTs this means that more delay than
>> CoDel allows is required to avoid a throughput hit. The worst case happens
>> when a single flow is being controlled, but that can be a common situation.
>> My proposal is to sense when this worst case occurs and have CoDel's target
>> value adjust automatically, which would mitigate most of the downside.
>
> According to theory you should adapt the interval, if anything, and then set
> the target to 5-10% of that interval. See
> https://datatracker.ietf.org/doc/draft-ietf-aqm-codel/?include_text=1
> sections 4.3 and 4.4. Now, that could all be off. The upshot is that
> increasing target in response to long RTTs will again sacrifice latency for
> bandwidth - pretty much the trade-off whose avoidance is CoDel's claim to
> fame ;)
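To put a number on the worst case Simon describes: a single Reno flow needs a full BDP of buffering to survive a cwnd halving, and a BDP of queue adds roughly one base RTT of queueing delay. A minimal sketch (all names and parameters here are illustrative, not from any real implementation) comparing that to CoDel's default 5 ms target:

```python
# Hedged sketch: queueing delay a lone Reno flow needs to keep the link
# busy across a cwnd halving, compared with CoDel's default 5 ms target.

def reno_delay_needed_s(base_rtt_s: float) -> float:
    """For one Reno flow, the buffer must hold a full BDP (C * RTT) so the
    pipe stays full after cwnd is halved; draining that buffer at link
    rate C adds buffer / C = one base RTT of queueing delay."""
    return base_rtt_s

CODEL_TARGET_S = 0.005  # CoDel's default target: 5 ms

for rtt_ms in (10, 50, 100, 200):
    needed = reno_delay_needed_s(rtt_ms / 1000.0)
    print(f"RTT {rtt_ms:3d} ms: ~{needed * 1000:.0f} ms of queue needed, "
          f"{needed / CODEL_TARGET_S:.0f}x CoDel's target")
```

So on a 100 ms path a single Reno flow wants around 20x more standing queue than CoDel will tolerate, which is the throughput hit being discussed.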
The only way to fix this trade-off is to change TCP, to back off less *only
when the queue was small*. We tried what we think is the simplest possible way
to do this: only a sender-side change, using ECN as an indication that there is
an AQM mechanism on the path, not a long queue (and hence backing off by less
is ok). It works pretty well. See:

http://caia.swin.edu.au/reports/150710A/CAIA-TR-150710A.pdf

(a few of the graphs are simulations, but many were from real-life tests -
appendix B says which)

...and the draft, which was presented to TCPM at the Prague IETF last week:

https://tools.ietf.org/html/draft-khademi-alternativebackoff-ecn

Cheers,
Michael

_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
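The alternative-backoff idea above can be sketched in a few lines: on an ECN-echo (which signals an AQM with a short queue rather than a full buffer) the sender applies a gentler multiplicative decrease than on loss. The 0.8 factor below is purely illustrative; see the draft for the values actually evaluated.

```python
# Hedged sketch of alternative backoff with ECN: back off less on an ECN
# mark than on packet loss. BETA_ECN = 0.8 is an assumed value for
# illustration only, not a recommendation from the draft.

BETA_LOSS = 0.5  # standard Reno multiplicative decrease on loss
BETA_ECN = 0.8   # gentler decrease on ECN-echo (assumed value)

def on_congestion(cwnd: float, ecn_echoed: bool) -> float:
    """Return the new cwnd (in segments) after a congestion signal."""
    beta = BETA_ECN if ecn_echoed else BETA_LOSS
    return max(2.0, cwnd * beta)  # never shrink below 2 segments

print(on_congestion(100.0, ecn_echoed=False))  # loss: 100 -> 50.0
print(on_congestion(100.0, ecn_echoed=True))   # ECN mark: 100 -> 80.0
```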