Hi Luca,

I'm answering on behalf of Roland, since I am a co-author of the paper.

This is an excellent question, since it goes right to the heart of how LoLa works.

Indeed, the paper is the first of a series. A second one, looking deeper into the fair flow balancing mechanism, is currently under submission.

Similar to other delay-based congestion controls, LoLa tries to achieve fairness by allowing each flow to buffer the same amount of data at the bottleneck. We have this, e.g., in TCP Vegas, and (in a way) also in Copa (a recently proposed congestion control) and many others. If this is achieved, we get flow rate fairness independent of a flow's RTT.

Usually (in other congestion controls) this "allowed amount of data" is fixed per flow. We presume that this approach does not scale well to high-speed networks, since the queuing delay resulting from this amount of data shrinks as the bottleneck rate increases, and it thus becomes harder to measure it correctly. This can easily be seen (and proven) for TCP Vegas.
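
To illustrate (a back-of-the-envelope sketch in Python, not code from the paper; the per-flow backlog X, the packet size and the flow count are made-up numbers): if N flows each keep X bytes queued at a bottleneck of rate C, the measured queuing delay is N*X/C and each flow's share of C is 1/N regardless of its RTT; with a fixed X, that delay quickly drops below what end-to-end measurements can resolve as C grows.

# Illustrative sketch (not LoLa code): N flows each keep X bytes queued
# at a bottleneck of capacity C. The standing queue is N*X, so the
# queuing delay every flow sees is d = N*X / C, and each flow's share of
# the bottleneck is X / (N*X) = 1/N -- independent of its RTT.

def queuing_delay(n_flows, x_bytes_per_flow, capacity_bit_per_s):
    capacity_bytes_per_s = capacity_bit_per_s / 8.0
    return n_flows * x_bytes_per_flow / capacity_bytes_per_s

X = 3 * 1500  # fixed per-flow backlog, e.g. ~3 full-size packets (Vegas-like)

for rate in (10e6, 1e9, 100e9):  # 10 Mbit/s, 1 Gbit/s, 100 Gbit/s
    d = queuing_delay(n_flows=4, x_bytes_per_flow=X, capacity_bit_per_s=rate)
    print(f"{rate / 1e9:g} Gbit/s -> queuing delay {d * 1e6:.1f} us")

# Output: ~14400 us at 10 Mbit/s, ~144 us at 1 Gbit/s, but only ~1.4 us
# at 100 Gbit/s -- far too small to measure reliably end to end.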

Note: Just using higher fixed values is not an option, since that would no longer work at lower speeds, nor with a large number of flows.

Therefore, LoLa tries to find a suitable value for the "allowed amount of data" dynamically. This is X(t).

Our approach is to grow X(t) over time during the Fair Flow Balancing phase. This phase ends when the queuing delay reaches 5 ms. Thus, (in the ideal case) at the end of Fair Flow Balancing, X(t) is just large enough that all flows at the bottleneck together create a queuing delay of 5 ms, and all flows contribute equally to this queue. Hence, flow rate fairness is achieved. (Note that LoLa is designed in a way that t is (almost) synchronized among the competing flows.)
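
In (simplified, hypothetical) Python pseudo-code, one round of Fair Flow Balancing could look roughly like this; grow_x(), flow.bdp_estimate(), flow.queuing_delay() and flow.rtt() are placeholders for the actual mechanisms described in the paper, not its update rules.

QD_TARGET = 0.005  # Fair Flow Balancing ends once queuing delay reaches 5 ms

def grow_x(t_round):
    # Hypothetical monotonically increasing X(t); the paper defines the
    # actual shape. Any monotone growth serves for this sketch.
    return 1500.0 * (1.0 + t_round) ** 3

def fair_flow_balancing(flow):
    t_round = 0.0
    while flow.queuing_delay() < QD_TARGET:
        # Each flow tries to keep exactly X(t) bytes queued at the bottleneck,
        # on top of its estimated BDP. Since t is (almost) synchronized among
        # the competing flows, they all target the same X(t).
        flow.cwnd = flow.bdp_estimate() + grow_x(t_round)
        t_round += flow.rtt()
    # Ideally the shared queue now holds ~5 ms worth of data, contributed
    # equally by all flows: flow rate fairness independent of RTT.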

Generally, other ways of determining a suitable X(t) are conceivable. In our approach X(t) is a monotonically increasing function, but it is regularly reset as LoLa cycles through its states; i.e., after a queuing delay of 5 ms is reached, the queue is drained and everything starts again. (The timespan during which X(t) is monotonically increased is thus called a "round of fair flow balancing".)
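
Sketched in the same (hypothetical) pseudo-code, the resulting cycle looks roughly like this; drain_queue(), flow.reset_round() and the drain threshold are illustrative placeholders, not the exact state machine from the paper.

def lola_cycle(flow):
    while True:
        fair_flow_balancing(flow)  # one round: X(t) grows until ~5 ms queuing delay
        drain_queue(flow)          # back off so the bottleneck queue empties again
        flow.reset_round()         # X(t) is reset; the next round starts from scratch

def drain_queue(flow):
    # Illustrative drain: shrink the window toward the BDP estimate until
    # the measured queuing delay has (almost) vanished.
    while flow.queuing_delay() > 0.0005:  # hypothetical drain threshold
        flow.cwnd = flow.bdp_estimate()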

This way we can overcome the constraint given in [1]:

"""
THEOREM 6 (FAIRNESS/DELAY TRADEOFF). For congestion control mechanisms that have steady state throughput of the kind R = f(d, p), for some function f, delay d and feedback p, if the feedback is based on purely end to end delay measurements, you can either have fairness or a fixed delay, but not both simultaneously
"""

[1] "ECN or Delay: Lessons Learnt from Analysis of DCQCN and TIMELY"
Yibo Zhu et al., https://dl.acm.org/citation.cfm?id=2999593

Best, Mario


On 29.11.18 at 17:09, Luca Muscariello wrote:
Hi Roland,

It took me quite a lot of time to find this message in the thread...
I read the paper you sent and I guess this is the first of a series as many things stay uncovered.

Just a quick question: why is X(t) always increasing with t?


On Tue, Nov 27, 2018 at 11:26 AM Bless, Roland (TM) <roland.bl...@kit.edu> wrote:

    Hi Luca,

    On 27.11.18 at 10:24, Luca Muscariello wrote:
     > A congestion controlled protocol such as TCP or others, including
     > QUIC, LEDBAT and so on need at least the BDP in the transmission
     > queue to get full link efficiency, i.e. the queue never empties out.

    This is not true. There are congestion control algorithms
    (e.g., TCP LoLa [1] or BBRv2) that can fully utilize the bottleneck link
    capacity without filling the buffer to its maximum capacity. The BDP
    rule of thumb basically stems from the older loss-based congestion
    control variants that profit from the standing queue that they built
    over time when they detect a loss:
    while they back-off and stop sending, the queue keeps the bottleneck
    output busy and you'll not see underutilization of the link. Moreover,
    once you get good loss de-synchronization, the buffer size requirement
    for multiple long-lived flows decreases.

     > This gives rule of thumbs to size buffers which is also very
     > practical and thanks to flow isolation becomes very accurate.

    The positive effect of buffers is merely their role to absorb
    short-term bursts (i.e., mismatch in arrival and departure rates)
    instead of dropping packets. One does not need a big buffer to
    fully utilize a link (with perfect knowledge you can keep the link
    saturated even without a single packet waiting in the buffer).
    Furthermore, large buffers (e.g., using the BDP rule of thumb)
    are not useful/practical anymore at very high speed such as 100 Gbit/s:
    memory is also quite costly at such high speeds...

    Regards,
      Roland

    [1] M. Hock, F. Neumeister, M. Zitterbart, R. Bless.
    TCP LoLa: Congestion Control for Low Latencies and High Throughput.
    Local Computer Networks (LCN), 2017 IEEE 42nd Conference on, pp.
    215-218, Singapore, Singapore, October 2017
    http://doc.tm.kit.edu/2017-LCN-lola-paper-authors-copy.pdf

     > Which is:
     >
     > 1) find a way to keep the number of backlogged flows at a
     > reasonable value.
     > This largely depends on the minimum fair rate an application may
     > need in the long term.
     > We discussed a little bit of available mechanisms to achieve that
     > in the literature.
     >
     > 2) fix the largest RTT you want to serve at full utilization and size
     > the buffer using BDP * N_backlogged.
     > Or the other way round: check how much memory you can use
     > in the router/line card/device and for a fixed N, compute the largest
     > RTT you can serve at full utilization.
     >
     > 3) there is still some memory to dimension for sparse flows in
     > addition to that, but this is not based on BDP.
     > It is just enough to compute the total utilization of sparse
     > flows and use the same simple model Toke has used
     > to compute the (de)prioritization probability.
     >
     > This procedure would allow to size FQ_codel but also SFQ.
     > It would be interesting to compare the two under this buffer sizing.
     > It would also be interesting to compare another mechanism that we
     > have mentioned during the defense
     > which is AFD + a sparse flow queue. Which is, BTW, already
     > available in Cisco nexus switches for data centres.
     >
     > I think that the codel part would still provide the ECN feature,
     > that all the others cannot have.
     > However the others, the last one especially can be implemented in
     > silicon with reasonable cost.


_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat
