> On Jul 4, 2020, at 19:52, Daniel Sterling <sterling.dan...@gmail.com> wrote:
> 
> On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
> <bloat@lists.bufferbloat.net> wrote:
> "pacing is inevitable, because it saves large content providers money
> (more efficient use of the most expensive silicon in the data center,
> the switch buffer memory), however to use pacing we walk away from 30
> years of experience with TCP self clock"
> 
> at the risk of asking w/o doing any research,
> 
> could someone explain this to a lay person or point to a doc talking
> about this more?
> 
> What does BBR do that's different from other algorithms?

        Well, it does not (blindly) believe the network: currently it 
ignores both ECN marks and (sparse) drops as signs of congestion; instead it 
uses its own rate estimates to set its send rate, and cyclically re-assesses 
that estimate. Sufficiently severe drops will be honored. IMHO a somewhat 
risky approach, but one that works reasonably well, as sparse drops often are 
not real signs of congestion but just random losses on, say, a wifi link (that 
said, these drops on wifi typically also cause painful latency spikes, as wifi 
often takes heroic measures, attempting retransmissions for several hundreds of 
milliseconds).
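
        To make that concrete, here is a toy sketch of the core idea (in 
Python; all names are made up, and this is nothing like the real 
implementation): estimate the bottleneck bandwidth from the flow's own 
delivery-rate samples and pace at that rate, so that a single sparse drop 
never directly enters the calculation.

    from collections import deque
    import time

    # Toy sketch of the BBR idea, not the real algorithm.
    class ToyRateEstimator:
        def __init__(self, window_s=10.0, pacing_gain=1.0):
            self.samples = deque()          # (timestamp, delivered bits/s)
            self.window_s = window_s        # length of the max-filter window
            self.pacing_gain = pacing_gain  # > 1 while cyclically probing

        def on_ack(self, bytes_acked, interval_s):
            # One delivery-rate sample per ACK: newly acked bytes / time.
            now = time.monotonic()
            self.samples.append((now, 8.0 * bytes_acked / interval_s))
            while self.samples and now - self.samples[0][0] > self.window_s:
                self.samples.popleft()

        def pacing_rate_bps(self):
            # Bandwidth estimate = windowed max of the delivery-rate
            # samples; a sparse drop simply never shows up here.
            if not self.samples:
                return None
            return self.pacing_gain * max(rate for _, rate in self.samples)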


> Why does it
> break the clock?

        One can argue that there is no real clock to break. TCP gates the 
release of new packets on the reception of ACKs from the receiver; this is 
only a clock if one does not really care about the equi-temporal period 
property of a real clock. But for better or worse, that is the term that is 
used. IMHO (and I really am calling this from way out in left field) "gating" 
would be a better term, but changing the nomenclature probably is not an option 
at this point.
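
        The difference is easiest to see side by side; another toy sketch 
(the MSS value and the send() callback are placeholders, not any real API):

    MSS = 1448  # bytes; a typical TCP segment size over Ethernet

    def ack_clocked_send(cwnd, inflight, bytes_acked, send):
        # Self-clocked TCP: an arriving ACK frees up window space, and
        # that is the only trigger for new data, so transmissions inherit
        # the (possibly bursty) timing of the ACK stream.
        inflight -= bytes_acked
        while inflight + MSS <= cwnd:
            send(MSS)
            inflight += MSS
        return inflight

    def paced_send(pacing_rate_bps, now_s, next_send_s, send):
        # Paced TCP: a local timer, not the ACK stream, decides when the
        # next packet may leave, spacing packets evenly at the set rate.
        if now_s >= next_send_s:
            send(MSS)
            next_send_s = now_s + (8.0 * MSS) / pacing_rate_bps
        return next_send_s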

> Before BBR, was the clock the only way TCP did CC?

        No, TCP also interpreted a drop (or rather 3 duplicate ACKs) as a signal 
of congestion and hit the brakes by halving the congestion window (the amount 
of data that can be in flight unacknowledged, which roughly correlates with 
the send rate, if averaged over long enough time windows). BBR explicitly does 
not do this unless it really is convinced that someone dropped multiple packets 
purposefully to signal congestion.
        In practice that works rather well; in theory it could do with at least 
an RFC 3168-compliant response to ECN marks (which an AQM uses to explicitly 
signal congestion; unlike a drop, an ECN mark is really unambiguous: some hop on 
the way "told" the flow to slow down).


> 
> Also,
> 
> I have UBNT "Amplifi" HD wifi units in my house. (HD units only; none
> of the "mesh" units. Just HD units connected either via wifi or
> wired.) Empirically, I've found that in order to reduce latency, I
> need to set cake to about 1/4 of the total possible wifi speed;
> otherwise if a large download comes down from my internet link, that
> flow causes latency.
> 
> That is, if I'm using 5ghz at 20mhz channel width, I need to set
> cake's bandwidth argument to 40mbits to prevent video streams /
> downloads from impacting latency for any other stream. This is w/o any
> categorization at all; no packet marking based on port or anything
> else; cake set to "best effort".
> 
> Anything higher and when a large amount of data comes thru, something
> (assumedly the buffer in the Amplifi HD units) causes 100s of
> milliseconds of latency.
> 
> Can anyone speak to how BBR would react to this? My ISP is full
> gigabit; but cake is going to drop a lot of packets as it throttles
> that down to 40mbit before it sends the packets to the wifi AP.
> 
> Thanks,
> Dan
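
        For reference, the shaping setup described above corresponds to 
something like the following (the interface name is a placeholder, adjust 
to taste):

    tc qdisc replace dev eth0 root cake bandwidth 40mbit besteffort

With the shaper set well below what the wifi link can actually deliver, the 
queue builds where cake can manage it; set the rate much closer to the link 
capacity and the backlog shifts into the AP's own buffers, outside cake's 
control, which would explain the observed hundreds of milliseconds of latency.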