Re: [Bloat] the future belongs to pacing

2020-07-06 Thread Roland Bless
Hi Matt and Jamshid,

On 04.07.20 at 19:29 Matt Mathis via Bloat wrote:

> Key takeaway: pacing is inevitable, because it saves large content
> providers money (more efficient use of the most expensive silicon in
> the data center, the switch buffer memory), however to use pacing we
> walk away from 30 years of experience with TCP self clock, which is
> the foundation of all of our CC research

Thanks for the interesting read. I have a few comments:

  * IMO, many of the mentioned problems are related to using packet loss
as a congestion signal, rather than to self-clocking.

  * In principle, one can keep utilization high and queuing delay low
with a congestion-window-based and ACK-clock-driven approach (see TCP
LoLa https://ieeexplore.ieee.org/document/8109356). However, it
currently lacks heuristics to deal with stretch/aggregated ACKs, but I
think one can extend it as already done in BBR.

  * Pacing is really useful and I think it is important to keep sending
in case the ACK-clock is distorted by the mentioned problems, but only
for a limited time. If one's estimate for the correct sending rate is
too high, the amount of inflight data increases over time, which leads
to queuing delay and/or loss. So having the inflight cap as in BBRv1 is
also an important safety measure.

  * "The maximum receive rate is probed by sending at 125% of max_BW .
If the network is already full and flows have reached their fair share,
the observed max_BW won’t change."
This assumption isn't true if several flows are present at the
bottleneck.
If a flow sends with 1.25*max_BW on the saturated link, *the observed**
**max_BW will change* (unless all flows are probing at the same
time) because the probing flow preempts other flows, thereby
reducing their current share. Together with the applied max-filter
this is the reason why BBRv1 is constantly overestimating the available
capacity and thus persistently increasing the amount inflight data
until the inflight cap is hit. The math is in [32] (section 3) of your
references. Luckily BBRv2 has much more safeguards built-in.

  * "The lower queue occupancy indicates that it is not generally taking
capacity away from other transport protocols..."
I think that this indication is not very robust, e.g., it may hold
in case
there isn't significant packet loss observed. Observing an overall
lower buffer occupancy does not necessarily tell you something about
the individual flow shares. In BBRv1 you could have starving Cubic
flows, because they were backing-off due to loss, while BBR kept
sending.

  * Last but not least, even BBR requires an ACK stream as
feedback in order to estimate the delivery rate. But it is actually
not self-clocked and keeps sending "blindly" for a while. This is
quite useful to deal with the mentioned stretch/aggregated ACKs,
if done with care.
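
To make the preemption effect from the max_BW bullet above concrete,
here is a minimal numeric sketch (all names and parameters are mine,
not from BBR's code): at a saturated bottleneck each flow's delivered
rate is its share of the offered load, so a single flow probing at
1.25x its fair share measures a delivery rate above that share, and
the max-filter then retains the inflated sample:

# Minimal sketch (assumptions mine): N flows share a bottleneck of
# capacity C; each flow's delivered rate is proportional to its share
# of the total offered load. Flow 0 probes at 1.25x its fair share,
# the others keep sending at their fair share.

C, N = 100.0, 4                              # Mbit/s, number of flows
fair = C / N                                 # 25 Mbit/s fair share

offered = [1.25 * fair] + [fair] * (N - 1)   # flow 0 is probing
total = sum(offered)                         # 106.25 Mbit/s > C

delivered = [C * r / total for r in offered]
print(delivered[0])                          # ~29.4 Mbit/s, above fair share

# A windowed max-filter (as in BBRv1) keeps this inflated sample as
# max_BW, so the next probe starts from 1.25 * 29.4 and the estimate
# ratchets upward until the inflight cap (2 * max_BW * min_RTT) stops it.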

Regards,
 Roland




Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Matt Mathis via Bloat
What the complexity buys you is that BBR's metrics max_BW, min_RTT, and
the ACK aggregation/batching metrics are actual parameters of the
network, observable with passive instrumentation of the packet streams.
Traditional CC is a collection of heuristics to estimate cwnd, which has
a clear interpretation in terms of action (when to send), but the
optimal cwnd can't easily be observed from the packet stream.
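
As a rough illustration of what such passive instrumentation could look
like (a toy sketch; the function name, trace format, and window sizes
are my assumptions, not BBR's implementation):

# Toy sketch of passive estimation from a packet trace: max_BW as a
# windowed maximum of per-ACK delivery rate, min_RTT as a windowed
# minimum of per-ACK RTT samples.

from collections import deque

def passive_estimates(trace, bw_window=10.0, rtt_window=10.0):
    """trace: iterable of (ack_time, acked_bytes, rtt_sample)."""
    bw_samples, rtt_samples = deque(), deque()
    prev_time = None
    for t, acked, rtt in trace:
        if prev_time is not None and t > prev_time:
            bw_samples.append((t, acked / (t - prev_time)))  # bytes/sec
        rtt_samples.append((t, rtt))
        prev_time = t
        # age out samples that fell outside the observation windows
        while bw_samples and t - bw_samples[0][0] > bw_window:
            bw_samples.popleft()
        while rtt_samples and t - rtt_samples[0][0] > rtt_window:
            rtt_samples.popleft()
    max_bw = max((bw for _, bw in bw_samples), default=0.0)
    min_rtt = min((r for _, r in rtt_samples), default=float("inf"))
    return max_bw, min_rtt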

I think this alone will have an impact, in terms of being able to reason
about CC behaviors.

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
   however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.


On Sun, Jul 5, 2020 at 11:13 AM Jonathan Morton 
wrote:

> > On 5 Jul, 2020, at 9:09 pm, Stephen Hemminger <
> step...@networkplumber.org> wrote:
> >
> > I keep wondering how BBR will respond to intermediaries that aggregate
> packets.
> > At higher speeds, won't packet trains happen and would it not get
> confused
> > by this? Or is its measurement interval long enough that it doesn't
> matter?
>
> Up-thread, there was mention of patches related to wifi.  Aggregation is
> precisely one of the things those patches would address.  I should note that the
> brief description I gave glossed over a lot of fine details of BBR's
> implementation, which include careful filtering and conditioning of the
> data it gathers about the network path.
>
> I'm not altogether a fan of such complexity.
>
>  - Jonathan Morton
>


Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Jonathan Morton
> On 5 Jul, 2020, at 9:09 pm, Stephen Hemminger  
> wrote:
> 
> I keep wondering how BBR will respond to intermediaries that aggregate 
> packets.
> At higher speeds, won't packet trains happen and would it not get confused
> by this? Or is its measurement interval long enough that it doesn't matter?

Up-thread, there was mention of patches related to wifi.  Aggregation is 
precisely one of the things those patches would address.  I should note that the brief 
description I gave glossed over a lot of fine details of BBR's implementation, 
which include careful filtering and conditioning of the data it gathers about 
the network path.
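
For the curious, one published piece of that conditioning is BBR's
ACK-aggregation ("extra_acked") estimator: it measures how far
cumulative ACKs run ahead of what the estimated bandwidth predicts, and
keeps a windowed maximum of that excess as a cwnd cushion. A loose
sketch, with the names and the window size being my choices rather than
the actual code:

# Loose sketch in the spirit of BBR's "extra_acked" estimator: track
# how many bytes arrive in ACKs beyond what bw_est predicts for the
# elapsed time, and keep a windowed max of that excess as a cushion.

class AggregationEstimator:
    def __init__(self, window=10.0):
        self.window = window
        self.epoch_start = None
        self.acked_in_epoch = 0
        self.samples = []           # (time, extra_acked_bytes)

    def on_ack(self, now, newly_acked, bw_est):
        if self.epoch_start is None:
            self.epoch_start, self.acked_in_epoch = now, 0
        expected = bw_est * (now - self.epoch_start)
        self.acked_in_epoch += newly_acked
        extra = max(0.0, self.acked_in_epoch - expected)
        if extra == 0.0:            # ACKs no longer ahead: new epoch
            self.epoch_start, self.acked_in_epoch = now, 0
        self.samples = [(t, e) for t, e in self.samples
                        if now - t <= self.window] + [(now, extra)]
        return max(e for _, e in self.samples)  # cwnd cushion in bytes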

I'm not altogether a fan of such complexity.

 - Jonathan Morton



Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Stephen Hemminger
On Sun, 05 Jul 2020 13:43:27 -0400
Michael Richardson  wrote:

> Sebastian Moeller  wrote:
> > of the sending rate, no? BBRv2 as I understand it will happily run
> > roughshod over any true rfc3168 AQM on the path, I do not have the
> > numbers, but I am not fully convinced that typically the most
> > significant throttling on a CDN to end-user path happens still inside
> > the CDN's data center...  
> 
> That's an interesting claim. I'm in no position to defend or refute it.
> 
> If it's true, though, it suggests some interesting solutions, because one can
> more easily establish trust relationships within the data-center.
> 
> I'm specifically imagining a clock signal from the Top-of-Rack switch to the
> senders.
> 
> Actually, it's not a clock so much as an automotive-style timing shaft
> running down through all the 1-U servers, with fine vernier adjustments :-)
> I'm also certain we've seen such technology before.
> 
> --
> ]   Never tell me the odds! | ipv6 mesh networks [
> ]   Michael Richardson, Sandelman Software Works|IoT architect   [
> ] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[

I keep wondering how BBR will respond to intermediaries that aggregate packets.
At higher speeds, won't packet trains happen and would it not get confused
by this? Or is its measurement interval long enough that it doesn't matter?




Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Michael Richardson

Sebastian Moeller  wrote:
> of the sending rate, no? BBRv2 as I understand it will happily run
> roughshod over any true rfc3168 AQM on the path. I do not have the
> numbers, but I am not fully convinced that the most significant
> throttling on a CDN-to-end-user path typically still happens inside
> the CDN's data center...

That's an interesting claim. I'm in no position to defend or refute it.

If it's true, though, it suggests some interesting solutions, because one can
more easily establish trust relationships within the data-center.

I'm specifically imagining a clock signal from the Top-of-Rack switch to the
senders.

Actually, it's not a clock so much as an automotive-style timing shaft
running down through all the 1-U servers, with fine vernier adjustments :-)
I'm also certain we've seen such technology before.

--
]   Never tell me the odds! | ipv6 mesh networks [
]   Michael Richardson, Sandelman Software Works|IoT architect   [
] m...@sandelman.ca  http://www.sandelman.ca/|   ruby on rails[








Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Sebastian Moeller
Hi Matt,


> On Jul 5, 2020, at 19:07, Matt Mathis  wrote:
> 
> The consensus in the standards community is that 3168 ECN is not so useful - 
> too late to protect small queues, too much signal (gain) to use it to hint at 
> future congestion.  

I follow the discussion in the tsvwg working group and believe I have a 
good overview of the state of the discussion. I have also gathered enough 
experience in the bufferbloat effort to realize that the L4S proposal is 
based more on wishful thinking than on solid engineering. But yes, the 
time seems ripe for 1/p-type congestion signaling; how to do this, though, 
seems to be an open question.


>  The point of non-3168 ECN is to permit earlier gentle signalling.   I am not 
> following the ECN conversation, but as stated at recent IETFs, the ECN code 
> in BBRv2 is really a placeholder, and when the ECN community comes to 
> consensus on a standard, I would expect BBR to do the standard.

I respectfully argue that this is the wrong way around: first implement 
the current RFC standard, aka rfc3168, and only switch over once there is a 
new standard. ATM, BBRv2 seems to bank on the L4S proposals sailing through 
the IETF, completely ignoring the lack of critical testing the L4S design 
has received.


> 
> Tor has its own special challenge with traffic management.  

Sorry, ToR was intended to expand to Top Of Rack, which I assumed to be 
a common name for the devices that house "the most expensive silicon in the 
data center, the switch buffer memory". I apologize for not being clear.


> Easy solutions leak information, secure solutions are very hard.   Remember 
> to be useful, the ECN bits need to be in the clear.

All good points, thanks, but more applicable to the onion router than to 
top-of-rack switches...

Best Regards
Sebastian

> 
> Thanks,
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
>however our response must be carefully measured: 
> too strong would be hypocritical and risks spiraling out of 
> control;
> too weak risks being mistaken for tacit approval.
> 
> 
> On Sun, Jul 5, 2020 at 5:01 AM Sebastian Moeller  wrote:
> Hi Matt,
> 
> 
> 
> > On Jul 5, 2020, at 08:10, Matt Mathis  wrote:
> > 
> > I strongly suggest that people (re)read VJ88 - I do every couple of years, 
> > and still discover things that I overlooked on previous readings.
> 
> I promise to read it. And before I give the wrong impression and for 
> what it is worth*, I consider BBR (even v1) an interesting and important 
> evolutionary step and agree that "pacing" is a gentler approach than
> bursting a full CWND into a link.
> 
> 
> > 
> > All of the negative comments about BBR and loss, ECN marks,
> 
> As far as I can tell, BBRv2 aims for a decidedly non-rfc3168 response
> to CE marks. This IMHO is not a clear-cut case of meaningfully
> addressing my ECN comment. In the light of using ToR switch buffers
> efficiently, that kind of response might be defensible, but it does not
> really address my remark that it is unfortunate that BBR ignores both
> immediate signals of congestion, (sparse) packet drops AND explicit CE
> marks. The proposed (dctcp-like) CE response seems rather weak compared
> to the naive expectation of halving (or 80%-ing) the sending rate, no?
> BBRv2 as I understand it will happily run roughshod over any true
> rfc3168 AQM on the path. I do not have the numbers, but I am not fully
> convinced that the most significant throttling on a CDN-to-end-user
> path typically still happens inside the CDN's data center...
> 
> 
> > or unfairness to cubic were correct for BBRv1 but have been addressed in 
> > BBRv2.
> 
> I am not sure that unfairness was brought up as an issue in this
> thread.
> 
> 
> > 
> > My paper has a synopsis of BBR, which is intended to get people started.   
> > See the references in the paper for more info:
> 
> I will have a look at these as well... Thanks
> 
> Best Regards
> Sebastian
> 
> *) Being from outside the field, probably not much...
> 
> > 
> > [12] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, 
> > and Van Jacobson. 2016. BBR: Congestion-Based Congestion Control. Queue 14, 
> > 5, Pages 50 (October 2016). DOI: https://doi.org/10.1145/3012426.3022184
> > [13] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, 
> > and Van Jacobson. 2017. BBR: Congestion-Based Congestion Control. Commun. 
> > ACM 60, 2 (January 2017), 58-66. DOI: https://doi.org/10.1145/3009824
> > [22] google/bbr. 2019. GitHub repository, retrieved 
> > https://github.com/google/bbr
> > 
> > Key definitions: self clocked: data is triggered by ACKs.  All screwy 
> > packet and ACK scheduling in the network is reflected back into the network 
> > on the next RTT.
> > 
> > Paced: data is transmitted on a timer, in

Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Matt Mathis via Bloat
The consensus in the standards community is that 3168 ECN is not so useful
- too late to protect small queues, too much signal (gain) to use it
to hint at future congestion.  The point of non-3168 ECN is to permit
earlier gentle signalling.   I am not following the ECN conversation, but
as stated at recent IETFs, the ECN code in BBRv2 is really a placeholder,
and when the ECN community comes to consensus on a standard, I would expect
BBR to do the standard.

Tor has its own special challenge with traffic management.   Easy solutions
leak information; secure solutions are very hard.   Remember: to be useful,
the ECN bits need to be in the clear.

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
   however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.


On Sun, Jul 5, 2020 at 5:01 AM Sebastian Moeller  wrote:

> Hi Matt,
>
>
>
> > On Jul 5, 2020, at 08:10, Matt Mathis  wrote:
> >
> > I strongly suggest that people (re)read VJ88 - I do every couple of
> years, and still discover things that I overlooked on previous readings.
>
> I promise to read it. And before I give the wrong impression and
> for what it is worth*, I consider BBR (even v1) an interesting and
> important evolutionary step and agree that "pacing" is a gentler approach
> than bursting a full CWND into a link.
>
>
> >
> > All of the negative comments about BBR and loss, ECN marks,
>
> As far as I can tell, BBRv2 aims for a decidedly non-rfc3168
> response to CE marks. This IMHO is not a clear-cut case of meaningfully
> addressing my ECN comment. In the light of using ToR switch buffers
> efficiently, that kind of response might be defensible, but it does not
> really address my remark that it is unfortunate that BBR ignores both
> immediate signals of congestion, (sparse) packet drops AND explicit CE
> marks. The proposed (dctcp-like) CE response seems rather weak compared
> to the naive expectation of halving (or 80%-ing) the sending rate, no?
> BBRv2 as I understand it will happily run roughshod over any true
> rfc3168 AQM on the path. I do not have the numbers, but I am not fully
> convinced that the most significant throttling on a CDN-to-end-user
> path typically still happens inside the CDN's data center...
>
>
> > or unfairness to cubic were correct for BBRv1 but have been addressed in
> BBRv2.
>
> I am not sure that unfairness was brought up as an issue in this
> thread.
>
>
> >
> > My paper has a synopsis of BBR, which is intended to get people
> started.   See the references in the paper for more info:
>
> I will have a look at these as well... Thanks
>
> Best Regards
> Sebastian
>
> *) Being from outside the field, probably not much...
>
> >
> > [12] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas
> Yeganeh, and Van Jacobson. 2016. BBR: Congestion-Based Congestion Control.
> Queue 14, 5, Pages 50 (October 2016). DOI:
> https://doi.org/10.1145/3012426.3022184
> > [13] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas
> Yeganeh, and Van Jacobson. 2017. BBR: Congestion-Based Congestion Control.
> Commun. ACM 60, 2 (January 2017), 58-66. DOI:
> https://doi.org/10.1145/3009824
> > [22] google/bbr. 2019. GitHub repository, retrieved
> https://github.com/google/bbr
> >
> > Key definitions: self clocked: data is triggered by ACKs.  All screwy
> packet and ACK scheduling in the network is reflected back into the network
> on the next RTT.
> >
> > Paced: data is transmitted on a timer, independent of ACK arrivals (as
> long as the ACKs take less than twice the measured minRTT).  Thus in bulk
> transport there is little or no correlation between data transmissions and
> events elsewhere in the network.
> >
> > Clarification about my earlier WiFi comment:  The BBRv1 WiFi fix missed
> 4.19 LTS, so bad results are "expected" for many distros.  If you want to
> do useful experiments, you must read https://groups.google.com/g/bbr-dev/
> and start from BBRv2 in [22].
> >
> > Thanks,
> > --MM--
> > The best way to predict the future is to create it.  - Alan Kay
> >
> > We must not tolerate intolerance;
> >however our response must be carefully measured:
> > too strong would be hypocritical and risks spiraling out of
> control;
> > too weak risks being mistaken for tacit approval.
> >
> >
> > On Sat, Jul 4, 2020 at 11:29 AM Sebastian Moeller 
> wrote:
> >
> >
> > > On Jul 4, 2020, at 19:52, Daniel Sterling 
> wrote:
> > >
> > > On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
> > >  wrote:
> > > "pacing is inevitable, because it saves large content providers money
> > > (more efficient use of the most expensive silicon in the data center,
> > > the switch buffer memory), however to use pacing we walk away from 30
> 

Re: [Bloat] the future belongs to pacing

2020-07-05 Thread Sebastian Moeller
Hi Matt,



> On Jul 5, 2020, at 08:10, Matt Mathis  wrote:
> 
> I strongly suggest that people (re)read VJ88 - I do every couple of years, 
> and still discover things that I overlooked on previous readings.

I promise to read it. And before I give the wrong impression and for 
what it is worth*, I consider BBR (even v1) an interesting and important 
evolutionary step and agree that "pacing" is a gentler approach than 
bursting a full CWND into a link.


> 
> All of the negative comments about BBR and loss, ECN marks,

As far as I can tell, BBRv2 aims for a decidedly non-rfc3168 response 
to CE marks. This IMHO is not a clear-cut case of meaningfully addressing 
my ECN comment. In the light of using ToR switch buffers efficiently, that 
kind of response might be defensible, but it does not really address my 
remark that it is unfortunate that BBR ignores both immediate signals of 
congestion, (sparse) packet drops AND explicit CE marks. The proposed 
(dctcp-like) CE response seems rather weak compared to the naive 
expectation of halving (or 80%-ing) the sending rate, no? BBRv2 as I 
understand it will happily run roughshod over any true rfc3168 AQM on the 
path. I do not have the numbers, but I am not fully convinced that the 
most significant throttling on a CDN-to-end-user path typically still 
happens inside the CDN's data center... 


> or unfairness to cubic were correct for BBRv1 but have been addressed in 
> BBRv2.

I am not sure that unfairness was brought up as an issue in this thread.


> 
> My paper has a synopsis of BBR, which is intended to get people started.   
> See the references in the paper for more info:

I will have a look at these as well... Thanks

Best Regards
Sebastian

*) Being from outside the field, probably not much...

> 
> [12] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, 
> and Van Jacobson. 2016. BBR: Congestion-Based Congestion Control. Queue 14, 
> 5, Pages 50 (October 2016). DOI: https://doi.org/10.1145/3012426.3022184
> [13] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, 
> and Van Jacobson. 2017. BBR: Congestion-Based Congestion Control. Commun. ACM 
> 60, 2 (January 2017), 58-66. DOI: https://doi.org/10.1145/3009824
> [22] google/bbr. 2019. GitHub repository, retrieved 
> https://github.com/google/bbr
> 
> Key definitions: self clocked: data is triggered by ACKs.  All screwy packet 
> and ACK scheduling in the network is reflected back into the network on the 
> next RTT.
> 
> Paced: data is transmitted on a timer, independent of ACK arrivals (as long 
> as the ACKs take less than twice the measured minRTT).  Thus in bulk 
> transport there is little or no correlation between data transmissions and 
> events elsewhere in the network. 
> 
> Clarification about my earlier WiFi comment:  The BBRv1 WiFi fix missed 4.19 
> LTS, so bad results are "expected" for many distros.  If you want to do 
> useful experiments, you must read https://groups.google.com/g/bbr-dev/ and 
> start from BBRv2 in [22].
> 
> Thanks,
> --MM--
> The best way to predict the future is to create it.  - Alan Kay
> 
> We must not tolerate intolerance;
>however our response must be carefully measured: 
> too strong would be hypocritical and risks spiraling out of 
> control;
> too weak risks being mistaken for tacit approval.
> 
> 
> On Sat, Jul 4, 2020 at 11:29 AM Sebastian Moeller  wrote:
> 
> 
> > On Jul 4, 2020, at 19:52, Daniel Sterling  wrote:
> > 
> > On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
> >  wrote:
> > "pacing is inevitable, because it saves large content providers money
> > (more efficient use of the most expensive silicon in the data center,
> > the switch buffer memory), however to use pacing we walk away from 30
> > years of experience with TCP self clock"
> > 
> > at the risk of asking w/o doing any research,
> > 
> > could someone explain this to a lay person or point to a doc talking
> > about this more?
> > 
> > What does BBR do that's different from other algorithms?
> 
> Well, it does not believe the network (blindly), that is currently it 
> ignores both ECN marks and (sparse) drops as signs of congestion, instead it 
> uses its own rate estimates to set its send rate and cyclically will 
> re-assess its rate estimate. Sufficiently severe drops will be honored. IMHO 
> a somewhat risky approach, that works reasonably well, as often sparse drops 
> are not real signs of congestion but just random drops of say a wifi link 
> (that said, these drops on wifi typically also cause painful latency spikes 
> as wifi often takes heroic measures in attempting retransmitting for several 
> 100s of milliseconds).
> 
> 
> > Why does it
> > break the clock?
> 
> One can argue that there is no real clock to break. TCP gates the 
> release of new packets on the reception of ACK signals from the receiver, 
> this is only a clock, if one 

Re: [Bloat] the future belongs to pacing

2020-07-04 Thread Matt Mathis via Bloat
I strongly suggest that people (re)read VJ88 - I do every couple of years,
and still discover things that I overlooked on previous readings.

All of the negative comments about BBR and loss, ECN marks, or unfairness
to cubic were correct for BBRv1 but have been addressed in BBRv2.

My paper has a synopsis of BBR, which is intended to get people started.
 See the references in the paper for more info:

[12] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh,
and Van Jacobson. 2016. BBR: Congestion-Based Congestion Control. Queue 14,
5, Pages 50 (October 2016). DOI: https://doi.org/10.1145/3012426.3022184
[13] Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh,
and Van Jacobson. 2017. BBR: Congestion-Based Congestion Control. Commun.
ACM 60, 2 (January 2017), 58-66. DOI: https://doi.org/10.1145/3009824
[22] google/bbr. 2019. GitHub repository, retrieved
https://github.com/google/bbr

Key definitions: self clocked: data is triggered by ACKs.  All screwy
packet and ACK scheduling in the network is reflected back into the network
on the next RTT.

Paced: data is transmitted on a timer, independent of ACK arrivals (as long
as the ACKs take less than twice the measured minRTT).  Thus in bulk
transport there is little or no correlation between data transmissions and
events elsewhere in the network.
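
A minimal sketch of the two disciplines side by side (names and
structure are mine, just to make the contrast concrete):

import time

def self_clocked_send(sock, segments, acks):
    # Self-clocked: each arriving ACK releases the next segment, so any
    # batching or jitter in the ACK stream is echoed straight back into
    # the data stream on the next RTT.
    segments = iter(segments)
    for _ack in acks:                   # blocks until the next ACK arrives
        seg = next(segments, None)
        if seg is None:
            break
        sock.send(seg)

def paced_send(sock, segments, pacing_rate, mss=1448):
    # Paced: segments leave on a timer at pacing_rate bytes/sec,
    # decoupled from the exact arrival pattern of the ACKs.
    interval = mss / pacing_rate
    for seg in segments:
        sock.send(seg)
        time.sleep(interval)            # a real stack uses a high-res
                                        # timer or a pacing qdisc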

Clarification about my earlier WiFi comment:  The BBRv1 WiFi fix missed
4.19 LTS, so bad results are "expected" for many distros.  If you want to
do useful experiments, you must read https://groups.google.com/g/bbr-dev/ and
start from BBRv2 in [22].

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
   however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.


On Sat, Jul 4, 2020 at 11:29 AM Sebastian Moeller  wrote:

>
>
> > On Jul 4, 2020, at 19:52, Daniel Sterling 
> wrote:
> >
> > On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
> >  wrote:
> > "pacing is inevitable, because it saves large content providers money
> > (more efficient use of the most expensive silicon in the data center,
> > the switch buffer memory), however to use pacing we walk away from 30
> > years of experience with TCP self clock"
> >
> > at the risk of asking w/o doing any research,
> >
> > could someone explain this to a lay person or point to a doc talking
> > about this more?
> >
> > What does BBR do that's different from other algorithms?
>
> Well, it does not believe the network (blindly), that is currently
> it ignores both ECN marks and (sparse) drops as signs of congestion,
> instead it uses its own rate estimates to set its send rate and cyclically
> will re-assess its rate estimate. Sufficiently severe drops will be
> honored. IMHO a somewhat risky approach, that works reasonably well, as
> often sparse drops are not real signs of congestion but just random drops
> of say a wifi link (that said, these drops on wifi typically also cause
> painful latency spikes as wifi often takes heroic measures in attempting
> retransmitting for several 100s of milliseconds).
>
>
> > Why does it
> > break the clock?
>
> One can argue that there is no real clock to break. TCP gates the
> release of new packets on the reception of ACK signals from the receiver,
> this is only a clock, if one does not really care for the equi-temporal
> period property of a real clock. But for better or worse that is the term
> that is used. IMHO (and I really am calling this from way out in the
> left-field) gating would be a better term, but changing the nomenclature
> probably is not an option at this point.
>
> > Before BBR, was the clock the only way TCP did CC?
>
> No, TCP also interpreted a drop (or rather 3 duplicated ACKs) as
> a signal of congestion and hit the brakes by halving the congestion window
> (the amount of data that could be in flight unacknowledged, which roughly
> correlates with the send rate, if averaged over long enough time windows).
> BBR explicitly does not do this unless it really is convinced that someone
> dropped multiple packets purposefully to signal congestion.
> In practice it works rather well, in theory it could do with at
> least an rfc3168 compliant response to ECN marks (which an AQM uses to
> explicitly signal congestion; unlike a drop, an ECN mark is really
> unambiguous, some hop on the way "told" the flow to slow down).
>
>
> >
> > Also,
> >
> > I have UBNT "Amplifi" HD wifi units in my house. (HD units only; none
> > of the "mesh" units. Just HD units connected either via wifi or
> > wired.) Empirically, I've found that in order to reduce latency, I
> > need to set cake to about 1/4 of the total possible wifi speed;
> > otherwise if a large download comes down from my internet link, that
> > flow causes latency.
> >
> > That is, if I'm

Re: [Bloat] the future belongs to pacing

2020-07-04 Thread Sebastian Moeller


> On Jul 4, 2020, at 19:52, Daniel Sterling  wrote:
> 
> On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
>  wrote:
> "pacing is inevitable, because it saves large content providers money
> (more efficient use of the most expensive silicon in the data center,
> the switch buffer memory), however to use pacing we walk away from 30
> years of experience with TCP self clock"
> 
> at the risk of asking w/o doing any research,
> 
> could someone explain this to a lay person or point to a doc talking
> about this more?
> 
> What does BBR do that's different from other algorithms?

Well, it does not believe the network (blindly); that is, currently it 
ignores both ECN marks and (sparse) drops as signs of congestion. Instead it 
uses its own rate estimates to set its send rate and cyclically re-assesses 
its rate estimate. Sufficiently severe drops will be honored. IMHO a somewhat 
risky approach, but it works reasonably well, as sparse drops are often not 
real signs of congestion but just random drops of, say, a wifi link (that 
said, these drops on wifi typically also cause painful latency spikes, as 
wifi often takes heroic measures, attempting retransmission for several 
hundreds of milliseconds).
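
A sketch of that distinction, reacting only to persistent loss rather
than to each sparse drop; the 2% figure echoes BBRv2's published
per-round loss threshold, but treat all the names and numbers here as
illustrative:

LOSS_THRESH = 0.02          # illustrative; BBRv2 documents ~2% per round

def round_trip_congested(delivered_pkts, lost_pkts):
    # Honor loss as a congestion signal only when the per-round loss
    # rate is sustained/severe, not for isolated random drops.
    total = delivered_pkts + lost_pkts
    return total > 0 and lost_pkts / total > LOSS_THRESH

# One wifi drop in 500 packets (0.2%) is ignored; an AQM/droptail
# episode losing 3% of a flight would trigger a rate reduction.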


> Why does it
> break the clock?

One can argue that there is no real clock to break. TCP gates the 
release of new packets on the reception of ACK signals from the receiver; 
this is only a clock if one does not really care about the equi-temporal 
period property of a real clock. But for better or worse, that is the term 
that is used. IMHO (and I really am calling this from way out in left 
field) gating would be a better term, but changing the nomenclature 
probably is not an option at this point.

> Before BBR, was the clock the only way TCP did CC?

No, TCP also interpreted a drop (or rather 3 duplicated ACKs) as a 
signal of congestion and hit the brakes by halving the congestion window 
(the amount of data that could be in flight unacknowledged, which roughly 
correlates with the send rate, if averaged over long enough time windows). 
BBR explicitly does not do this unless it really is convinced that someone 
dropped multiple packets purposefully to signal congestion.
In practice it works rather well; in theory it could do with at least 
an rfc3168-compliant response to ECN marks (which an AQM uses to explicitly 
signal congestion; unlike a drop, an ECN mark is really unambiguous: some 
hop on the way "told" the flow to slow down).


> 
> Also,
> 
> I have UBNT "Amplifi" HD wifi units in my house. (HD units only; none
> of the "mesh" units. Just HD units connected either via wifi or
> wired.) Empirically, I've found that in order to reduce latency, I
> need to set cake to about 1/4 of the total possible wifi speed;
> otherwise if a large download comes down from my internet link, that
> flow causes latency.
> 
> That is, if I'm using 5ghz at 20mhz channel width, I need to set
> cake's bandwidth argument to 40mbits to prevent video streams /
> downloads from impacting latency for any other stream. This is w/o any
> categorization at all; no packet marking based on port or anything
> else; cake set to "best effort".
> 
> Anything higher and when a large amount of data comes thru, something
> (assumedly the buffer in the Amplifi HD units) causes 100s of
> milliseconds of latency.
> 
> Can anyone speak to how BBR would react to this? My ISP is full
> gigabit; but cake is going to drop a lot of packets as it throttles
> that down to 40mbit before it sends the packets to the wifi AP.
> 
> Thanks,
> Dan


Re: [Bloat] the future belongs to pacing

2020-07-04 Thread Jonathan Morton
> On 4 Jul, 2020, at 8:52 pm, Daniel Sterling  wrote:
> 
> could someone explain this to a lay person or point to a doc talking
> about this more?
> 
> What does BBR do that's different from other algorithms? Why does it
> break the clock? Before BBR, was the clock the only way TCP did CC?

Put simply, BBR directly probes for the capacity and baseline latency of the 
path, and picks a send rate (implemented using pacing) and a failsafe cwnd to 
match that.  The bandwidth probe looks at the rate of returning acks, so in 
fact it's still using the ack-clock mechanism, it's just connected much less 
directly to the send rate than before.

Other TCPs can use pacing as well.  In that case the cwnd and RTT estimate are 
calculated in the normal way, and the send rate (for pacing) is calculated from 
those.  It prevents a sudden opening of the receive or congestion windows from 
causing a huge burst which would tend to swamp buffers.
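
A hedged sketch of that calculation: derive the pacing rate from cwnd
and the smoothed RTT, with a gain so the window is spread over an RTT
instead of leaving as one burst. The gain values mirror Linux's
tcp_pacing_ss_ratio/tcp_pacing_ca_ratio defaults, but treat the numbers
as illustrative:

def pacing_rate(cwnd_bytes, srtt_seconds, in_slow_start):
    # Spread roughly one cwnd of data per smoothed RTT; the gain lets
    # the sender grow into newly opened window space without bursting.
    gain = 2.0 if in_slow_start else 1.2    # illustrative defaults
    return gain * cwnd_bytes / srtt_seconds # bytes per second

# Example: cwnd = 100 * 1448 B, srtt = 50 ms, congestion avoidance:
# pacing_rate(144800, 0.050, False) -> ~3.5 MB/s, instead of dumping
# a 145 KB burst whenever the receive or congestion window opens.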

 - Jonathan Morton



Re: [Bloat] the future belongs to pacing

2020-07-04 Thread Daniel Sterling
On Sat, Jul 4, 2020 at 1:29 PM Matt Mathis via Bloat
 wrote:
"pacing is inevitable, because it saves large content providers money
(more efficient use of the most expensive silicon in the data center,
the switch buffer memory), however to use pacing we walk away from 30
years of experience with TCP self clock"

at the risk of asking w/o doing any research,

could someone explain this to a lay person or point to a doc talking
about this more?

What does BBR do that's different from other algorithms? Why does it
break the clock? Before BBR, was the clock the only way TCP did CC?

Also,

I have UBNT "Amplifi" HD wifi units in my house. (HD units only; none
of the "mesh" units. Just HD units connected either via wifi or
wired.) Empirically, I've found that in order to reduce latency, I
need to set cake to about 1/4 of the total possible wifi speed;
otherwise if a large download comes down from my internet link, that
flow causes latency.

That is, if I'm using 5ghz at 20mhz channel width, I need to set
cake's bandwidth argument to 40mbits to prevent video streams /
downloads from impacting latency for any other stream. This is w/o any
categorization at all; no packet marking based on port or anything
else; cake set to "best effort".

Anything higher and when a large amount of data comes thru, something
(assumedly the buffer in the Amplifi HD units) causes 100s of
milliseconds of latency.

Can anyone speak to how BBR would react to this? My ISP is full
gigabit; but cake is going to drop a lot of packets as it throttles
that down to 40mbit before it sends the packets to the wifi AP.

Thanks,
Dan


Re: [Bloat] the future belongs to pacing

2020-07-04 Thread Matt Mathis via Bloat
Be aware that BBR is a moving target.   There was an important WiFi fix
that went into BBRv1 in Jan 2019 that didn't make it into lots of
distros...   BBRv2 is in the wings and fixes (nearly?) all sharing issues,
but isn't done yet.

Key takeaway: pacing is inevitable, because it saves large content
providers money (more efficient use of the most expensive silicon in the
data center, the switch buffer memory), however to use pacing we walk away
from 30 years of experience with TCP self clock, which is the foundation of
all of our CC research

Thanks,
--MM--
The best way to predict the future is to create it.  - Alan Kay

We must not tolerate intolerance;
   however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of
control;
too weak risks being mistaken for tacit approval.


On Fri, Dec 13, 2019 at 1:25 PM Dave Taht  wrote:

> and everything we know about the TCP macroscopic model is obsolete,
> according to a provocative paper by Matt Mathis and Jamshid Mahdavi
> in SIGCOMM.
>
> https://ccronline.sigcomm.org/wp-content/uploads/2019/10/acmdl19-323.pdf
>
>
>
>
> On Fri, Dec 13, 2019 at 1:05 PM Carlo Augusto Grazia
>  wrote:
> >
> > Hi Dave,
> > thank you for your email!
> > Toke told me about AQL a couple of weeks ago, I definitely want to test
> it ASAP.
> > BBR struggles a lot on Wi-Fi interfaces (ones with aggregation) with
> kernel 4.14 & 4.19.
> > Anyway, it seems that with BBRv2 on new kernels this problem does not
> exist anymore.
> >
> > Best regards
> > Carlo
> >
> > Il giorno ven 13 dic 2019 alle 20:54 Dave Taht  ha
> scritto:
> >>
> >> https://sci-hub.tw/10.1109/WiMOB.2019.8923418
> >>
> >> It predates the aql work, but the bbr result is puzzling.
> >>
> >>
> >> --
> >> Make Music, Not War
> >>
> >> Dave Täht
> >> CTO, TekLibre, LLC
> >> http://www.teklibre.com
> >> Tel: 1-831-435-0729
> >
> > --
> > 
> > Carlo Augusto Grazia, Ph. D.
> > Assistant Professor
> > 
> > Dept. of Engineering "Enzo Ferrari"
> > University of Modena and Reggio Emilia
> > Via Pietro Vivarelli, 10/1 - 41125 - Modena - Italy
> > Building 26, floor 2, room 28
> > Tel.: +39-059-2056323
> > email: carloaugusto.gra...@unimore.it
> > Link to my personal home page here
> > 
>
>
>
> --
> Make Music, Not War
>
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-435-0729