December 08, 2016 5:31 PM
To: Neal Cardwell <ncardw...@google.com>; Mikael Abrahamsson <swm...@swm.pp.se>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] TCP BBR paper is now generally available
Also, we are aware DOCSIS PIE is going to be deployed, and we'll specifically
test that scenario. With fq this issue is a lot smaller, but we understand
it is not the preferred setting in some AQMs, for other good reasons.
But to set expectations right: we are not going to make BBR perfectly
flow level
Hi Mikael,
Thanks for your questions. Yes, we do care about how BBR behaves in mixed
environments, and particularly in mixed environments with Reno and CUBIC.
And we are actively working in this and related areas.
For the ACM Queue article we faced very hard and tight word count
constraints, so u
On Thu, 8 Dec 2016, Dave Täht wrote:
drop tail works better than any single queue aqm in this scenario.
*confused*
I see nothing in the BBR paper about how it interoperates with other
TCP algorithms. Your text above didn't help me at all.
How is BBR going to be deployed? Is nobody interest
drop tail works better than any single queue aqm in this scenario.
On 12/8/16 12:24 AM, Mikael Abrahamsson wrote:
> On Fri, 2 Dec 2016, Dave Taht wrote:
>
>> http://queue.acm.org/detail.cfm?id=3022184
>
> "BBR converges toward a fair share of the bottleneck bandwidth whether
> competing with ot
On Fri, 2 Dec 2016, Dave Taht wrote:
http://queue.acm.org/detail.cfm?id=3022184
"BBR converges toward a fair share of the bottleneck bandwidth whether
competing with other BBR flows or with loss-based congestion control."
That's not what I took away from your tests of having BBR and Cubic f
On 07/12/2016, Steinar H. Gunderson wrote:
> On Wed, Dec 07, 2016 at 04:28:15PM +, Alan Jenkins wrote:
>> Since no-one's explicitly mentioned this: be aware that SSH is known for
>> doing application-level windowing, limiting performance.
>>
>> E.g. see https://www.psc.edu/index.php/hpn-ssh/63
On Wed, Dec 07, 2016 at 04:28:15PM +, Alan Jenkins wrote:
> Since no-one's explicitly mentioned this: be aware that SSH is known for
> doing application-level windowing, limiting performance.
>
> E.g. see https://www.psc.edu/index.php/hpn-ssh/638
Hm, I thought this was mainly about scp, not s
On 03/12/16 19:13, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
>>> I have one thing that I _wonder_ if could be BBR's fault: I run
>>> backup over SSH. (That would be tar + gzip + ssh.) The first full
>>> backup after I rolled out BBR on the server
>http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg
I agree with Eric that for the ssh+tar+gnupg case the ACK stream seems
like the culprit here. After about 1 second, the ACKs are suddenly
very stretched and very delayed (often more than 100ms). See the
attached screen shots.
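For anyone who wants to reproduce this kind of ACK-timing analysis on the posted capture, a sketch using tshark (the field names are standard Wireshark dissector fields; the filename is the capture from earlier in the thread):

```shell
# Print the inter-arrival time and ACK number of each pure ACK in the
# capture, to spot the stretched/delayed ACKs described above (gaps of
# more than 100 ms between consecutive ACKs).
tshark -r bbr.pcap \
  -Y 'tcp.flags.ack == 1 && tcp.len == 0' \
  -T fields -e frame.time_delta_displayed -e tcp.ack
```

Sorting the first column, or eyeballing it for sudden jumps around the one-second mark, should show the same pattern as the screen shots.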
I like Eric's t
On Tue, Dec 6, 2016 at 12:20 PM, Steinar H. Gunderson <
sgunder...@bigfoot.com> wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> > Wait a minute. If you use fq on the receiver, then maybe your old debian
> > kernel did not backport :
> >
> > https://git.kernel.org/cgit/linu
On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport :
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
I upgraded to 4.7
On Sun, Dec 04, 2016 at 09:13:19AM -0800, Eric Dumazet wrote:
> You could turn off pacing , and keep fq.
>
> tc qdisc change dev eth0 root fq nopacing
I don't really care about fair queueing except for pacing :-) But I'll try
upgrading the kernel at some point. The results in turning off fq were
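For context, a sketch of the sender-side knobs being discussed in this subthread (the device name eth0 is an assumption; in 4.9, BBR relies on the fq qdisc to enforce its pacing rate):

```shell
# Select BBR as the congestion control (Linux 4.9+).
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Attach fq at the root so BBR's pacing rate is actually enforced.
tc qdisc replace dev eth0 root fq

# Eric's suggestion: keep fq's flow queueing but disable pacing.
tc qdisc change dev eth0 root fq nopacing

# Inspect qdisc statistics; a growing "throttled" counter indicates
# pacing is active.
tc -s qdisc show dev eth0
```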
On Sun, 2016-12-04 at 09:44 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> > Wait a minute. If you use fq on the receiver, then maybe your old debian
> > kernel did not backport :
> >
> > https://git.kernel.org/cgit/linux/kernel/git/davem/net.g
On Sat, Dec 03, 2016 at 03:24:28PM -0800, Eric Dumazet wrote:
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport :
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
I checked, and it
On Sat, 2016-12-03 at 15:15 -0800, Eric Dumazet wrote:
> On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> > On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > > would help this precise workloa
On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
> >
> > https://git.kernel.org/cgit/linux/kernel/git/davem/n
On Sat, 2016-12-03 at 15:02 -0800, Eric Dumazet wrote:
> On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
>
> > Perfect.
> >
> > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> > would help this precise workload
>
> Also it appears the sender uses a lot of relativel
On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
> Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> would help this precise workload
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
>
> Maybe you
On Sat, 2016-12-03 at 14:55 -0800, Eric Dumazet wrote:
> Perfect.
>
> Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
> would help this precise workload
Also it appears the sender uses a lot of relatively small segments (8220
bytes at a time), with PSH, so GRO won't be able
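The gro_flush_timeout knob Eric refers to is exposed per network device in sysfs, in nanoseconds (the device name and timeout value below are illustrative assumptions, not from the thread):

```shell
# Hold off NAPI GRO flushing for up to 20 microseconds, giving small
# back-to-back segments a chance to be merged into larger ones.
# Requires driver support (added for e1000e in the commit linked
# above); writing 0 restores the default immediate flush.
echo 20000 > /sys/class/net/eth0/gro_flush_timeout
```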
On Sat, 2016-12-03 at 23:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:50:37PM -0800, Eric Dumazet wrote:
> >> Note, the tcpdump is done at the receiver. I don't know if this changes the
> >> analysis.
> > If you have access to the receiver, I would be interested to know
> > NI
On Sat, Dec 03, 2016 at 01:50:37PM -0800, Eric Dumazet wrote:
>> Note, the tcpdump is done at the receiver. I don't know if this changes the
>> analysis.
> If you have access to the receiver, I would be interested to know
> NIC/driver used there ?
root@blackhole:~# lspci | grep Ethernet
01:00.0 E
On Sat, 2016-12-03 at 22:34 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 01:07:40PM -0800, Eric Dumazet wrote:
> > What I meant was that we receive ACKS in bursts, with huge gaps between
> > them.
>
> Note, the tcpdump is done at the receiver. I don't know if this changes the
> ana
> On 3 Dec, 2016, at 23:07, Eric Dumazet wrote:
>
> I do not attend IETF meetings so maybe my words are not exact.
>
> What I meant was that we receive ACKS in bursts, with huge gaps between
> them.
>
> Look at tsval/tsecr, and that all ACKS are received in a 20 usec time
> window, covering da
On Sat, Dec 03, 2016 at 01:07:40PM -0800, Eric Dumazet wrote:
> What I meant was that we receive ACKS in bursts, with huge gaps between
> them.
Note, the tcpdump is done at the receiver. I don't know if this changes the
analysis.
/* Steinar */
--
Homepage: https://www.sesse.net/
On Sat, Dec 03, 2016 at 12:20:15PM -0800, Eric Dumazet wrote:
> Just to be clear,what is the kernel version at the sender ?
4.9.0-rc2.
On Sat, 2016-12-03 at 22:26 +0200, Jonathan Morton wrote:
> > On 3 Dec, 2016, at 22:20, Eric Dumazet wrote:
> >
> > Huge ACK decimation it seems.
>
> That extract does not show ACK decimation. It shows either jumbo
> frames or offload aggregation in a send burst, and ordinary
> delayed-acks eac
> On 3 Dec, 2016, at 22:20, Eric Dumazet wrote:
>
> Huge ACK decimation it seems.
That extract does not show ACK decimation. It shows either jumbo frames or
offload aggregation in a send burst, and ordinary delayed-acks each covering at
most two packets received. Nothing particularly weird
On Sat, 2016-12-03 at 20:13 +0100, Steinar H. Gunderson wrote:
> On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
> > Thanks for the report, Steinar. This is the first report we've had
> > like this, but it would be interesting to find out what's going on.
> >
> > Even if you don't h
On Sat, Dec 03, 2016 at 08:03:50AM -0500, Neal Cardwell wrote:
> Thanks for the report, Steinar. This is the first report we've had
> like this, but it would be interesting to find out what's going on.
>
> Even if you don't have time to apply the patches Eric mentions, it
> would be hugely useful
Thanks for the report, Steinar. This is the first report we've had
like this, but it would be interesting to find out what's going on.
Even if you don't have time to apply the patches Eric mentions, it
would be hugely useful if the next time you have a slow transfer like
that you could post a link
On Fri, 2016-12-02 at 23:40 +0100, Steinar H. Gunderson wrote:
> On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
> > Of course, if we find important use cases that don't work with BBR, we will
> > see what we can do to make BBR work well with them.
>
> I have one thing that I _wonde
On Fri, Dec 02, 2016 at 05:22:23PM -0500, Neal Cardwell wrote:
> Of course, if we find important use cases that don't work with BBR, we will
> see what we can do to make BBR work well with them.
I have one thing that I _wonder_ if could be BBR's fault: I run backup over
SSH. (That would be tar + g
On Fri, Dec 2, 2016 at 3:32 PM, Jonathan Morton
wrote:
>
> > On 2 Dec, 2016, at 21:15, Aaron Wood wrote:
> >
> > So, how is this likely to be playing with our qos_scripts and with cake?
>
> Cake’s deficit-mode shaper behaves fairly closely like an ideal
> constant-throughput link, which is what
> On 2 Dec, 2016, at 21:15, Aaron Wood wrote:
>
> So, how is this likely to be playing with our qos_scripts and with cake?
Cake’s deficit-mode shaper behaves fairly closely like an ideal
constant-throughput link, which is what BBR is supposedly designed for. I
haven’t read that far in the pa
This is really fascinating reading.
The following made me stop for a second, though:
"The bucket is typically full at connection startup so BBR learns the
underlying network's BtlBw, but once the bucket empties, all packets sent
faster than the (much lower than BtlBw) bucket fill rate are dropped
http://queue.acm.org/detail.cfm?id=3022184
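For anyone who wants to observe that bucket behavior on a testbed, a token-bucket bottleneck can be approximated with tc's tbf qdisc (all rates and sizes below are illustrative assumptions, not figures from the article):

```shell
# Emulate a token-bucket bottleneck: a 32 kbit bucket refilled at
# 1 Mbit/s. A connection can burst while the bucket is full; once it
# empties, sustained sending above the fill rate is held back, which
# is the regime the quoted passage describes.
tc qdisc replace dev eth0 root tbf rate 1mbit burst 32kbit latency 50ms
```

Note that tbf shapes (queues) rather than polices; a true policer, as in the article, drops the excess instead, which `tc filter ... action police` on ingress gets closer to.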
--
Dave Täht
Let's go make home routers and wifi faster! With better software!
http://blog.cerowrt.org
___
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat