>    http://storage.sesse.net/bbr.pcap -- ssh+tar+gnupg

I agree with Eric that for the ssh+tar+gnupg case the ACK stream looks
like the culprit. After about 1 second, the ACKs suddenly become very
stretched and very delayed (often by more than 100 ms). See the
attached screenshots.

I like Eric's theory that the ACKs might be going through fq,
particularly since the uplink data starts having issues around the
same time as the ACKs for the downlink data.
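
One quick way to check that theory on the receiver (a sketch, not from
the capture; the interface name eth0 is an assumption) is to look at
which qdisc is attached and whether fq's throttling counters are moving:

```shell
# Show the qdisc attached to the interface (substitute the real NIC
# name for eth0). If "fq" appears at the root, outgoing ACKs go
# through its flow scheduling and pacing.
tc -s qdisc show dev eth0

# In the fq statistics, a growing "throttled" counter means packets
# (potentially including pure ACKs) are being held back by pacing.

# As an experiment, temporarily replace fq with pfifo_fast and see
# whether the stretched/delayed ACKs go away (requires root):
# tc qdisc replace dev eth0 root pfifo_fast
```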

neal

On Sat, Dec 3, 2016 at 6:24 PM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Sat, 2016-12-03 at 15:15 -0800, Eric Dumazet wrote:
>> On Sun, 2016-12-04 at 00:03 +0100, Steinar H. Gunderson wrote:
>> > On Sat, Dec 03, 2016 at 02:55:37PM -0800, Eric Dumazet wrote:
>> > > Note that starting from linux-4.4, e1000e gets gro_flush_timeout that
>> > > would help this precise workload
>> > >
>> > > https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=32b3e08fff60494cd1d281a39b51583edfd2b18f
>> > >
>> > > Maybe you can redo the experiment in ~5 years when distro catches up ;)
>> >
>> > I can always find a backport, assuming the IPMI is still working. But it
>> > really wasn't like this earlier :-) Perhaps something changed on the path,
>> > unrelated to BBR.
>>
>> If the tcpdump is taken on the receiver, how would you explain these
>> gaps? Like the NIC was frozen for 40 ms!
>
> Wait a minute. If you use fq on the receiver, then maybe your old debian
> kernel did not backport:
>
> https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=9878196578286c5ed494778ada01da094377a686
>
>
>
_______________________________________________
Bloat mailing list
Bloat@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/bloat