I'll hit it from Comcast's ~150Mbps service on the peninsula when I get
home today (with and without sqm)
-Aaron
On Mon, Sep 26, 2016 at 2:38 PM, Dave Taht wrote:
> I just put a netperf server up in Linode's Fremont, CA, cloud (kvm
> paravirtualized hardware), with sch_fq enabled, ecn disabled, and bbr
> as the default cc. (reno and cubic are also allowed)
I just put a netperf server up in Linode's Fremont, CA, cloud (kvm
paravirtualized hardware), with sch_fq enabled, ecn disabled, and bbr
as the default cc. (reno and cubic are also allowed)
I am curious whether y'all can hit it with your rrul, tcp_ndown, and
rtt_fair_var_down tests (vs flent-freemont in the same dc).
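For anyone reproducing this setup, a rough sketch of the pieces involved — the sysctl names are the standard Linux knobs for this, but the flent flags and the SERVER placeholder are my own illustration, not Dave's actual invocation:

```shell
# server side: fq qdisc for pacing, bbr as the default congestion control
# (needs a kernel with the tcp_bbr module; run as root)
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# client side: the requested flent tests, 60s each, plotted to png
# (replace SERVER with the hostname of the Fremont instance)
flent rrul -H SERVER -l 60 -o rrul.png
flent tcp_ndown -H SERVER -l 60 -o tcp_ndown.png
flent rtt_fair_var_down -H SERVER -l 60 -o rtt_fair_var_down.png
```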
Thanks! And sorry that I missed the sample code in the patch.
On Mon, Sep 26, 2016 at 12:30 Neal Cardwell wrote:
> On Mon, Sep 26, 2016 at 2:47 PM, Aaron Wood wrote:
> > Dumb question on this: The tcp_bbr_info struct for a socket can be
> > inspected at runtime through the ss utility or through a getsockopt()
> > call, right?
On Mon, Sep 26, 2016 at 2:47 PM, Aaron Wood wrote:
> Dumb question on this: The tcp_bbr_info struct for a socket can be
> inspected at runtime through the ss utility or through a getsockopt()
> call, right?
Yes, you can use either approach:
(1) from code you can use TCP_CC_INFO socket option
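A minimal sketch of approach (1), assuming the five-__u32 layout of struct tcp_bbr_info from linux/inet_diag.h; the field meanings and units (bytes/sec for bw, microseconds for min_rtt, gains in fixed point scaled by 256) are my reading of tcp_bbr.c's get_info hook, not something confirmed in this thread:

```python
import socket
import struct

TCP_CC_INFO = 26  # getsockopt option number from linux/tcp.h (Linux 4.1+)

# struct tcp_bbr_info (linux/inet_diag.h): five __u32 fields
#   bbr_bw_lo / bbr_bw_hi -- low/high halves of the 64-bit bandwidth estimate
#   bbr_min_rtt           -- windowed-min RTT, microseconds
#   bbr_pacing_gain       -- pacing gain, fixed point scaled by 256
#   bbr_cwnd_gain         -- cwnd gain, fixed point scaled by 256
_BBR_INFO = struct.Struct("<5I")

def parse_bbr_info(raw: bytes) -> dict:
    bw_lo, bw_hi, min_rtt, pacing_gain, cwnd_gain = \
        _BBR_INFO.unpack(raw[:_BBR_INFO.size])
    return {
        "bw_bytes_per_sec": (bw_hi << 32) | bw_lo,
        "min_rtt_us": min_rtt,
        "pacing_gain": pacing_gain / 256.0,
        "cwnd_gain": cwnd_gain / 256.0,
    }

def read_bbr_info(sock: socket.socket) -> dict:
    """Query TCP_CC_INFO on a connected socket whose cc is bbr."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_CC_INFO, _BBR_INFO.size)
    return parse_bbr_info(raw)
```

ss -ti decodes the same struct into its bbr:(bw:...,mrtt:...,pacing_gain:...,cwnd_gain:...) line, so the two approaches should agree.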
Dumb question on this: The tcp_bbr_info struct for a socket can be
inspected at runtime through the ss utility or through a getsockopt()
call, right?
-Aaron
On Sat, Sep 17, 2016 at 11:34 AM, Maciej Soltysiak wrote:
> Hi,
>
> Just saw this: https://patchwork.ozlabs.org/patch/671069/
>
> Interested to see how BBR would play out with things like fq_codel or
> cake.
On Wed, 21 Sep 2016, Dave Taht wrote:
* It seriously outcompetes cubic, particularly on the single-queue aqms.
fq_codel is fine. I need to take apart the captures to see how well it
is behaving in this case. My general hope was that with fq in place,
anything that was delay-based would work better.
On Wed, 21 Sep 2016, Dave Taht wrote:
> I did a fairly comprehensive string of tests today, comparing it at
> 20Mbits, 48ms RTT, to cubic and competing with cubic, against a byte
> fifo of 256k, pie, cake, cake flowblind, and fq_codel.
20 megabit/s is 2.5 megabyte/s, so that 256k FIFO is only 100ms
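That back-of-the-envelope generalizes: the worst-case delay a byte FIFO adds is its size divided by the drain rate. A quick sketch for checking other buffer/rate combinations (my own illustration):

```python
def fifo_delay_ms(buffer_bytes: int, link_bits_per_sec: float) -> float:
    """Worst-case standing-queue delay of a full byte FIFO at a given rate."""
    bytes_per_sec = link_bits_per_sec / 8.0
    return buffer_bytes / bytes_per_sec * 1000.0

# 256 KB at 20 Mbit/s: 20 Mbit/s = 2.5 MB/s, so the FIFO drains in ~100 ms
print(round(fifo_delay_ms(256 * 1024, 20e6), 1))  # ≈ 104.9 ms
```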
On Wed, 21 Sep 2016, Alan Jenkins wrote:
That assumes the measured maximum bandwidth (over an interval of 10*rtt)
remains constant. (Say there were 100 BBR flows, then you added one
CUBIC flow to bloat the buffer). I don't have a good intuition for how
the bandwidth estimation behaves in general.
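As I read the patch, the bandwidth estimate is a windowed maximum of recent delivery-rate samples (the window being on the order of 10 RTTs), so after a peak it holds that value until the peak sample ages out rather than decaying smoothly. A toy windowed-max filter to build intuition — my own illustration, not the kernel's actual minmax code:

```python
from collections import deque

class WindowedMax:
    """Maximum of the samples seen in the last `window` time units.

    Toy model of BBR-style bandwidth filtering: the estimate stays at
    the old peak until that sample falls out of the window.
    """
    def __init__(self, window: float):
        self.window = window
        self.samples = deque()  # (time, value), values kept decreasing

    def update(self, t: float, value: float) -> float:
        # Drop samples dominated by the new one; they can never be the max.
        while self.samples and self.samples[-1][1] <= value:
            self.samples.pop()
        self.samples.append((t, value))
        # Expire samples older than the window.
        while self.samples and self.samples[0][0] <= t - self.window:
            self.samples.popleft()
        return self.samples[0][1]

bw = WindowedMax(window=10)          # window in "RTTs"
bw.update(0, 100); bw.update(1, 80)  # peak of 100 at t=0
print(bw.update(5, 60))   # 100: the old peak is still inside the window
print(bw.update(11, 60))  # 60: the t=0 peak has aged out
```

The open question above is really about what delivery-rate samples the filter gets fed while a competing flow keeps the buffer full; the filter itself only reports the max of whatever it sees.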
On Wed, Sep 21, 2016 at 3:15 AM, Mikael Abrahamsson wrote:
> On Wed, 21 Sep 2016, Dave Taht wrote:
>
>> I dunno, I'm just reading tea leaves here!
>>
>> can't wait for the paper!
>
>
> +1.
>
> I would like to understand how BBR interacts with a window-fully-open
> classic TCP session and FIFO induced delay that is in steady-state before
> the BBR session starts.
On Wed, 21 Sep 2016, Dave Taht wrote:
> I dunno, I'm just reading tea leaves here!
> can't wait for the paper!
+1.
I would like to understand how BBR interacts with a window-fully-open
classic TCP session and FIFO induced delay that is in steady-state before
the BBR session starts.
So let's
On 21/09/16 10:39, Dave Taht wrote:
On Wed, Sep 21, 2016 at 2:06 AM, Alan Jenkins wrote:
are we sure - that's a fairly different algorithm and different expansion of
the acronym...
Yes it is very different,
My cryptic conclusion was that a common group of names was involved in both
projects (in
On Wed, Sep 21, 2016 at 2:06 AM, Alan Jenkins wrote:
> On 17/09/16 19:53, Dave Taht wrote:
>>
>> BBR is pretty awesome, and it's one of the reasons why I stopped
>> sweating inbound rate limiting + fq_codel as much as I used to. I have
>> a blog entry pending on this but wasn't expecting the code to be
>> released before the paper was...
On 17/09/16 19:53, Dave Taht wrote:
BBR is pretty awesome, and it's one of the reasons why I stopped
sweating inbound rate limiting + fq_codel as much as I used to. I have
a blog entry pending on this but wasn't expecting the code to be
released before the paper was... and all I had to go on til yesterday
was Nowlan's dissertation:
> On 17 Sep, 2016, at 21:34, Maciej Soltysiak wrote:
>
> Cake and fq_codel work on all packets and aim to signal packet loss early to
> network stacks by dropping; BBR works on TCP and aims to prevent packet loss.
By dropping, *or* by ECN marking. The latter avoids packet loss.
- Jonathan
BBR is pretty awesome, and it's one of the reasons why I stopped
sweating inbound rate limiting + fq_codel as much as I used to. I have
a blog entry pending on this but wasn't expecting the code to be
released before the paper was... and all I had to go on til yesterday
was Nowlan's dissertation:
Hi,
Just saw this: https://patchwork.ozlabs.org/patch/671069/
Interested to see how BBR would play out with things like fq_codel or cake.
"loss-based congestion control is unfortunately out-dated in today's
networks. On today's Internet, loss-based congestion control causes the
infamous bufferbloat