Probably there should be one from me - but I need to have built and tried
it myself first, to be sure. There's still some new code I haven't gone
over in detail yet.
- Jonathan Morton
___
Cake mailing list
Cake@lists.bufferbloat.net
And git bisect gave:
user@work-horse:~/CODE/sch_cake$ git bisect bad
031998e4eee58cbc706711eba8c54684f07306be is the first bad commit
commit 031998e4eee58cbc706711eba8c54684f07306be
Author: Dave Taht
Date: Sun Nov 19 19:02:06 2017 -0800
sch_cake: make more checkpatch
On Thu, Nov 23, 2017 at 9:44 AM, Kevin Darbyshire-Bryant wrote:
>
>
>> On 23 Nov 2017, at 17:07, Dave Taht wrote:
>>
>> Kevin Darbyshire-Bryant writes:
>>>
>>> Just did a PR for turning max_skblen (or whatever it is)
> On 23 Nov 2017, at 17:07, Dave Taht wrote:
>
> Kevin Darbyshire-Bryant writes:
>>
>> Just did a PR for turning max_skblen (or whatever it is) to a u32… since
>> there *are* super packets out there >64KB.
>
> There are?
There are! Well there
Kevin Darbyshire-Bryant writes:
>> On 21 Nov 2017, at 21:59, Dave Taht wrote:
>>
>>
>> You will want to pull and rebase on top of this.
>>
>> Are there any other patches still lying out of tree worth considering?
>
> Just did a PR for turning
Pete Heist writes:
> On Nov 23, 2017, at 10:44 AM, Jonathan Morton wrote:
>
> This is most likely an interaction of the AQM with Linux' scheduling
> latency.
>
> At the 'lan' setting, the time constants are similar in magnitude
Sebastian Moeller writes:
>> On Nov 23, 2017, at 17:21, Dave Taht wrote:
>>
>> On Thu, Nov 23, 2017 at 1:36 AM, Pete Heist wrote:
>>>
On Nov 23, 2017, at 10:30 AM, Pete Heist wrote:
Thanks for
> On Nov 23, 2017, at 17:21, Dave Taht wrote:
>
> On Thu, Nov 23, 2017 at 1:36 AM, Pete Heist wrote:
>>
>>> On Nov 23, 2017, at 10:30 AM, Pete Heist wrote:
>>>
>>> Thanks for the overhead info. I used that in my latest tests.
I'm going to take a break from cleaning up the cobalt branch till Sunday
before putting together a "final" patch for net-next (which should open
up by Sunday).
If anyone else would like to tackle:
* getting ack_drop into the drop statistics
* updating the man page
* figuring out hard_header_len
On Thu, Nov 23, 2017 at 1:36 AM, Pete Heist wrote:
>
>> On Nov 23, 2017, at 10:30 AM, Pete Heist wrote:
>>
>> Thanks for the overhead info. I used that in my latest tests. That makes me
>> wonder if those overheads could be defaulted when Cake knows
> On 21 Nov 2017, at 21:59, Dave Taht wrote:
>
>
> You will want to pull and rebase on top of this.
>
> Are there any other patches still lying out of tree worth considering?
Just did a PR for turning max_skblen (or whatever it is) to a u32… since there
*are* super packets
> On Nov 23, 2017, at 10:44 AM, Jonathan Morton wrote:
> This is most likely an interaction of the AQM with Linux' scheduling latency.
>
> At the 'lan' setting, the time constants are similar in magnitude to the
> delays induced by Linux itself, so congestion might be
Hi Pete,
> On Nov 23, 2017, at 10:30, Pete Heist wrote:
>
>
>> On Nov 23, 2017, at 9:00 AM, Sebastian Moeller wrote:
>>
>> Hi Pete,
>>
>> I should have mentioned "overhead 64 mpu 84" only make sense in
>> combination with a shaper limit (well,
This is most likely an interaction of the AQM with Linux' scheduling
latency.
At the 'lan' setting, the time constants are similar in magnitude to the
delays induced by Linux itself, so congestion might be signalled
prematurely. The flows will then become sparse and total throughput
reduced,
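For concreteness, the 'lan' keyword selects much shorter AQM time constants than the default 'internet' profile (which targets roughly a 100 ms path RTT); cake also accepts an explicit rtt parameter. A sketch with a placeholder device and rate:

```shell
# Placeholder device/rate. 'lan' selects short AQM time constants;
# the default profile corresponds to 'internet' (~100 ms RTT).
tc qdisc replace dev eth0 root cake bandwidth 100Mbit lan

# The time constant can also be chosen directly:
tc qdisc replace dev eth0 root cake bandwidth 100Mbit rtt 10ms
```

When the chosen time constant approaches the host's own scheduling jitter, as described above, the AQM can mistake that jitter for queue buildup.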
> On Nov 23, 2017, at 10:30 AM, Pete Heist wrote:
>
> Thanks for the overhead info. I used that in my latest tests. That makes me
> wonder if those overheads could be defaulted when Cake knows Ethernet is
> being used with rate limiting? I know a goal is to make cake
> On Nov 23, 2017, at 9:00 AM, Sebastian Moeller wrote:
>
> Hi Pete,
>
> I should have mentioned "overhead 64 mpu 84" only make sense in
> combination with a shaper limit (well, they will make sure the cake
> statistics will be more reflective of what is happening on
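Sebastian's point in concrete form: overhead and mpu influence behaviour only when cake is itself shaping via a bandwidth limit; without one they mainly affect the reported statistics. A sketch using the values from the thread (device and rate are placeholders):

```shell
# Hypothetical device and rate; overhead 64 / mpu 84 are the values
# discussed above. These keywords change how cake accounts per-packet
# on-wire size, which matters when cake enforces the rate itself.
tc qdisc replace dev eth0 root cake bandwidth 95Mbit overhead 64 mpu 84
```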
It seems that the ‘lan’ keyword (and probably other lower rtt settings in
general) may adversely impact host fairness in some cases. Is this to be
expected? I set up a fairness test with rrul_be_nflows where one client has 2/2
TCP flows and the other has 8/8 TCP flows, then ran five tests:
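A setup like this can be driven with flent along the following lines (hedged: the test-parameter names are assumed from flent's rrul_be_nflows test definition, and the host names are placeholders):

```shell
# Client A: 2 up / 2 down TCP flows (server name is a placeholder).
flent rrul_be_nflows -H netperf-server -l 60 \
      --test-parameter upload_streams=2 \
      --test-parameter download_streams=2 \
      -t "client-a-2flows"

# Client B: 8 up / 8 down TCP flows, started concurrently elsewhere.
flent rrul_be_nflows -H netperf-server -l 60 \
      --test-parameter upload_streams=8 \
      --test-parameter download_streams=8 \
      -t "client-b-8flows"
```

With per-host fairness working, the two clients should split the link roughly evenly despite the 2-vs-8 flow imbalance.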
Hi Pete,
> On Nov 22, 2017, at 19:43, Pete Heist wrote:
>
>
>> On Nov 22, 2017, at 7:33 PM, Dave Taht wrote:
>>
>> On Wed, Nov 22, 2017 at 4:37 AM, Pete Heist wrote:
>>>
>>> Ok, at least a little crude testing with sar:
>>>
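For reference, a typical crude CPU check with sar (from the sysstat package) while a test runs; the interval and count here are arbitrary:

```shell
# Sample overall CPU usage once a second for 60 s during the test;
# %soft (softirq time) is the column of interest for qdisc overhead.
sar -u ALL 1 60

# Per-CPU breakdown, useful for spotting a single saturated core:
sar -P ALL 1 60
```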